
Archive for the ‘NetApp’ Category

Why enterprise-class traditional NAS products will remain

January 7, 2013

I've commented before that "Unified Storage" is no longer the differentiator it once was, given that virtually every major storage vendor now offers a "Unified" product.  Previously, only EMC and NetApp offered a true Unified storage solution.  Now IBM has built SONAS into the V7000, HP is building IBRIX into its products, HDS has released the HUS platform that leverages BlueArc NAS, and Dell is integrating Exanet into its products.


However, it's important to note that not all Unified storage products are the same.  Just because a manufacturer can "check the box" on a spec sheet that they have NAS doesn't mean every NAS product works the same way.  On a related note, now that EMC has acquired Isilon, which many perceive to be a superior product to Celerra, rumors are constantly circulating about when VNX File will be replaced with Isilon code on the VNX series.

I’m here to tell you that:

  • EMC and NetApp are still best equipped to fulfill the needs of traditional enterprise NAS use cases compared to any other vendor.
  • I don’t believe Isilon will replace VNX File (Celerra) anytime soon.
  • While Isilon, SONAS, IBRIX, etc. are superior for scale-out use cases, the same is not true for traditional enterprise NAS requirements.

Why is this the case?  First, let me clarify: when I say traditional enterprise NAS requirements, I'm talking about large enterprises, as in tens of thousands of users.  For a smaller shop, these don't apply.  Here are some sample requirements:

  • Support for hundreds of file systems and/or mountpoints (much different than the big-data use case people talk of today involving a single file system that scales to petabytes)
    • Large enterprises have dozens if not hundreds of legacy file servers.  Wouldn't it be great to consolidate these or virtualize them behind some file gateway?  Sure!  Is it realistic in a huge environment with thousands of custom applications that have hard-coded UNC paths to these locations, and with the immense user disruption and re-education involved?  Not really.
  • Robust NDMP support
    • Large enterprises may be using advanced NDMP features such as volume-based backups and checkpoint/snapshot-based NDMP backups.  Do all scale-out NAS offerings support these?  To be honest, I don't know, but I'd be surprised.
  • Number of CIFS sessions
    • Handling 20,000 users logging in each morning, authenticating against AD, downloading user/group SIDs for each account, and creating the drive maps that may be part of each user's login script is a unique requirement in its own right.  It's very intensive, but not in the "scale-out" processing sense.  Opening all of these CIFS user sessions, maintaining them, and potentially failing them over is not what scale-out NAS was designed for.
  • Multiple CIFS servers
    • Same point as above under multiple file systems.  It’s not necessarily so simple for an organization to consolidate tens or hundreds of file servers down to one name.
  • Multi-protocol support
    • Scale-out NAS was not designed for corporations that have invested heavily in making their traditional NAS boxes work with advanced multi-protocol functionality, with complex mappings between Windows AD and Unix NIS/LDAP that let users access the same data from both sides with security intact.
  • Snapshots
    • Most scale-out NAS boxes offer snapshots, but make sure they integrate with the Windows Shadow Copy (Previous Versions) client, as most large organizations let their users or helpdesk perform their own file restores.
  • Advanced CIFS functions
    • Access Based Enumeration – hides files and folders from users who don't have permissions to them.
    • BranchCache – improves performance at remote offices by caching file content locally.
    • Robust AD integration and multi-domain support (including legacy domains)
  • Migration from legacy file servers with lots of permission/SID issues.
    • If you're migrating a large file server that dates back to the stone age (NT) to a NAS, it most likely has a lot of unresolvable SIDs hidden deep in its ACLs for one reason or another.  This can make for a complex migration to an EMC or NetApp box.  I know from experience that Celerra had multiple low-level parameters that could be tweaked, as well as custom migration scripts, all designed to handle the issues that occur when you start encountering these problem SIDs during the migration.  EMC and NetApp have gained a lot of knowledge here over the past 10 years and built it into their traditional NAS products.  How are scale-out NAS products designed to handle these issues?  I'm hard-pressed to believe that they can.
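
To make that concrete, here is a minimal Python sketch of the kind of pre-migration sweep I'm describing: it walks a tree and flags ACL entries that show up as raw SIDs, which icacls prints when it cannot resolve an account name. The share path is hypothetical, and shelling out to icacls per path is far too slow for a real multi-million-file migration; treat this as an illustration of the problem, not a migration tool.

```python
import re
import subprocess
from pathlib import Path

# Windows-only sketch: domain SIDs start with S-1-5-21; icacls prints the raw
# SID string when it cannot resolve the account behind an ACL entry.
UNRESOLVED_SID = re.compile(r"S-1-5-21-[\d-]+")

def find_orphaned_sids(root: str):
    """Yield (path, sid) pairs for ACL entries that no longer resolve."""
    for path in Path(root).rglob("*"):
        try:
            acl_text = subprocess.run(
                ["icacls", str(path)],
                capture_output=True, text=True, check=True
            ).stdout
        except subprocess.CalledProcessError:
            continue  # skip paths we aren't allowed to read
        for sid in UNRESOLVED_SID.findall(acl_text):
            yield str(path), sid

if __name__ == "__main__":
    # Hypothetical legacy share path used purely for illustration.
    for path, sid in find_orphaned_sids(r"\\oldfileserver\dept_share"):
        print(f"{sid}  ->  {path}")
```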


The reality is that EMC's Celerra codebase and NetApp's ONTAP were purpose-built NAS operating systems designed to deal with these traditional enterprise requirements.  SONAS, IBRIX, BlueArc, Exanet, and Isilon were not.  These scale-out products (which I evaluated many years ago at a former employer, where I even had the opportunity to watch SONAS be developed and productized) were designed for newer scale-out use cases, often involving High Performance Computing (HPC).  In fact, HPC was the sole reason my former employer looked at all of these except Exanet.  Many of these products use Samba to provide their CIFS support; Isilon only recently switched to a more enterprise-class custom CIFS stack, and SONAS definitely uses Samba because it was built upon clustered Samba.  HPC has completely different NAS requirements than traditional corporate file sharing, so companies that built products focused on the HPC market were not concerned with meeting the needs of corporate file shares.


Now this is slowly changing, as we see more traditional enterprise features, particularly around security, being built into the latest Isilon "Mavericks" code release.  I'm sure the other vendors are rapidly making code modifications as well, now that they've all picked the NAS technology they will make their SANs "unified" with.  But it will take time to catch up to the 10 years of complex Windows permission and domain integration development that Celerra/VNX and NetApp have on their side.  From a quick search, it appears Isilon does not support Microsoft Access Based Enumeration, so the idea that EMC is going to dump the Celerra/VNX code and drop Isilon code into its Unified storage arrays is silly when there are probably thousands of customers using this functionality.


Categories: EMC, IBM, NAS, NetApp

The Importance of Storage Performance Benchmarks

September 15, 2011

As someone who scans the daily flood of storage news, I've noticed an uptick over the past year in articles and press releases highlighting vendors who have "blown away" some benchmark score and now claim ultimate superiority in the storage world. Two weeks later, another vendor is trumpeting that they've beaten the score that was just posted. With the numbers being touted, I'm sure 1 bazillion IOPS must be right around the corner.

Most vendors who use benchmarks tend to be storage startups looking to get some publicity for themselves, and there's nothing wrong with that. You've got to get your name out there somehow. For the longest time, the dominant player in the storage world, EMC, refused to participate in benchmarks, saying they were not representative of real-world performance. I don't disagree; in many cases benchmarks are not indicative of real-world performance. Nevertheless, even EMC has now jumped into the fray. Perhaps they decided that sitting out cost more in negative press than it was worth.

What does it all mean for you? Here are a couple things to consider:

  1. Most benchmark tests are not indicative of real-world results. If you want to use a benchmark stat to get a better sense of a system's maximum limits, that's fine, but don't lose sight of what your requirements truly are and measure each system against those. In most cases, customers use a storage array for mixed workloads from a variety of business applications and use cases (database, email, file, VMware, etc.).  These different applications all have different I/O patterns, and benchmark tests don't simulate this "real-world" mixed I/O.  Benchmarks are heavily tilted in favor of niche use cases with very specific workloads. I'm sure there are niche cases out there where the benchmarks do matter, but for 95% of storage buyers they don't. The bottom line: be sure the system has enough bandwidth and spindles to handle your real MB/sec and IOPS requirements (a rough sizing sketch follows this list). Designing for that will benefit you far more than buying an array that recently did 1 million IOPS in a benchmark test.
  2. Every vendor will reference a benchmark that works in their favor. Pretty much every vendor can pull a benchmark stat out of their hat that makes their systems look better than all others.
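
As a rough illustration of that first point, here is a hedged back-of-the-envelope sizing sketch in Python for turning a front-end IOPS requirement into a spindle count. The workload mix, the per-drive IOPS figure, and the RAID write penalty are all assumptions for illustration; substitute measured numbers from your own environment.

```python
# Back-of-the-envelope spindle sizing. All inputs are assumptions for
# illustration; replace them with measured values from your environment.
front_end_iops = 20_000      # peak host IOPS you actually need to support
read_ratio = 0.70            # assumed 70% reads / 30% writes
raid_write_penalty = 4       # RAID-5 style penalty: 4 back-end I/Os per write
iops_per_drive = 180         # rough figure for a 15K RPM drive

# Writes multiply into extra back-end I/Os because of parity/mirroring.
back_end_iops = (front_end_iops * read_ratio) + \
                (front_end_iops * (1 - read_ratio) * raid_write_penalty)

drives_needed = back_end_iops / iops_per_drive
print(f"Back-end IOPS: {back_end_iops:,.0f}")
print(f"Spindles needed (performance only): {drives_needed:.0f}")
```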

Here's an example I saw last year while working with a customer who was evaluating 3 different vendors (IBM, NetApp, and EMC), and how I helped the customer separate what was real from what wasn't. In this case, both non-EMC vendors were referencing SPC numbers that showed how badly an EMC CX3-40 performed relative to their platforms. A couple of alarms went off for me immediately:

  1. The CX3-40 was a generation old relative to the other platforms. The CX4 was the current platform on the market (now replaced by VNX). In other words, not an apples-to-apples comparison.
  2. At the time the CX3-40 existed, EMC did not participate in SAN benchmarks for its mid-range or high-end arrays.

I took a look at the V7000 SPC-1 benchmark result and came to some interesting conclusions.  Here is a chart that shows how the V7000 performed on the benchmark alongside other competitors:


The V7000 scored 56,500.  Interestingly, since the box only supported 120 drives at the time, IBM had to use the SVC code in it to attach a DS5020, which allowed them to add more drives (200 total) to the configuration.  They put 80 15K RPM drives in the DS5020, higher-speed drives the V7000 didn't support natively at the time.  What's important to note about the CX3-40 results in the SPC-1 listings is that this was a CX3-40 that NetApp purchased, ran a test on, and then submitted to the SPC without EMC's permission.  I don't care what your vendor affiliation is, that's not a fair fight. EMC had no input into how the array was configured and tuned.  Even though the array could hold 240 drives, it was configured with only 155.  The CX3-40 scored 25,000.  Let's make a realistic assumption that if EMC had configured and tuned the array for the benchmark as the other vendors did, it could have done at least 25% better.  That would give it a score of roughly 31,000.  The CX3-40 was the predecessor to the CX4-240, and both hold 240 drives.  Performance and spec limits pretty much doubled across the board from CX3 to CX4, because EMC implemented a 64-bit architecture with the CX4's release in 2008. So, again making a realistic assumption, take the ~31,000 result of the CX3-40 and double it to create a theoretical CX4-240 score of roughly 62,000.
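
For clarity, here is the back-of-the-envelope math from the paragraph above as a tiny Python sketch. The 25% tuning uplift and the CX3-to-CX4 doubling are the assumptions stated in the text, not measured results.

```python
# Reproduces the rough adjustment described above. The uplift and the
# generational multiplier are assumptions, not measured data.
cx3_40_published = 25_000          # SPC-1 result submitted by a competitor
tuning_uplift = 1.25               # assume proper vendor tuning adds ~25%
cx3_to_cx4_multiplier = 2          # CX3 -> CX4 specs roughly doubled

cx3_40_adjusted = cx3_40_published * tuning_uplift             # ~31,000 (rounded in the text)
cx4_240_theoretical = cx3_40_adjusted * cx3_to_cx4_multiplier  # ~62,000 (rounded in the text)

print(f"Adjusted CX3-40 estimate:     {cx3_40_adjusted:,.0f}")
print(f"Theoretical CX4-240 estimate: {cx4_240_theoretical:,.0f}")
```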

If I look at other arrays in the results list that are comparable to the CX4-240, such as the DS5300 or the FAS 3000 series, this theoretical score is right in the ballpark of the other arrays.  I would hope most would agree that this shows all the arrays in scope were within striking distance of each other.  What exactly do these numbers mean relative to your business? Not much. You can't design a system for your business needs using these numbers. When most customers analyze their performance requirements, they have figures for IOPS, throughput, and latency that they need to meet to ensure good business application performance, not a theoretical benchmark score target.

Benchmarks can certainly be interesting, and I admit it's sometimes cool to see a system do X GB per second of throughput or X million IOPS, but my recommendation is not to get too spun up on them in your search for a storage platform.  Every vendor has a benchmark that makes them look the best.  Instead, use your own metrics, or work with a trusted business partner who can help you gather the data specific to your environment and evaluate how well each technology meets your business needs.

Categories: EMC, NAS, NetApp, SAN

Does Archiving to Centera or CAS Still Matter?

May 4, 2011

Over the past 2 years, I've noticed a rather drastic reduction in the number of archiving conversations I have with customers. Email archiving still pops up, but most of the folks who need to do it are already doing it. File system archiving seems even less common these days, though it still comes up occasionally. There is certainly still a market in healthcare and financial services, but even that seems less prevalent than it once was. Archiving did come up in a recent conversation, which got me thinking about the topic again, and I thought it would make a good blog post.

Without a doubt, the archive market seems to have shrunk. I'm reminded of my time at EMC a year and a half ago, when I had to go through training on "recapturing the archive market." From the early-to-mid 2000s until the late 2000s, the "archive first" story was the hottest thing going. EMC built an entire business on the Backup, Recovery, and Archive (BURA) story, which centered on archiving your static and stale data first to save money by shrinking the amount of data you need to back up and store on more expensive Tier 1 storage. In the process, EMC took the term Content Addressable Storage (CAS) mainstream, and others copied it.  The Centera platform was a product EMC purchased rather than developed in-house, but they built a successful business out of it nonetheless. Centera's predecessor was a product called FilePool, and FilePool's founders are now actively involved in another CAS startup called Caringo.

How CAS Works

The Content Address is a digital fingerprint of the content. It is calculated mathematically from the content itself, so it uniquely represents the object: change the binary representation of the object in any way (for example, edit the file) and the Content Address changes. This guarantees authenticity: either the original document remains unchanged, or the content has been modified and a new Content Address is created.

Step 1: An object (file, BLOB) is created by a user or application.
Step 2: The application sends the object to the CAS system for storage.
Step 3: The CAS system calculates the object's Content Address, or "fingerprint," a globally unique identifier.
Step 4: The CAS system sends the Content Address back to the application.
Step 5: The application stores the Content Address, not the object, for future reference. When the application wants to recall the object, it sends the Content Address to the CAS system, which retrieves the object. There is no file system or logical unit for the application to manage.
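
As a rough illustration of the flow above, here is a minimal Python sketch of a toy content-addressed store, where the "Content Address" is simply a SHA-256 hash of the object's bytes and the application keeps only that address. Real CAS platforms such as Centera layer replication, retention, and integrity checking on top of this idea; the hash choice and the in-memory dictionary here are purely illustrative.

```python
import hashlib

class ToyCAS:
    """Toy content-addressed store: objects are keyed by a hash of their bytes."""
    def __init__(self):
        self._objects = {}

    def write(self, data: bytes) -> str:
        # Step 3: the content address is derived from the content itself.
        address = hashlib.sha256(data).hexdigest()
        self._objects[address] = data
        return address            # Step 4: address returned to the application

    def read(self, address: str) -> bytes:
        # Step 5: the application recalls the object by its address alone.
        return self._objects[address]

cas = ToyCAS()
addr = cas.write(b"Q3 board minutes, final version")
print(addr)                       # the application stores only this address
assert cas.read(addr) == b"Q3 board minutes, final version"

# Any change to the content yields a different address, which is what
# gives CAS its authenticity guarantee.
assert cas.write(b"Q3 board minutes, FINAL version") != addr
```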

CAS systems also had another compelling advantage back in the day: very little storage management was involved. There were no RAID groups, LUNs, or storage groups to build or allocate, and no traditional file system to manage. Per IDC, a full-time employee could effectively manage considerably more CAS storage than any other type (320TB vs. 25TB for NAS/SAN).

I have to admit, the CAS story was compelling. Thousands of customers signed up and bought hundreds of PBs of CAS from multiple vendors. The Fortune 150 company I worked for in the past implemented hundreds of TBs of Centera CAS as part of an archiving strategy. We archived file system, database, and email data to the system using a variety of ISV packages. Given that this market used to be so hot, I've often thought about why it cooled off, and why many people now choose a Unified storage platform for archiving rather than a purpose-built CAS system. Here are a few of the thoughts I've had so far (comments welcome and appreciated):

  1. CAS wasn’t as simple as claimed. Despite the claims of zero storage management, in reality I think several of the admin tasks that were eliminated by CAS were replaced by new management activities that were required for CAS. Designing archive processes with your internal business customers, evaluating various archiving software packages, configuring those software packages to work with your CAS system, and troubleshooting those software packages can be cumbersome and time-consuming.
  2. Storage management has gotten considerably easier in the last 5 years.  Most vendors have moved from RAID groups to pools, LUN/volume creation is handled via GUI instead of CLI, and the GUIs have been streamlined so an IT generalist can use them.  A CAS appliance can still be easier to manage at scale, but the difference is not nearly as great as it was in 2005.
  3. NetApp created a great story with their one-size-fits-all approach when they built WORM functionality into their Unified storage platform; EMC soon copied it in the Celerra product and enhanced it to include compliance features.
  4. Many customers didn't need the guaranteed content authenticity that CAS offers; they simply needed basic archiving. Before the NetApp and EMC Unified platforms offered this capability, Centera and other CAS platforms were the only choice for a dedicated archive storage box. Once NetApp, and then EMC, built archiving into their cost-effective mid-range Unified platforms, my opinion is that it cut Centera and other CAS systems off at the knees.
  5. CAS systems were not cheap, even if they could have a better TCO than Tier 1 SAN storage. It was primarily larger enterprises that could typically afford CAS, while the lower end of the market quickly gravitated to a Unified box with archive functionality built in.
  6. Backup windows were not always reduced by archiving. There were certainly cases where it helped, but also cases where it did not. As an example, many customers wanted to do file system archiving on file systems with millions and millions of files. When you archive, the data is copied to the archive and a stub is left in the original file system. Using traditional backup, those stubs still need to be backed up, and the backup application sees each one as a file. So even if a stub is only 1KB, it still slows the backup application way down during catalog indexing (a rough illustration follows this list). There are workarounds, like doing a volume-based backup, which backs up the file system as an image, but there are caveats there as well. For example, if you do file-system de-dupe on an EMC platform in conjunction with archiving, you can no longer do granular file-level recoveries from a volume-based backup; only a full destructive restore is allowed.
  7. Many customers didn't really need to archive for compliance purposes; they simply wanted to save money by moving stale data from Tier 1 storage to Tier 2/3 storage. That required adding the cost and complexity of a file migration appliance or an ISV software package to move files between tiers, which ate away at the savings. Now that many storage arrays have auto-tiering built in, the system automatically sends less frequently accessed blocks of data to a lower tier of storage, completely transparently to the admin and end user, with no file stubbing required.
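
To put some rough numbers behind the stub point in item 6, here is a hedged back-of-the-envelope sketch in Python. Every figure in it is an assumption chosen for illustration; the takeaway is simply that with millions of tiny stubs, per-file catalog handling dominates the backup window even though almost no data actually moves.

```python
# Illustrative only: all figures below are assumptions, not measurements.
files = 10_000_000            # stubs left behind after archiving
stub_size_bytes = 1_024       # ~1KB stub per archived file
per_file_overhead_s = 0.005   # assumed catalog/index cost per file (5 ms)
throughput_mb_s = 100         # assumed backup data throughput

data_mb = files * stub_size_bytes / 1_000_000
data_time_h = (data_mb / throughput_mb_s) / 3600
catalog_time_h = (files * per_file_overhead_s) / 3600

print(f"Data to move:     {data_mb:,.0f} MB (~{data_time_h:.2f} h at {throughput_mb_s} MB/s)")
print(f"Catalog overhead: ~{catalog_time_h:.1f} h just walking and indexing the stubs")
```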

To sum it up, what would I recommend to a customer today? CAS is still a very important storage product, and although it's not a rapidly growing area, it still has a significant install base that will remain for some time. There are still things a CAS system can do that the Unified boxes cannot. Guaranteed content authenticity with an object-based storage model is certainly one of those, and probably the most important. If you require the strongest possible guarantee that your archive data is safe and unaltered, CAS is the way to go. As I alluded to before, this still matters in the healthcare and financial verticals, though I see smaller institutions in those verticals often choose a Unified platform for cost-effectiveness. Outside of those verticals, if your archive storage needs are under 100TB, I'm of the opinion that a Unified platform is most likely the way to go, keeping in mind that every environment can be unique. There may also be exceptions for applications that offer CAS API integration through the XAM protocol; if you're using one of those applications, it may make sense to investigate a true CAS platform.

Further reading on CAS:

http://en.wikipedia.org/wiki/Content-addressable_storage

Categories: Archive, Backup, EMC, NAS, NetApp

Why NAS? The Benefits of Moving Traditional File Servers to NAS vs. Virtualizing Them

April 14, 2011

Customers are often presented with a choice: move their file servers to NAS (CIFS shares) or virtualize them. Virtualizing keeps the environment largely "as is" while adding the flexibility benefits of virtualized servers, and on the surface it can seem very appealing because you keep everything within the construct of your virtualization hypervisor.

Advantages of Virtualizing Windows File Servers

  1. Maintains existing way of doing things.
  2. Allows you to leverage advanced virtualization functionality, such as VMware VMotion and VMware SRM for DR for your file servers.

It’s important, though, to understand all of the benefits that NAS truly offers. The advantages of leveraging NAS instead of traditional file servers (physical or virtual) are still numerous. The rest of this article lists the advantages that specifically exist with the EMC Celerra NAS platform. Some points carry over to other NAS platforms as well, but not all.

Advantages of Moving Windows File Servers to EMC Celerra NAS

  1. Microsoft compatibility without the bloat: Celerra uses a purpose-built NAS OS that is only about 25MB in size, a fraction of a default Windows Server install. This makes Celerra much more efficient at the job of serving up file data. Since it does not run Windows, it is not susceptible to Microsoft vulnerabilities, and virus code cannot be executed on it directly. When you virtualize a Windows file server, you still have a Windows server that is susceptible to infection or worms. Removing these servers from the environment reduces the number of servers the administrator has to worry about.
  2. Microsoft Support: EMC is an official licensee of the CIFS/SMB protocol (no reverse engineering), so it is guaranteed to be fully compatible with Active Directory and all supported Windows clients.  EMC also maintains a joint support center with Microsoft engineers in Atlanta, GA.
  3. Checkpoints/Snapshots: File system snapshots enable instant restores for users. You can do this now with Volume Shadow Copies on your Windows server, but it's not nearly as scalable as on a Celerra. Currently, Celerra allows up to 96 checkpoints per file system.
  4. De-dupe: With EMC Celerra, you can de-duplicate CIFS and NFS data, including NFS-based VMware datastores. With typical user data on CIFS shares, you can expect to see a 30-40% reduction in the amount of space used. Celerra also has functionality to avoid re-hydrating the data during NDMP backups. In my testing of de-dupe on NFS-based VMware datastores, I saw a 20-40% reduction in the space used by virtual machines.
  5. Failover and DR: All file server data can be easily replicated with Celerra Replicator. Failover can still be accomplished with the click of a button from the GUI.
  6. Scalability: You typically don’t see Windows file systems with much more than 2TB of data on them due to scalability issues. Celerra can have up to 32TB on a single server and it truly scales to that amount.
  7. Virtual Desktops: NAS can make perfect sense for VDI environments, as you can gain efficiencies by centralizing user data to a CIFS share. Granted, you can do that on a traditional Windows file server, but you cannot take advantage of advanced Celerra features. One of these features is de-duplication. You can crunch down typical user data by 30-40% with no performance impact that end users are going to notice.
  8. NDMP backup: NDMP is a backup protocol that NAS and backup vendors standardized on many years ago. It is needed because true purpose-built NAS operating systems are closed, with no way for users to interact directly with the OS, so you cannot install a backup agent. Because the NDMP code is built into the OS, NDMP backups are traditionally much more reliable than conventional backup agents. The data also travels directly from the storage to the backup medium, reducing network traffic.
  9. Multi-protocol: Should you ever need to enable NFS access for Linux servers in your environment, this capability exists natively within the Celerra. On a Windows file server, you must enable Microsoft's Unix/NFS file sharing services, which are widely reported to perform poorly and to be unreliable.
  10. Built-in HA: Every Celerra is configured as an active/passive or n+1 cluster. This is automatically setup right out of the box. The failover code is simply part of the operating system, so it is much cleaner than a traditional Windows cluster, both from a management standpoint and a failover standpoint.
  11. Easy Provisioning: A file server can be provisioned on the Celerra, from start to sharing files, in under 5 minutes. Even a VM would take more time, not just to spin up the VM but to actually create the shares. The Celerra comes with an easy GUI, but you can also use the traditional Windows CIFS management tools to create shares.
  12. Virtual Provisioning and Auto-Expansion: Celerra has supported virtual provisioning (aka thin provisioning) for many years. It allows you to provision a share that appears to be 1TB in size even though physically it might only be 200GB. This is useful if you have business customers who regularly ask for more space than they need. In the past, you would allocate everything they asked for, only to find out 1-2 years down the road that they had used 25% of what they said they needed, with no easy way to reclaim the space and repurpose it. Now you can use virtual provisioning to alleviate this issue and rely on auto-expansion, which grows the physical file system as needed, so you as the administrator aren't constantly being asked to expand the file system by hand.
  13. File Retention: Many businesses today have policies and procedures governing the retention of business documents, and some fall under government regulations that add an extra layer of scrutiny. Celerra natively supports a technology called File-Level Retention (FLR), which lets you use a GUI or custom scripts to set retention on a group of files (a hedged scripting sketch follows this list). This prevents even an administrator from deleting the protected files or the underlying file system.
  14. Tiered Storage: Celerra natively supports tiering files to other types of storage within the same box, or to a completely different box, whether it is a different NAS box or a Windows server.
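
On the scripting point in item 13: the mechanism commonly documented for FLR-enabled file systems (and for similar WORM features) is to set a file's last-access time to the desired retention expiry and then mark the file read-only, which locks it until that date. The Python sketch below illustrates that idea; the path and expiry date are hypothetical, and you should verify the exact procedure against your array's documentation before relying on it.

```python
import os
import stat
import time
from datetime import datetime

# Hypothetical example path on an FLR-enabled Celerra file system; on an
# ordinary file system this only changes timestamps and permissions.
FILE_PATH = "/mnt/flr_share/contracts/2011/agreement.pdf"
RETAIN_UNTIL = datetime(2018, 4, 14)   # desired retention expiry (assumed policy)

def set_retention(path: str, expiry: datetime) -> None:
    """Express a retention date: set atime to the expiry, keep mtime,
    then mark the file read-only to trigger the WORM state."""
    expiry_ts = time.mktime(expiry.timetuple())
    mtime = os.stat(path).st_mtime
    os.utime(path, (expiry_ts, mtime))                           # atime = retention expiry
    os.chmod(path, stat.S_IRUSR | stat.S_IRGRP | stat.S_IROTH)   # read-only

set_retention(FILE_PATH, RETAIN_UNTIL)
```
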
Categories: EMC, NAS, NetApp

Storage Industry Update: The Consolidation Trend (Best-of-breed vs. Single-stack)

March 22, 2011

The storage industry has seen considerable consolidation lately. It started a couple of years ago with HP acquiring LeftHand and Dell acquiring EqualLogic. More recently, EMC acquired Data Domain and Isilon, HP acquired 3PAR, and as of this week Dell has acquired Compellent. This latest move had been rumored for some time, after Dell failed in its attempt to acquire 3PAR.

Dell originally bid for 3PAR, obviously looking for a large-enterprise storage solution it could offer in-house. Dell had for years re-branded EMC CLARiiON storage arrays under its own moniker, but that agreement never expanded into the large-enterprise array space occupied by the ubiquitous EMC Symmetrix. Symmetrix has long been the market leader in the enterprise space, and with the introduction of the VMAX line it now has native scale-out functionality. In the past, enterprise arrays were primarily scale-up oriented. A tremendous amount of focus has shifted to scale-out models within the past 12-18 months thanks to the proliferation of cloud strategies. Given the enormous scale that cloud infrastructures must be able to grow into, traditional scale-up boxes based on proprietary architectures are simply too costly. Using commodity Intel components in a scale-out architecture allows customers and service providers to achieve significant scale at lower cost.

The recent behavior of multiple manufacturers shows that they are feeling the pressure to boost their product portfolios with regard to scale-out storage. It's also clear that many manufacturers are trying to build a product suite so they can "own" the complete computing stack. IBM has been in that position for quite some time. Oracle acquired Sun to enter the hardware business. HP decided to outbid Dell for 3PAR because it needed scale-out storage. HP's only true in-house storage product was the EVA, and LeftHand is an SMB solution that can't scale to meet enterprise needs. In the enterprise space, HP had been OEM'ing an array from Hitachi called the USP, a monolithic scale-up array that didn't offer scale-out capabilities. Hence, HP needed 3PAR to create a scale-out enterprise storage array, which will most likely lead to the termination of their OEM agreement with Hitachi.

The HP-3PAR acquisition left Dell as the odd man out among the major players. With 3PAR off the market, Compellent was the most logical choice left. The interesting thing is that most folks would not recognize Compellent as a large-enterprise class of array; today, it is software that runs on whitebox servers. Dell must see something in the Compellent code that leads them to believe it can be rebuilt in a scale-out fashion. That will not be a trivial task.

What does all this mean for you, the end user? Personally, I feel this consolidation is ultimately bad for innovation. The theory is pretty simple: when you try to be a jack of all trades, you end up a master of none. We see this in practice already. IBM has historically had product offerings in all major infrastructure areas except networking, but few are recognized as truly market-leading. Servers have been one area where IBM does shine; their storage arrays are typically generations behind the competition. HP has also been known to manufacture really great servers, and now they are getting serious consideration in the networking space. However, HP storage has been in disarray for quite some time: there has been a serious lack of product focus, the EVA in particular is outdated and uncompetitive, and there is no in-house intellectual property in the high-end storage space. Dell has been known to make great servers as well, but didn't really have any other offerings of its own for enterprise data centers. In the end, all of these conglomerates tend to do really well in one area while being mediocre at the rest, and storage is one of the mediocre areas. This is borne out in recent market share reports, which show these companies losing storage market share to companies like EMC and NetApp.

So why are EMC and NetApp so successful right now? I believe it's their singular focus on storage, which helps them build the most innovative products on the market with the highest quality and reliability. EMC's strategy is a bit more holistic around information infrastructure than NetApp's, but it is still highly focused compared to an HP or IBM. Without a doubt, this is why they continue to lead with best-of-breed products year after year and retain their market-leader status. It also bodes well for the VMware-Cisco-EMC (VCE) and VMware-Cisco-NetApp (VCN) strategies. Rather than one company trying to be a jack of all trades, you have market leaders in each category forming strategic partnerships with each other. The best-of-breed products can be combined into a stack, with the partnerships allowing for options like a single phone call for support and extensive integration testing between components. It provides the benefits of a single-source stack together with the benefits of a best-of-breed approach, essentially giving you the best of both worlds!

Categories: EMC, NetApp, SAN