Why enterprise-class traditional NAS products will remain
I’ve commented before that “Unified Storage” is no longer the differentiator it once was, given that virtually all major storage vendors now offer a “Unified” product. Previously, only EMC and NetApp offered a true Unified storage solution. Now, IBM has SONAS built into the V7000, HP is building IBRIX into its products, HDS has released the HUS platform that leverages BlueArc NAS, and Dell is integrating Exanet into its products.
However, it’s important to note that not all Unified storage products are the same. Just because a manufacturer can “check the box” on a spec sheet that they have NAS doesn’t mean all NAS products work the same. On a related note, now that EMC has acquired Isilon, which many perceive to be a superior product to Celerra, rumors persist about when VNX File will be replaced with Isilon code on the VNX series.
I’m here to tell you that:
- EMC and NetApp are still best equipped to fulfill the needs of traditional enterprise NAS use cases compared to any other vendor.
- I don’t believe Isilon will replace VNX File (Celerra) anytime soon.
- While Isilon, SONAS, IBRIX, and the like are superior for scale-out use cases, the same is not true for traditional enterprise NAS requirements.
Why is this the case? First, let me clarify: when I say traditional enterprise NAS requirements, I’m talking about the large enterprise, as in tens of thousands of users. For a smaller shop, these don’t apply. Here are some sample requirements:
- Support for hundreds of file systems and/or mountpoints (much different than the big-data use case people talk of today involving a single file system that scales to petabytes)
- Large enterprises have dozens if not hundreds of legacy file servers. Wouldn’t it be great to consolidate these or virtualize them behind some file gateway? Sure! Is it realistic in a huge environment with thousands of custom applications that have hard-coded UNC paths to these locations, immense user disruption and re-education, etc? Not really.
- Robust NDMP support
- Large enterprises may be using advanced NDMP features such as volume-based backups and checkpoint/snapshot-based NDMP backups. Do all scale-out NAS offerings support these? Honestly, I don’t know, but I’d be surprised.
- Number of CIFS sessions
- Handling 20,000 users logging in each morning, authenticating against AD, downloading user/group SIDs for each account, and handling drive-map creation for each user as part of the login script is a unique requirement in its own right. It’s very intensive, but not in the “scale-out” processing sense. Being able to open all these CIFS user sessions, maintain them, and potentially fail them over is not what scale-out NAS was designed for.
- Multiple CIFS servers
- Same point as above under multiple file systems. It’s not necessarily so simple for an organization to consolidate tens or hundreds of file servers down to one name.
- Multi-protocol support
- Scale-out NAS was not designed for corporations that have invested a lot in making their traditional NAS boxes work with advanced multi-protocol functionality, with complex mapping setup between Windows AD and Unix NIS/LDAP to allow users to access the same data from both sides with security remaining intact.
- Snapshots
- Most scale-out NAS boxes offer snapshots, but make sure they are Shadow-Copy client integrated, as most large organizations let their users/helpdesk perform their own file restores.
- Advanced CIFS functions
- Access Based Enumeration – hides files, folders, and shares from users who don’t have ACL rights to them.
- BranchCache – increases performance at remote offices.
- Robust AD integration and multi-domain support (including legacy domains)
- Migration from legacy file servers with lots of permission/SID issues.
- If you’re migrating a large file server that dates back to the stone age (NT) to a NAS, it most likely has a lot of unresolvable SIDs hidden deep in its ACLs for one reason or another. This can be a complex migration to an EMC or NetApp box. I know from experience that Celerra had multiple low-level parameters that could be tweaked, as well as custom migration scripts, all designed to handle the issues that occur when you encounter these problem SIDs during a migration. EMC and NetApp have gained a lot of knowledge here over the past 10 years and built it into their traditional NAS products. How are scale-out NAS products designed to handle these issues? I am hard-pressed to believe that they can.
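As a rough illustration of the kind of pre-migration ACL audit this last point involves, here is a toy Python sketch. Everything in it is invented for illustration: the SIDs, the UNC paths, and the resolvable-principals set are made up, and real tooling would query Active Directory and read NTFS ACLs rather than work from in-memory lists.

```python
import re

# Well-formed Windows account SIDs look like S-1-5-21-... ; this pattern is a
# simplified check for illustration only.
SID_PATTERN = re.compile(r"^S-1-5(?:-\d+)+$")

def find_orphaned_sids(acl_entries, resolvable_sids):
    """Return (path, sid) pairs whose SID is well-formed but no longer resolves."""
    orphans = []
    for path, sid in acl_entries:
        if SID_PATTERN.match(sid) and sid not in resolvable_sids:
            orphans.append((path, sid))
    return orphans

# Hypothetical data: two accounts the directory can still resolve, plus one
# SID left over from a decommissioned NT-era domain.
resolvable = {
    "S-1-5-21-1111-2222-3333-1001",  # e.g. CORP\alice
    "S-1-5-21-1111-2222-3333-1002",  # e.g. CORP\bob
}
acls = [
    (r"\\fs01\eng\specs", "S-1-5-21-1111-2222-3333-1001"),
    (r"\\fs01\eng\legacy", "S-1-5-21-9999-8888-7777-500"),  # orphaned SID
]

print(find_orphaned_sids(acls, resolvable))
```

The point of the sketch is only that the audit is mechanical but the remediation is not: every orphaned SID found this way needs a human decision (drop it, remap it, or preserve it), which is exactly the kind of edge-case handling the mature migration tooling mentioned above encodes.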
The reality is that EMC’s Celerra codebase and NetApp’s ONTAP were purpose-built NAS operating systems designed to deal with these traditional enterprise requirements. SONAS, IBRIX, BlueArc, Exanet, and Isilon were not. These scale-out products (which I evaluated many years ago at a former employer, where I even had the opportunity to watch SONAS be developed and productized) were designed for newer scale-out use cases, often involving High Performance Computing (HPC). In fact, HPC was the sole reason my former employer looked at all of these except Exanet. Many of these products use Samba to provide their CIFS support; Isilon only recently switched to a more enterprise-class custom CIFS stack, and SONAS definitely uses Samba because it was built upon clustered Samba. HPC has completely different NAS requirements than traditional corporate file sharing, so companies that built products focused on the HPC market were not concerned with meeting the needs of corporate file shares.
Now this is slowly changing, as we see more traditional enterprise features being built into the latest Isilon “Mavericks” code release, particularly around security. I’m sure the other vendors are rapidly making code modifications as well, now that they’ve all picked the NAS technology they will make their SANs “unified” with. But it will take time to catch up to the 10 years of complex Windows permission and domain-integration development that Celerra/VNX and NetApp have on their side. From a quick search, it appears Isilon does not support Microsoft Access Based Enumeration, so the idea that EMC is going to dump the Celerra/VNX code and plop Isilon code onto its Unified storage arrays is silly when there are probably thousands of customers using this functionality.
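For readers unfamiliar with Access Based Enumeration, here is a minimal Python sketch of the server-side behavior being discussed: the server filters a listing so users only see entries they have rights to. The folder names and the permission table are invented for illustration and do not reflect any real product’s permission model.

```python
# Toy model of Access Based Enumeration (ABE): entries a user cannot read are
# simply omitted from the listing, rather than shown and then access-denied.

def list_visible(entries, acl, user_groups):
    """Return only the entries whose ACL groups intersect the user's groups."""
    return [name for name in entries
            if acl.get(name, set()) & user_groups]

entries = ["Public", "Finance", "HR"]
acl = {
    "Public": {"Domain Users"},
    "Finance": {"Finance Team"},
    "HR": {"HR Team"},
}

# A user in Domain Users and Finance Team sees Public and Finance, but never
# learns that HR exists.
print(list_visible(entries, acl, {"Domain Users", "Finance Team"}))
```

This is why the feature matters for large shops: without it, thousands of users browse a server and see every share and folder in the organization, whether or not they can open them.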
DISCLOSURE: Nick Howell from NetApp
You could even take it a step further and say that Celerra is still just a proxy/gateway device requiring access to an underlying storage processor and a separate OS/firmware to write the data down to disk. As far as I know, you can also stick a gateway in front of a VMAX and have it serve CIFS in the same fashion.
Too often we get so wrapped up in checking off features that we can be too close to the glass, and forget what is important to end users.
We’re all guilty of that, and it’s posts like this that help reel us back in to the things that are important to focus on.
Very good points. Thanks for sharing!
-Nick
@that1guynick
datacenterdude.com
Isilon does support Access Based Enumeration, but doesn’t expose the setting in the Web UI. It is disabled by default and can be enabled globally via the command ‘isi smb settings shares modify --access-based-enumeration yes’
I’d love to see this get put into the Web UI since just about everyone uses it.