Archive for the ‘EMC’ Category

Why NAS? The Benefits of Moving Traditional File Servers to NAS vs. Virtualizing Them

April 14, 2011

Customers are often presented with a dilemma: move their file servers to NAS (CIFS shares), or virtualize them. The second option keeps the environment largely “as is” while adding the flexibility benefits of virtualized servers. On the surface, it can seem to have a lot of appeal, because you get to keep everything within the construct of your virtualization hypervisor.

Advantages of Virtualizing Windows File Servers

  1. Maintains the existing way of doing things.
  2. Allows you to leverage advanced virtualization functionality for your file servers, such as VMware VMotion, and VMware SRM for disaster recovery.

It’s important, though, to understand everything that NAS truly offers. The advantages of leveraging NAS instead of traditional file servers (physical or virtual) are numerous. The rest of this article lists the advantages specific to the EMC Celerra NAS platform. Some points carry over to other NAS platforms as well, but not all.

Advantages of Moving Windows File Servers to EMC Celerra NAS

  1. Microsoft compatibility without the bloat: Celerra runs a purpose-built NAS OS that is only about 25MB in size, versus the multi-gigabyte footprint of a default Windows Server install, which makes it much more efficient at the job of serving file data. Because it does not run Windows, it is not susceptible to Windows vulnerabilities, and virus code cannot execute on it directly. When you virtualize a Windows file server, you still have a Windows server that is susceptible to infections and worms. Moving file serving to the Celerra reduces the number of Windows servers the administrator has to worry about.
  2. Microsoft Support: EMC is an official licensee of the CIFS/SMB protocol (no reverse engineering), so it is guaranteed to be fully compatible with Active Directory and all supported Windows clients.  EMC also maintains a joint support center with Microsoft engineers in Atlanta, GA.
  3. Checkpoints/Snapshots: File system checkpoints enable instant restores for users. You can do this today with Volume Shadow Copies on a Windows server, but that approach is nowhere near as scalable as a Celerra, which currently supports up to 96 checkpoints per file system.
  4. De-dupe: With EMC Celerra, you can de-duplicate CIFS and NFS data, including NFS-based VMware datastores. With typical user data on CIFS shares, you can expect a 30-40% reduction in the amount of space used. Celerra also has functions to prevent re-hydration of the data during NDMP backups. In my testing of de-dupe on NFS-based VMware datastores, I saw a 20-40% reduction in the space used by virtual machines (see the savings estimator after this list).
  5. Failover and DR: All file server data can be easily replicated with Celerra Replicator, and failover can be accomplished with the click of a button in the GUI.
  6. Scalability: You typically don’t see Windows file systems holding much more than 2TB of data, due to scalability issues. A single Celerra can serve up to 32TB, and it truly scales to that amount.
  7. Virtual Desktops: NAS can make perfect sense for VDI environments, as you gain efficiencies by centralizing user data on a CIFS share. Granted, you can do that on a traditional Windows file server, but you cannot take advantage of advanced Celerra features such as de-duplication, which can crunch typical user data down by 30-40% with no performance impact that end users will notice.
  8. NDMP backup: NDMP is a backup protocol that all NAS and backup vendors standardized on many years ago. It is needed because purpose-built NAS operating systems are closed, with no way for users to interact with the OS directly, so you cannot install a backup agent. Because the NDMP code is built into the OS, NDMP backups are typically much more reliable than backup agents. The data also travels directly from the storage to the backup medium, reducing network traffic.
  9. Multi-protocol: Should you ever need to enable NFS access for Linux servers in your environment, that capability exists natively within the Celerra. On a Windows file server, you must enable Microsoft’s UNIX file sharing services, which are widely reported to perform poorly and to be unreliable.
  10. Built-in HA: Every Celerra is configured as an active/passive or N+1 cluster, set up automatically right out of the box. The failover code is simply part of the operating system, so it is much cleaner than a traditional Windows cluster, from both a management and a failover standpoint.
  11. Easy Provisioning: A file server can be provisioned on the Celerra, from start to sharing files, in under 5 minutes. Even a VM would take more time, not just to spin up the VM but also to create the shares. The Celerra comes with an easy GUI, and you can also use the traditional Windows CIFS management tools to create shares.
  12. Virtual Provisioning and Auto-Expansion: Celerra has supported virtual provisioning (aka thin provisioning) for many years now. This allows you to provision a share that appears to be 1TB in size even though it may physically be only 200GB. This is useful when business customers regularly ask for more space than they need. In the past, you would allocate everything they asked for, only to find out 1-2 years down the road that they had used 25% of what they said they needed, with no easy way to reclaim and re-purpose the space. Now you can use virtual provisioning to alleviate this issue and rely on auto-expansion, which grows the physical file system as needed, so you as the administrator are not constantly bothered to expand the file system by hand (see the toy model after this list).
  13. File Retention: Many businesses today have policies and procedures governing the retention of business documents, and some fall under government regulations that add an extra layer of scrutiny. Celerra natively supports a technology called File-Level Retention (FLR), which allows you to use a GUI or custom scripts to set retention on a group of files (see the retention sketch after this list). Retention prevents anyone, even an administrator, from deleting the protected files or the underlying file system.
  14. Tiered Storage: Celerra natively supports tiering files to other types of storage within the same box, or to a completely different box, whether it is a different NAS box or a Windows server.
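
To put the de-dupe numbers from point 4 in concrete terms, here is a minimal Python sketch of the savings arithmetic. The 30-40% and 20-40% reduction ranges come from the list above; the function itself is illustrative and not Celerra-specific.

```python
# Rough space-savings estimator for de-dupe (see point 4 above).
# The reduction ranges come from the article; the math is illustrative.

def post_dedupe_range(logical_gb, low=0.30, high=0.40):
    """Return the (best, worst) physical GB expected after de-dupe."""
    return logical_gb * (1 - high), logical_gb * (1 - low)

best, worst = post_dedupe_range(1000)  # 1TB of typical CIFS user data
print(f"1000GB of user data -> {best:.0f}-{worst:.0f}GB on disk")

best, worst = post_dedupe_range(500, low=0.20, high=0.40)  # NFS datastore
print(f"500GB of VM data -> {best:.0f}-{worst:.0f}GB on disk")
```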
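
Point 12 describes virtual provisioning with auto-expansion. The toy model below shows the general idea in Python; the 90% high-water mark and 10GB growth step are assumptions for illustration only, not Celerra’s actual defaults or implementation.

```python
# Toy model of a thin-provisioned file system that auto-expands.
# Hypothetical parameters; not Celerra's real mechanism.

class ThinFileSystem:
    def __init__(self, virtual_gb, physical_gb, hwm=0.90, step_gb=10):
        self.virtual_gb = virtual_gb    # size advertised to clients
        self.physical_gb = physical_gb  # space actually allocated
        self.used_gb = 0
        self.hwm = hwm                  # high-water mark that triggers growth
        self.step_gb = step_gb          # how much to grow at a time

    def write(self, gb):
        if self.used_gb + gb > self.virtual_gb:
            raise IOError("file system full (virtual limit reached)")
        self.used_gb += gb
        # Auto-expansion: grow the physical allocation whenever usage
        # crosses the high-water mark, without administrator involvement.
        while (self.used_gb > self.physical_gb * self.hwm
               and self.physical_gb < self.virtual_gb):
            self.physical_gb = min(self.physical_gb + self.step_gb,
                                   self.virtual_gb)

fs = ThinFileSystem(virtual_gb=1024, physical_gb=200)  # looks like 1TB
fs.write(250)
print(fs.used_gb, fs.physical_gb)  # physical allocation grew automatically
```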
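
Point 13 mentions setting retention via custom scripts. On FLR-enabled file systems, retention is typically applied by setting a file’s last-access time (atime) to the desired expiration date and then marking the file read-only; the sketch below assumes that mechanism, and the mount point and file name are hypothetical.

```python
# Minimal sketch of scripted file-level retention (see point 13 above).
# Assumes the file lives on an FLR-enabled file system; the path below
# is a hypothetical example.

import os
import stat
import time
from datetime import datetime

def set_retention(path, expires):
    """Lock 'path' until 'expires' by setting atime, then read-only."""
    st = os.stat(path)
    expiry = time.mktime(expires.timetuple())
    os.utime(path, (expiry, st.st_mtime))  # atime = retention expiry date
    # On FLR, transitioning a file to read-only activates retention.
    os.chmod(path, stat.S_IRUSR | stat.S_IRGRP | stat.S_IROTH)

set_retention("/mnt/flr_share/contract.pdf", datetime(2018, 4, 14))
```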
Categories: EMC, NAS, NetApp

Storage Industry Update: The Consolidation Trend (Best-of-breed vs. Single-stack)

March 22, 2011

The storage industry has seen considerable consolidation lately. It started a couple of years ago with HP acquiring LeftHand and Dell acquiring EqualLogic. More recently, EMC acquired Data Domain and Isilon, HP acquired 3PAR, and as of this week Dell has acquired Compellent. This latest move had been rumored for some time after Dell failed in its attempt to acquire 3PAR.

Dell originally bid for 3PAR, obviously looking for a large-enterprise storage solution it could offer in-house. Dell had for years re-branded EMC Clariion storage arrays under its own moniker, but that agreement never expanded into the large-enterprise array space to include the ubiquitous EMC Symmetrix. Symmetrix has long been the market leader in the enterprise space, and with the introduction of the VMAX line it now has native scale-out functionality. In the past, enterprise arrays were primarily scale-up oriented. Scale-out models have received a tremendous amount of focus in the past 12-18 months thanks to the proliferation of cloud strategies: given the enormous scale that cloud infrastructures must be able to grow into, traditional scale-up boxes based on proprietary architectures are simply too costly, while commodity Intel components in a scale-out architecture let customers and service providers achieve significant scale at a lower cost.

The recent behavior of multiple manufacturers shows they are feeling the pressure to boost their product portfolios with scale-out storage. It’s also clear that many manufacturers are trying to assemble a product suite so they can try to “own” the complete computing stack. IBM has been in this position for quite some time. Oracle acquired Sun to enter the hardware business. HP decided to outbid Dell for 3PAR because it needed scale-out storage: HP’s only true in-house storage product was the EVA, and LeftHand is an SMB solution that can’t scale to meet enterprise needs. In the enterprise space, HP had been OEM’ing an array from Hitachi called the USP, a monolithic scale-up array with no scale-out capabilities. Hence HP needed 3PAR to build a scale-out enterprise storage array, which will most likely lead to the termination of its OEM agreement with Hitachi.

The HP-3PAR acquisition left Dell as the odd man out among the major players. With 3PAR off the market, Compellent was the most logical choice left. The interesting thing here is that most folks would not recognize Compellent as a large-enterprise class of array; today, it is software that runs on whitebox servers. Dell must see something in the Compellent code that leads it to believe the product can be reconstructed in a scale-out fashion. That will not be a trivial task.

What does all this mean for you, the end user? Personally, I feel this consolidation is ultimately bad for innovation. The theory is pretty simple: when you try to be a jack of all trades, you end up a master of none. We see this in practice already. IBM has historically had product offerings in all major infrastructure areas except networking, but few are recognized as truly market-leading. Servers have been one area where IBM does shine; its storage arrays are typically generations behind the competition. HP has also been known to manufacture really great servers, and it is now getting serious consideration in the networking space. However, HP storage has been in disarray for quite some time: there has been a serious lack of product focus, the EVA in particular is outdated and uncompetitive, and there is no in-house intellectual property in the high-end storage space. Dell has been known to make great servers as well, but didn’t really have any other offerings of its own for enterprise data centers. In the end, these conglomerates tend to do really well in one area while being mediocre at the rest, storage being one of the mediocre areas. Recent market-share reports bear this out, showing these companies losing storage market share to companies like EMC and NetApp.

So why are EMC and NetApp so successful right now? I believe it’s their singular focus on storage, which helps them build the most innovative products on the market while also offering the highest quality and reliability. EMC’s strategy is a bit more holistic around the realm of information infrastructure than NetApp’s, but it is still highly focused compared to an HP or IBM. Without a doubt, this is why they continue to lead with best-of-breed products year after year and continue to retain their market-leader status. It also bodes well for the VMware-Cisco-EMC (VCE) and VMware-Cisco-NetApp (VCN) strategies. Rather than one company trying to be a jack of all trades, you have market leaders in each category forming strategic partnerships with one another. The best-of-breed products can be combined into a stack, with the partnerships allowing for options like one phone call for support and extensive integration testing between components. That provides the benefits of a single-source stack together with the benefits of a best-of-breed approach, essentially giving you the best of both worlds!

Categories: EMC, NetApp, SAN

Feeling Buyer’s Remorse for Purchasing an EMC CX in 2010 after the VNX Announcement?

March 14, 2011

Now that we’re a few weeks past EMC’s major marketing announcement of the next generation of its storage platform, the VNX, I’m sure there are some customers out there wondering whether they made a mistake ordering an EMC Clariion CX4 or related Celerra NS platform in the second half of 2010. I want to allay any fears you might have by addressing the following points:

  1. Although announced in January, the VNX only started shipping near the end of Q1. Considering that a typical implementation happens about 6 weeks after ordering, and that a data migration may be involved, you most likely wouldn’t be running on the new platform until May or June of this year.
  2. The CX4 will remain a viable, shipping platform for some time into 2011. Once EMC does decide to stop shipping the product, it has committed to supporting the platform for at least 5 years. So any customer who bought a CX4/NS in the second half of 2010 will get at least 6 years out of it, which falls within the typical 5-7 year lifespan of a mid-range array from any manufacturer.
  3. The CX4 is based on the most proven and reliable mid-range platform technology in the industry. While the VNX builds upon this, it is a new architecture leveraging a new back-end technology, and there will be a transition period for working out any gotchas.
  4. The VNX is based upon the same software enhancements that were already put into the CX4 platform, such as FAST VP, FAST Cache, and Unisphere management. At this time, there are no extra software enhancements in the VNX that you are missing out on with the CX4 platform.
  5. Although SAS drives will eventually replace Fibre Channel drives, that won’t happen anytime soon; FC drives will be available for the life of your CX4. Additionally, SAS drive options are currently fairly limited compared to FC: only 300GB and 600GB drives are available at 15K RPM, plus 2TB NL-SAS (aka SATA) drives.
Categories: EMC