Author Archive

Update for SQL Server customers using EMC AppSync

August 27, 2013

For those who don’t know, EMC AppSync is a product that provides application-consistent snapshots. In the case of Windows and Microsoft applications, this means AppSync coordinates an array snapshot, clone, or CDP bookmark to occur at the exact moment VSS quiesces I/O within the application on the host. There’s a bit more to it than that, but suffice it to say that’s the gist of it. AppSync is slowly replacing the functionality of EMC’s Replication Manager. NetApp’s SnapManager products offer the same basic functionality, if you’re familiar with that product line.
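
To make the ordering concrete, here is a minimal sketch of that coordination. The object and method names are hypothetical, not AppSync’s actual API:

```python
# Hypothetical sketch of application-consistent snapshot coordination.
# The vss/array objects and their methods are invented for illustration;
# AppSync's real implementation is far more involved.

def app_consistent_snapshot(vss, array, lun_ids):
    """Take an array snapshot inside the VSS freeze window."""
    vss.freeze_writers()    # VSS tells writers (e.g. SQL Server) to quiesce I/O
    try:
        # The array-side snapshot/clone/bookmark must complete while I/O
        # is quiesced, or the copy is merely crash-consistent.
        return array.create_snapshot(lun_ids)
    finally:
        vss.thaw_writers()  # always release the writers; VSS only tolerates
                            # a short freeze window (on the order of seconds)
```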

For VNX customers, there has been a bit of a gap in support for application-consistent VNX snapshots with SQL Server. AppSync did not support SQL Server in the 1.0 release, while Replication Manager does not support the new VNX Snapshots, which abandon copy-on-write for redirect-on-write. So the biggest news with this release (from my perspective) is that VNX customers can now get full application consistency with SQL Server leveraging the new VNX Snapshots.

Other big enhancements include support for Exchange 2013 and NFS datastores in VMware.

See the release notes for a full list of new features:

https://community.emc.com/community/connect/appsync/blog/2013/06/26/announcing-appsync-15

Categories: EMC

VNX Monitoring and Reporting 1.1 Released

August 6, 2013

For the customers I talk to as part of my regular travels, a frequent request from EMC midrange customers using CX or VNX is an easier way to get high-level performance metrics out of their SAN. Folks who only have a single storage array typically don’t log in on a regular basis: they have many hats to wear, and a high-quality SAN tends to be somewhat “set it and forget it”. Although the built-in Analyzer tool is extremely thorough, it can be a bit cumbersome if you don’t use it regularly. The good news here is that EMC released a product a few months back called VNX Monitoring & Reporting (M&R). This tool is an EXTREMELY cost-effective product that provides the user with an easy-to-use dashboard and a ton of reports yielding info on array performance (file and block). It is backwards compatible and supported on CX4-based platforms. I’ve even had one customer make it work with a CX3, though that is not supported.

Now, some of my customers who had M&R and were also savvy with Analyzer noticed a few anomalies in the data reported by the two products. Those of you who have experienced this will want to upgrade to 1.1 whenever it is convenient.

See below for all the new features of VNX M&R 1.1, and don’t forget you can download the product from support.emc.com and run it for 30 days in trial mode. It’s not documented in the download package, so as a tip, the default credentials are admin/changeme. The install is a very straightforward Windows install (i.e., Next, Next, Next, Finish). Then plug in the IPs of your array and you’re good to go.

Also, for VMware-EMC customers, don’t forget the EMC vCenter plug-ins. Specifically, the EMC Storage Viewer plug-in offers high-level real-time reporting on array performance, but no historical reporting.

See https://support.emc.com/products/28885_VNX-Monitoring-&-Reporting

Chargeback
Beginning with this release, a new Data Enrichment feature called Chargeback is available. Chargeback calculates cost-of-service for business units and applications in an organization and displays it in the Chargeback reports. The VNX Monitoring and Reporting Administrator assigns a cost-per-gigabyte value to LUNs by RAID type, tier, and array model. Specific LUNs are then associated with a business unit and application. Total cost-per-gigabyte for each business unit and application is calculated based on this configuration.
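
As a rough illustration of the arithmetic involved (the rate table and LUN tags below are invented; the real feature is configured through the M&R Administrator):

```python
# Toy version of the Chargeback calculation described above.
# Cost-per-GB is keyed by (RAID type, tier); in M&R it can also vary
# by array model. All values here are made up.
rates = {
    ("RAID5", "15K SAS"): 0.90,
    ("RAID6", "NL-SAS"):  0.35,
}

# Each LUN is associated with a business unit and application.
luns = [
    {"name": "LUN_001", "gb": 500,  "raid": "RAID5", "tier": "15K SAS",
     "bu": "Finance", "app": "SQL Server"},
    {"name": "LUN_002", "gb": 2000, "raid": "RAID6", "tier": "NL-SAS",
     "bu": "Finance", "app": "File Shares"},
]

# Roll up total cost per (business unit, application).
costs = {}
for lun in luns:
    key = (lun["bu"], lun["app"])
    costs[key] = costs.get(key, 0) + lun["gb"] * rates[(lun["raid"], lun["tier"])]

for (bu, app), cost in costs.items():
    print(f"{bu}/{app}: ${cost:,.2f}")
```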

Heat Map Reports
Heat map reports are now available in this release. A heat map is a grid in which metrics that have similar values are grouped together in order to create “hot” and “cold” areas. These values are colored according to predefined thresholds. Unlike other types of reports, the visual thresholds for heat maps cannot be adjusted.
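
Conceptually the bucketing is simple; here is a toy version with invented threshold values (in the actual reports these are predefined and not adjustable):

```python
# Toy heat-map bucketing: map a metric value to a color band using fixed
# thresholds. The threshold values and samples here are invented.
def heat_color(value, bands=((90, "red"), (70, "orange"), (50, "yellow"))):
    for limit, color in bands:
        if value >= limit:
            return color
    return "green"

# Example: SP utilization samples grouped into hot and cold cells.
for sp, util in {"SP A": 92.0, "SP B": 48.5}.items():
    print(sp, util, heat_color(util))
```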

Scheduled Reports
Reports can now be scheduled to run at specific times and be delivered as PDFs to selected recipients through email. Whenever a scheduled report is generated, it queries the database for the latest data.

Alerts
Alerts are now available and can be configured to trigger an email or an SNMP trap when certain performance thresholds are violated. For example, an alert can be configured to trigger whenever storage processor utilization exceeds 75%. Alerts can also be configured to trigger only for specific components, such as specific SPs on specific systems. The alert email recipients are configurable per alert. (A rough sketch of the threshold logic follows the list of pre-defined alerts below.)

There are six pre-defined alerts available with this release:
• Data Mover Processor Utilization (%)
• File System Percent Subscribed (%)
• LUN Response Time (milliseconds)
• Storage Pool Percent Subscribed (%)
• Storage Processor Dirty Pages Utilization (%)
• Storage Processor Utilization (%)
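
Here is a rough sketch of the threshold check behind an alert definition like the SP utilization example above. M&R implements this internally; the sample data and names are invented for illustration:

```python
# Sketch of a threshold-based alert check, not M&R's actual code.
SP_UTIL_THRESHOLD = 75.0  # percent, per the example above

# Latest utilization sample per (system, storage processor).
samples = {("Array01", "SP A"): 82.3, ("Array01", "SP B"): 61.0}

def sp_utilization_alerts(samples, threshold=SP_UTIL_THRESHOLD):
    """Return (system, component, value) tuples that should fire an alert."""
    return [(system, sp, util) for (system, sp), util in samples.items()
            if util > threshold]

for system, sp, util in sp_utilization_alerts(samples):
    # In M&R, the action would be an email or SNMP trap sent to the
    # recipients configured for this particular alert.
    print(f"ALERT: {system} {sp} utilization {util:.1f}% exceeds {SP_UTIL_THRESHOLD}%")
```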

Categories: EMC

A new opportunity

August 6, 2013

Regular readers have probably noticed that this blog has been dark for a few months. Life has been busy: I’ve been married about 10 months now, and along the way a new career opportunity popped up that was too good to turn down. I have updated my “About” page to fully disclose my new role and employer, which I believe is important to do, as everybody has bias in some way, shape, or form, whether we like to admit it or not. What I love about my new role (with CDW) is the opportunity to have significant exposure to a large array of technologies and manufacturers, which allows me to “keep it real”, so to speak, when talking with customers. That being said, there are technologies I don’t get much exposure to. Additionally, though I go through training and maintain accreditation/certification in a wide variety of storage technologies, like anyone I tend to gravitate toward what I know best, which dates back to my years as a storage administrator/architect.

So, with that being said, let the blogging resume, albeit at what will probably be a somewhat slower pace as life just keeps getting busier and time keeps flying faster.

Categories: Uncategorized

SRM for VNXe is here (NFS only)!

February 5, 2013

FINALLY! VNXe finally supports VMware Site Recovery Manager (SRM)! Since I’ve been critical of this lack of support in the past, I wanted to make sure I posted this link to add visibility. Note: this is technically a “preview” and is not yet 100% certified by VMware, therefore it is not downloadable through the official site for VMware SRAs.

Categories: Uncategorized

Running List of “N” Storage Companies

January 25, 2013

One of my work colleagues (Mike Ellis, @v2mike) and I have lately been keeping track of any and all storage companies whose names start with the letter N. Whether it is pure coincidence or something else, I have yet to figure out. But it is quite interesting how many companies have picked this letter to start their name. Any conspiracy theories out there?

NetApp
Nimble
Nexenta
Nutanix
Nirvanix
Nexsan
Netgear (a bit of a stretch)
Nasuni
Nexgen

ADDED 1/27/13
Nimbus

If you come up with any others, please comment and I will get it added!

Categories: SAN

VMFS-5 Heap Size

January 9, 2013

Nick and I troubleshot this issue for a client back in December. He beat me to the punch on blogging about it 🙂 This was the first time I had heard of heap size, and I hope it’s the last!

Categories: EMC, Uncategorized, VMware

Why enterprise-class traditional NAS products will remain

January 7, 2013

I’ve commented before that “Unified Storage” is no longer the differentiator it once was, given that virtually all major storage vendors now offer a “Unified” product. Previously, only EMC and NetApp offered a true unified storage solution. Now IBM has SONAS built into the V7000, HP is building IBRIX into its products, HDS released the HUS platform that leverages BlueArc NAS, and Dell is integrating Exanet into its products.

However, it’s important to note that not all unified storage products are the same. Just because a manufacturer can “check the box” on a spec sheet that they have NAS doesn’t mean all NAS products work the same. On a related note, now that EMC has acquired Isilon, which many perceive to be a superior product to Celerra, rumors are always going around about when VNX File will be replaced with Isilon code on the VNX series.

I’m here to tell you that:

  • EMC and NetApp are still best equipped to fulfill the needs of traditional enterprise NAS use cases compared to any other vendor.
  • I don’t believe Isilon will replace VNX File (Celerra) anytime soon.
  • While Isilon, SONAS, IBRIX, etc. are superior for scale-out use cases, the same is not true for traditional enterprise NAS requirements.

Why is this the case? First, let me clarify: when I say traditional enterprise NAS requirements, I’m talking large enterprise, as in tens of thousands of users. For a smaller shop, these don’t apply. Here are some sample requirements:

  • Support for hundreds of file systems and/or mountpoints (much different than the big-data use case people talk of today involving a single file system that scales to petabytes)
    • Large enterprises have dozens if not hundreds of legacy file servers.   Wouldn’t it be great to consolidate these or virtualize them behind some file gateway?  Sure!  Is it realistic in a huge environment with thousands of custom applications that have hard-coded UNC paths to these locations, immense user disruption and re-education, etc?  Not really.
  • Robust NDMP support
    • Large enterprises may be using advanced features of NDMP such as volume-based backup and checkpoint/snapshot-based NDMP backups. Do all scale-out NAS offerings support these? I don’t know, to be honest, but I’d be surprised.
  • Number of CIFS sessions
    • Handling 20,000 users logging in each morning, authenticating against AD, downloading user/group SIDs for each account, and creating drive maps for each user as part of the login script is a unique requirement in its own right. It’s very intensive, but not intensive in the scale-out processing sense. Being able to open all these CIFS user sessions, maintain them, and potentially fail them over is not what scale-out NAS was designed for.
  • Multiple CIFS servers
    • Same point as above under multiple file systems.  It’s not necessarily so simple for an organization to consolidate tens or hundreds of file servers down to one name.
  • Multi-protocol support
    • Scale-out NAS was not designed for corporations that have invested a lot in making their traditional NAS boxes work with advanced multi-protocol functionality, with complex mapping setup between Windows AD and Unix NIS/LDAP to allow users to access the same data from both sides with security remaining intact.
  • Snapshots
    • Most scale-out NAS boxes offer snapshots, but make sure they are Shadow-Copy client integrated, as most large organizations let their users/helpdesk perform their own file restores. 
  • Advanced CIFS functions
    • Access Based Enumeration – hides files and folders from users who don’t have ACL rights to them.
    • BranchCache – improves performance at remote offices.
    • Robust AD integration and multi-domain support (including legacy domains)
  • Migration from legacy file servers with lots of permission/SID issues.
    • If you’re migrating a large file server that dates back to the stone age (NT) to a NAS, it most likely has a lot of unresolvable SIDs hidden deep in its ACLs for one reason or another. This can make for a complex migration to an EMC or NetApp box. I know from experience that Celerra had multiple low-level params that could be tweaked, as well as custom migration scripts, all designed to handle the issues that occur when you encounter these problem SIDs during a migration. EMC and NetApp have gained a lot of knowledge here over the past 10 years and built it into their traditional NAS products. How are scale-out NAS products designed to handle these issues? I am hard-pressed to believe that they can.

The reality is that EMC’s Celerra codebase and NetApp’s ONTAP were purpose-built NAS operating systems designed to deal with these traditional enterprise requirements. SONAS, IBRIX, BlueArc, Exanet, and Isilon were not. These scale-out products (which I evaluated many years ago at a former employer, where I even had the opportunity to watch SONAS be developed and productized) were designed for newer scale-out use cases, often involving High Performance Computing (HPC). In fact, HPC was the sole reason my former employer looked at all of these except Exanet. Many of these products use Samba to provide their CIFS support; Isilon only recently switched to a more enterprise-class custom CIFS stack, and SONAS definitely uses Samba because it was built on clustered Samba. HPC has completely different NAS requirements than traditional corporate file sharing, so companies that built products focused on the HPC market were not concerned with meeting the needs of corporate file shares.

Now this is slowly changing, as we see more traditional enterprise features being built into the latest Isilon “Mavericks” code release, particularly around security. I’m sure the other vendors are rapidly making code modifications as well, now that they’ve all picked the NAS technology that will make their SANs “unified”. But it will take time to catch up to the 10 years of complex Windows permission and domain integration development that Celerra/VNX and NetApp have on their side. From a quick search, it appears Isilon does not support Microsoft Access Based Enumeration, so the idea that EMC is going to dump the Celerra/VNX code and plop Isilon code onto its unified storage arrays is silly when there are probably thousands of customers using this functionality.

Categories: EMC, IBM, NAS, NetApp

Centera Update – 12TB nodes now available

December 24, 2012

Here’s one that slipped in under the radar. With EMC’s archiving focus being on Isilon and Atmos these days, it wasn’t well publicized that a new Gen4LP (LP = Low Power = 5400RPM drives) node is now available. I didn’t realize it myself until I was filling out a Centera change control form and noticed a new option. The 12TB G4LP nodes use 3TB drives internally; other than that, I doubt much has changed in the hardware. One thing to note: you cannot add these to an existing cube built on previous G4LP nodes. 12TB nodes can only be in a cube by themselves.

I’ve commented before that I believe Centera is still a very legitimate platform: https://hoosierstorage.wordpress.com/2011/05/04/does-archiving-to-centera-or-cas-still-matter-2/. Many Centera customers are struggling with the decision of whether to move to Isilon or Atmos. While either one may make sense in some cases, in others there’s nothing wrong with sticking with Centera. It’s a pretty painless migration to do something like a CICM or C2C; certainly less effort and cost than migrating to an alternative technology. Yes, Centera is somewhat “proprietary”, especially now that EMC has ended XAM development, but if EMC is a preferred manufacturer then there isn’t much to worry about. You can rest assured that EMC will support this platform for a minimum of 5 years once the node hardware goes end of sale, as they do with all hardware (except Symm, which is longer). Even in the unfathomable scenario where EMC went out of business, there are multiple 3rd parties that can migrate data off Centera now. The 12TB node will offer a pretty attractive refresh TCO based on the hardware refreshes I’ve seen going from G3/G4 to Gen4LP 8TB nodes. Then, in 5-7 years, when it’s time for another tech refresh, hopefully the debate between Isilon and Atmos as the preferred archiving platform will be over 🙂

Release announcement:

Product: Centera CentraStar v4.2 SP2 (4.2.2) and 12TB Node Hardware

General Availability Date: Nov 19, 2012

Product Overview

Centera is a networked storage system specifically designed to store and provide fast, easy access to fixed content (information in its final form). It is the first solution to offer online availability with long-term retention and assured integrity for this fastest-growing category of information. CentraStar is the operating environment that runs the Centera cluster.

New Feature Summary

The following new features and functionality are available within CentraStar v4.2.2:

  • The introduction and support for GEN 4LP 12TB nodes utilizing 3TB drives.
  • Improved security through an updated SUSE Linux platform (SLES 11 SP2) and Java updates.
  • Consolidation of the CentraStar and Centera Virtual Archive software into one package for improved installation and maintenance.

Categories: Archive, EMC

VMware vDP and Avamar – Blown out of Proportion

October 8, 2012

The dust has settled a bit since the announcement of vSphere 5.1, including the new vSphere Data Protection (vDP) functionality based on EMC Avamar code. Immediately following the announcement there were:

  • EMC folks reporting this proves Avamar is the greatest thing since sliced bread because VMware chose Avamar Virtual Edition (AVE) as the basis for vDP
  • VMware folks stating vDP only leverages Avamar technology – it is a new product co-developed by VMware and EMC rather than AVE with a new GUI.
  • Critics/Competitors saying they are two completely different products and this announcement doesn’t mean anything or this announcement means the world will be running Hyper-V in 12-18 months as EMC takes over VMware and fails miserably.

What’s my opinion? Being a middle-of-the-road guy, naturally I think both the far left and the far right are blowing things out of proportion, and that VMware employees were generally the most accurate in their assessments.

We can hold these things to be self-evident:

  • vDP is a virtual appliance.  AVE is a virtual appliance.   One would find it highly unlikely that VMware would completely re-write the virtual appliance used for vDP, but we don’t know for sure.
  • The vDP GUI is a heck of a lot simpler to manage for the average SMB shop than AVE.  EMC needs to learn a lesson here and quickly – not just for SMB customers but also Enterprise customers running full-blown Avamar. 
  • vDR was getting a little bit better, but a scan of the VMware Community Forums quickly showed it was a poor product.  Even the smallest of SMB shops did not like it and usually ended up going the Veeam route after struggling to get vDR working.
  • Avamar does have best-in-class de-duplication algorithms, so it’s not hard to accept the argument that VMware evaluated different de-dupe technologies and picked Avamar’s to be the nuts and bolts under vDP.
  • I wouldn’t try to outsmart Joe Tucci.  We might see some pushing of the envelope with regards to the EMC-VMware relationship, but he’s not going to screw this thing up. 

Questions in my mind…

  • AVE was very performance-hungry. In fact, before install it required a disk-intensive benchmark test to be run for 24-48 hours. If certain specs were not met, EMC would not support the AVE configuration. This is why EMC almost always sells Avamar as a HW/SW appliance. In my mind, the typical vDP user is probably going to use some very low-cost storage as the backup repository. I wonder how this product is going to perform unless some significant performance enhancements were made to vDP relative to AVE.
  • Even the smallest of SMBs typically want their backups stored off-site, and vDP doesn’t offer any replication capability, nor does it offer any sort of tape-out mechanism. Is this really a practical solution for anybody nowadays?
  • Is there an upgrade path from vDP to full Avamar? I’ve seen EMC employees post in their blogs that there is a clear upgrade path if you outgrow vDP; every other post I’ve seen says there is no upgrade path. I’ve not been able to find any official documentation about it. Which is it, and is there an expensive PS engagement involved?

All in all, the providers of SMB-oriented VMware backup solutions such as Veeam don’t have much to be worried about yet.    It’s a strange world of “coopetition” that we live in today.   EMC and VMware cooperating on vDP.  VMware partnering with all storage vendors, yet being majority owned by EMC.    EMC partnering closely with Microsoft and beefing up Hyper-V support in all their products.   All storage vendors partnering closely with Oracle, but Oracle getting into the storage business.   Cisco partnering with NetApp on FlexPod and also with VCE on vBlock.  EMC pushing Cisco servers to their clients but also working with Lenovo for some server OEM business.      The list goes on and all indications are this is the new reality we will be living with for some time.  

What would I do if I were Veeam or another provider of SMB backup for virtual machines? Keep innovating like crazy, as Veeam has done. It’s no different than what VMware needs to keep doing to ensure they stay ahead of Microsoft. Might I suggest, for Veeam specifically: amp up the “coopetition” and build DD BOOST support into your product. Data Domain is the best-in-class target de-dupe appliance with the most market share. Unfortunately, the way Veeam and DD work together today is kludgey at best. Although Veeam can write to NFS storage, it does not work well with an NFS connection directly to the DD appliance; rather, it is recommended to set up an intermediary Linux server to re-export the NFS export from the DD box. A combination of Veeam with DD BOOST and something like a DD160 would be a home run for the average SMB shop and would crush vDP as a solution any day of the week. I have heard that Quest vRanger recently built support for DD BOOST into their product, and it will be interesting to see if that remains now that Quest has been purchased by Dell.

A look at how Inyo (VNXOE 32) optimizes a VNX configuration

October 2, 2012

I previously discussed EMC’s new release of code for VNX here. As I catch up on writing blog articles following my wedding last month, I thought I’d highlight one real customer situation I worked on in August, where the improvements saved a considerable amount of money, increased the usable capacity percentage, and offered greater IOPS capability. This is a modification of an email I sent to the customer explaining the changes I made to the configuration.

The previous config consisted of a VNX5300 with 37 SAS drives and 31 NL-SAS drives.   The new config consisted of 37 SAS drives and 33 NL-SAS drives.

In short:

  • The previous config required the storage pool to use RAID6 for all drives, which is needed for the high-capacity 7200RPM drives but is overkill for 15K drives and reduces usable capacity on them. The new config is able to support mixed RAID types in the same FAST storage pool.

  • EMC has now blessed new RAID layouts as best practice. Underneath the storage pool covers, the 15K drives use an 8+1 RAID5 protection scheme instead of 4+1, and the 7200RPM drives use 14+2 RAID6 instead of 8+2 (note: the default previously was 6+2, but there was a workaround documented by penguinpunk here to use 8+2, which I’ve confirmed with EMC still works). It is still presented as a single storage pool.

  • The previous config offered 110TB raw and 76TB usable, or 69% usable. The new config is 100TB raw and 76TB usable, or 76% usable.
    • The previous config offered 12TB usable on the 15K tier and 64TB usable on the 7200RPM tier. The new config is 13TB usable on the 15K tier and 63TB on the 7200RPM tier.

  • The new config offers 6,300 IOPS from the 15K tier and 2,880 from the 7200RPM tier, for a total of 9,180 IOPS from disk, plus another 7,000 IOPS from FAST Cache. The previous config offered 6,480 IOPS from 15K and 2,700 IOPS from 7200RPM, for the same 9,180 IOPS total.
    • Though the disk total is the same, the new config will deliver more effective IOPS because fewer IOPS are consumed by RAID6 parity calculations on the 15K drives. (A back-of-the-envelope sketch of this math follows the list.)

  • The storage pools will now automatically load-balance within a tier if there is a hot spot within an 8+1 or 14+2 RAID group. In other words, if one of the 8+1 R5 groups in the storage pool is running hot, it will move slices of data as needed to a less busy 8+1 15K group of disks.

  • If you add drives, the VNX will now automatically re-balance all existing data across the new drives, increasing performance for existing data in addition to adding new capacity.
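
For the curious, here is that back-of-the-envelope sketch. The per-drive IOPS figures and spindle counts below are my own rule-of-thumb assumptions, not output from EMC’s sizing tools, so treat the results as approximate:

```python
# Rough math behind the capacity and IOPS deltas above. Per-drive IOPS
# figures and spindle counts are rule-of-thumb assumptions.

def usable_fraction(data_drives, parity_drives):
    """Fraction of raw capacity left after RAID parity overhead."""
    return data_drives / (data_drives + parity_drives)

print(f"RAID5 8+1:  {usable_fraction(8, 1):.1%} usable")   # ~88.9%
print(f"RAID6 14+2: {usable_fraction(14, 2):.1%} usable")  # 87.5%
print(f"RAID6 8+2:  {usable_fraction(8, 2):.1%} usable")   # 80.0% (old layout)

# IOPS rule of thumb: ~180 IOPS per 15K drive, ~90 per 7200RPM drive.
# Spindle counts of 35 and 32 happen to reproduce the figures above.
print(f"15K tier:     {35 * 180:,} IOPS")   # 6,300
print(f"7200RPM tier: {32 * 90:,} IOPS")    # 2,880
```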

There are still some things I’d like to see improved in terms of how pools work on VNX, which I have a feeling won’t come around until VNX “2”. Regardless, there’s lots of great stuff in this code release that gives the VNX a considerable boost in its market relevance.

Categories: EMC