Archive

Archive for the ‘Backup’ Category

VMware vDP and Avamar – Blown out of Proportion

October 8, 2012 2 comments

The dust has settled a bit since the announcement of vSphere 5.1, including the new VMware vSphere Data Protection (vDP) functionality based on EMC Avamar code.  Immediately following the announcement there were:

  • EMC folks reporting this proves Avamar is the greatest thing since sliced bread because VMware chose Avamar Virtual Edition (AVE) as the basis for vDP
  • VMware folks stating vDP only leverages Avamar technology – it is a new product co-developed by VMware and EMC rather than AVE with a new GUI.
  • Critics/competitors saying either that the two are completely different products and this announcement doesn’t mean anything, or that the world will be running Hyper-V in 12-18 months as EMC takes over VMware and fails miserably.

What’s my opinion?  Being a middle-of-the-road guy, naturally I think both the far left and right are blowing things out of proportion and VMware employees were generally the most accurate in their assessments.    

We can hold these things to be self-evident:

  • vDP is a virtual appliance.  AVE is a virtual appliance.   One would find it highly unlikely that VMware would completely re-write the virtual appliance used for vDP, but we don’t know for sure.
  • The vDP GUI is a heck of a lot simpler to manage for the average SMB shop than AVE.  EMC needs to learn a lesson here and quickly – not just for SMB customers but also Enterprise customers running full-blown Avamar. 
  • vDR was getting a little bit better, but a scan of the VMware Community Forums quickly showed it was a poor product.  Even the smallest of SMB shops did not like it and usually ended up going the Veeam route after struggling to get vDR working.
  • Avamar does have best-in-class de-duplication algorithms, so it’s not hard to accept the argument that VMware evaluated different de-dupe technologies and picked Avamar’s to be the nuts and bolts under vDP.
  • I wouldn’t try to outsmart Joe Tucci.  We might see some pushing of the envelope with regards to the EMC-VMware relationship, but he’s not going to screw this thing up. 

 

Questions in my mind…

  • AVE was very performance hungry.  In fact, before install it required a very disk-intensive benchmark test to be run for 24-48 hours, and if certain specs were not met, EMC would not support the AVE configuration.  This is why EMC almost always sells Avamar as a HW/SW appliance.  In my mind, the typical vDP user is probably going to use some very low-cost storage as the backup repository, so I wonder how this product is going to perform unless significant performance enhancements were made to vDP relative to AVE.
  • Even the smallest of SMBs typically want their backups stored off-site, and vDP doesn’t offer any replication capability, nor does it offer any sort of tape-out mechanism.  Is this really a practical solution for anybody nowadays?
  • Is there an upgrade path from vDP to full Avamar?  I’ve seen EMC employees post in their blogs that there is a clear upgrade path if you outgrow vDP, while every other post I’ve seen says there is no upgrade path.  I’ve not been able to find any official documentation on the subject.  Which is it, and is there an expensive PS engagement involved?

 

All in all, the providers of SMB-oriented VMware backup solutions such as Veeam don’t have much to be worried about yet.    It’s a strange world of “coopetition” that we live in today.   EMC and VMware cooperating on vDP.  VMware partnering with all storage vendors, yet being majority owned by EMC.    EMC partnering closely with Microsoft and beefing up Hyper-V support in all their products.   All storage vendors partnering closely with Oracle, but Oracle getting into the storage business.   Cisco partnering with NetApp on FlexPod and also with VCE on vBlock.  EMC pushing Cisco servers to their clients but also working with Lenovo for some server OEM business.      The list goes on and all indications are this is the new reality we will be living with for some time.  

What would I do if I were Veeam or another provider of SMB backup for virtual machines?  Keep innovating like crazy, as Veeam has done.  It’s no different than what VMware needs to keep doing to ensure they stay ahead of Microsoft.  Might I suggest for Veeam specifically: amp up the “coopetition” and build DD BOOST support into your product.  DataDomain is the best-in-class target de-dupe appliance with the most market share.  Unfortunately, the way Veeam and DD work together today is kludgey at best.  Although Veeam can write to NFS storage, it does not work well with an NFS connection directly to the DD appliance.  Rather, it is recommended to set up an intermediary Linux server to re-export the NFS export from the DD box.  A combination of Veeam with DD BOOST and something like a DD160 for the average SMB shop would be a home run and crush vDP as a solution any day of the week.  I have heard that Quest vRanger recently built support for DD BOOST into their product, and it will be interesting to see if that remains now that Quest has been purchased by Dell.

 

 

A look at Block Compression and De-duplication with Veeam and EMC VNX

March 26, 2012 4 comments

Before I proceed any further, I want to state clearly that the testing I performed was not to pit one alternative vs. another.   Rather, I was curious to do some testing to see what type of Block LUN Compression rates I could get for backup data written to a CX4/VNX, including previously de-duped data.   At the same time, I had a need to do some quick testing in the lab comparing Veeam VSS vs. VMware Tools VSS snapshot quiescing.    Since Veeam does de-duplication of data, I ended up just using the backup data that Veeam wrote to disk for my Block LUN Compression tests.

Lab Environment

My lab consists of a VNX5300, a Veeam v6 server, and vSphere 5 running on Cisco UCS.   The VM’s I backed up with Veeam included a mix of app, file, and database VMs.  App/File constituted about 50% of the data and DB was the other 50%.   By no means will I declare this to be a scientific test, but these were fairly typical VM’s that you might find in a small customer environment and I didn’t modify the data sets in any way to try and enhance results.

Veeam VSS Provider Results

For those not aware, most VADP backup products will quiesce the VM by leveraging MS VSS.  Some backup applications provide their own VSS provider (including Veeam), and others like vDR rely on the VMware VSS provider that gets installed along with VMware Tools.  With Veeam, it’s possible to configure a job that quiesces the VM with or without their own provider.  My results showed the Veeam VSS provider was much faster than VMware’s native VSS.  On average Veeam created the backup snapshot in 3 seconds with their provider, and 20 seconds without it.  I also ran some continuous ping tests to the VMs while this process was occurring, and 1/3 of the time I noticed a dropped ping or two when the snapshot was being created with VMware’s VSS provider.  A dropped ping is not necessarily a huge issue in itself, but certainly the longer the quiescing and snapshot process takes, the bigger your window for a “hiccup” to occur, which may be noticed by the applications running on that server.
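
The ping monitoring was nothing fancy, and a quick script along the following lines is enough to spot drops during the snapshot window.  This is only a rough sketch of that kind of check, not the exact test I ran: the target hostname is a placeholder, and the ping flags assume a Linux host.

```python
import subprocess
import time
from datetime import datetime

TARGET = "testvm01.lab.local"  # placeholder VM hostname
INTERVAL = 1                   # seconds between pings

# Ping once per second and log any failures with a timestamp so they can be
# correlated with the snapshot creation window in the backup job log.
while True:
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "1", TARGET],  # Linux ping flags assumed
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    if result.returncode != 0:
        print(f"{datetime.now().isoformat()}  dropped ping to {TARGET}")
    time.sleep(INTERVAL)
```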

De-dupe and Compression Results

I ran two tests leveraging Veeam and a 200GB Thin LUN on the VNX5300.

Test 1

The settings used for this test were:

  • Veeam De-dupe = ON
  • Veeam In-line compression = ON
  • EMC Block LUN Compression = OFF

Backup Job      Size
Backup Job 1    6GB
Backup Job 2    1.2GB
Backup Job 3    12.3GB

 

The final space usage on the LUN was 42GB.   I then turned on Block LUN Compression and no additional savings were obtained, which was to be expected since the data had already been compressed.

Test 2

The settings used for this test were:

  • Veeam De-dupe = ON
  • Veeam In-line compression = OFF
  • EMC Block LUN Compression = ON

Backup Job      Size
Backup Job 1    13.6GB
Backup Job 2    3.4GB
Backup Job 3    51.3GB

 

The final space usage on the LUN was 135GB.  I then turned on VNX Block LUN Compression and the consumed space was reduced to 60GB – a 2.25:1 compression ratio, or a 56% space savings.  Not too shabby for compression.  More details on how EMC’s Block LUN Compression works are available at this link: http://www.emc.com/collateral/hardware/white-papers/h8045-data-compression-wp.pdf

In short, it looks at 64KB segments of data and tries to compress data within each segment. 
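
To illustrate the idea (this is a generic sketch using zlib, not EMC’s actual compression algorithm), the snippet below compresses a file in 64KB segments and reports the resulting ratio and savings.  Plugging in the LUN numbers from Test 2, 135GB shrinking to 60GB works out to 2.25:1, or about 56% space savings.

```python
import zlib

SEGMENT_SIZE = 64 * 1024  # 64KB segments, per the white paper's description

def compress_in_segments(path):
    """Compress a file segment-by-segment; return (original_bytes, compressed_bytes)."""
    original = compressed = 0
    with open(path, "rb") as f:
        while True:
            segment = f.read(SEGMENT_SIZE)
            if not segment:
                break
            original += len(segment)
            compressed += len(zlib.compress(segment))
    return original, compressed

def report(original, compressed):
    ratio = original / compressed
    savings = 100 * (1 - compressed / original)
    print(f"{ratio:.2f}:1 compression ratio, {savings:.0f}% space savings")

# Worked example using the Test 2 LUN numbers (in GB):
report(135, 60)   # -> 2.25:1 compression ratio, 56% space savings

# Example against an actual backup file (path is a placeholder):
# report(*compress_in_segments("/backups/job1.vbk"))
```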

Again, this post isn’t about comparing de-dupe or compression rates between Veeam’s software approach within the backup job and letting the storage hardware do the work.  There are going to be pros and cons to both methods, and for longer retentions (30 days and beyond) I tend to recommend a Purpose-Built Backup Appliance (PBBA) that does variable-length block de-duplication.  Instead, for these tests I was out to confirm:

a) Does Block LUN Compression work well for backup data (whether it has been de-duped or not)?  The conclusion here was that Block LUN Compression worked quite well.  I really didn’t know what to expect, so the results were a pleasant surprise.  In hindsight, it does make sense that the data could still compress fairly well: although de-dupe has eliminated redundant patterns of blocks, if the remaining post-dedupe blocks still contain data that is compressible, you should be able to squeeze more out of it.  This could come in handy for situations where B2D is leveraged and your backup software doesn’t offer compression, or for shorter retentions that don’t warrant a PBBA that does variable-length block de-duplication.

 

b) The latest version of Veeam is quite impressive; they’ve done some nice things to enhance the architecture so it can scale out the way larger enterprise backup software does.  The level of de-dupe and compression achieved within the software was impressive as well.  I can certainly understand why a large number of mid-market customers I speak with have little interest in using vDR for VM image backups, as Veeam is still light-years ahead.  If you’re looking at these two products and you have highly transactional systems in your environment, such as busy SQL or Exchange boxes, you’ll be better off with Veeam and its enhanced VSS capabilities.

Categories: Backup, De-dupe, EMC, Veeam, VMware

De-duplication just got cheaper

October 11, 2011 Leave a comment

A week ago I wrote that the high cost of data de-duplication and the lack of downward movement on prices were a potential concern for the market players. I have seen two recent cases (and heard first-hand of more) where customers were seriously considering B2D on 2 or 3TB NL-SAS drives without de-dupe, as they were finding it to be a cheaper acquisition.

Coincidentally, EMC responded this week by announcing a refresh of the low-end DD platforms, giving quite a lot more capacity with a much lower price point. The DD160 replaces the 140, 620 replaces 610, and 640 replaces 630. The cost of the existing DD670 was also reduced. I think this is a smart move by EMC and will make them much more competitive at the low-end, where customers would often choose an inferior technology simply because of price point and meeting “good enough” criteria.

A common scenario I found was that most small-to-medium-sized companies have at least 10TB of backup data. In the DD product line, this would put them above the previous low-end models and into the higher-end 670, making it out of reach for them financially.

What I’m seeing is that the new DD640 with expansion shelf capabilities nicely solves this problem. It can scale to 32TB of usable pre-deduped capacity. I just ran a sample config comparing a 670 with a 32TB shelf to a 640 with a 30TB shelf, and the 640 came in cheaper by tens of thousands of dollars. Kudos to EMC on this one. Now, if they could only do something about the Avamar cost of entry at the low end of the market…

Categories: Backup, De-dupe, EMC

The Cost of Data De-duplication

October 6, 2011 Leave a comment

Backup data-deduplication has been one of the hottest technologies of the last few years.   We’ve seen major players like EMC spend 2 billion to purchase DataDomain, and the industry in general is a 2 billion dollar market annually.   Why all the focus here?  Two reasons I believe:

1) These technologies solve an important business need to back up an ever-increasing volume of data (much of it redundant). 

2) Storage manufacturers have to find a way to maintain their growth rates to satisfy Wall St. and backup de-duplication is still one of the fastest areas of growth.

I was shocked to hear the other day when a very large client reported they were moving away from backup de-duplication and simply going to backup on SATA/NL-SAS 3TB drives.   What was the reasoning behind that decision?   The cost of SATA/NL-SAS drives is coming down faster than the cost of data de-duplication.  

That is certainly an interesting theory, and one that deserves some further consideration.  If there is a common challenge I’ve seen with customers, it’s dealing with the cost associated with next-generation backup, and prices have only come down minimally in the past 2 years.   Backup is still an oft-forgotten step-child within IT infrastructure and it’s hard to explain to corporate management why money is needed to fix backups.   Often, when I’m designing a storage and backup solution for a customer, the storage is no longer the most expensive piece of the solution.   Thanks to storage arrays being built on industry-standard x86 hardware, iSCSI taking away market share from Fibre Channel, and advancements in SAS making it the preferred back-end architecture instead of FC, the storage has become downright cheap and backup is the most costly part of the solution.   This issue affects the cost-conscious low-end of the market more than anywhere else, but nonetheless it can be a challenge across all segments.  

In the case of this particular customer who decided it was more economical to move away from de-dupe, there is certainly more to the story.  Namely, they were backing up a large amount of images and only seeing 3:1 de-dupe ratios.  However, I have recently seen another use case, for a customer who only needed to keep backups for one week, where it was more economical to do straight B2D on a fat SATA/NL-SAS solution.  By layering in some software that does compression, yielding 2:1 savings, it becomes even more economical.

From the manufacturer perspective, I’m sure it’s not easy to come up with pricing for de-dupe Purpose-Built Backup Appliances (PBBAs).  The box can’t be priced based on the actual amount of SATA/NL-SAS in it, as that would be too cheap given the amount of data you can truly store on it, but it also can’t be priced for the full logical capacity, as there would then be less incentive to use a de-dupe PBBA vs. straight disk.  Generally speaking, to make a de-dupe PBBA a good value, in my experience you need a data retention schedule that can yield at least 4:1 or 5:1 de-dupe.
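
As a rough back-of-the-napkin illustration of that break-even point (all of the prices and capacities below are made-up placeholders, not real quotes), the question is simply the de-dupe ratio at which the effective cost per protected TB of the appliance drops below straight disk:

```python
def effective_cost_per_tb(appliance_price, usable_tb, dedupe_ratio):
    """Cost per logical (protected) TB once the de-dupe ratio is factored in."""
    return appliance_price / (usable_tb * dedupe_ratio)

# Hypothetical numbers purely for illustration -- not real pricing.
straight_disk_cost_per_tb = 1000          # $/usable TB for a plain NL-SAS B2D target
pbba_price, pbba_usable_tb = 55000, 12    # a small de-dupe appliance

for ratio in (2, 3, 4, 5, 10):
    cost = effective_cost_per_tb(pbba_price, pbba_usable_tb, ratio)
    verdict = "cheaper than straight disk" if cost < straight_disk_cost_per_tb else "more expensive"
    print(f"{ratio}:1 de-dupe -> ${cost:,.0f} per protected TB ({verdict})")
```

With these placeholder numbers the crossover lands between 4:1 and 5:1, which lines up with the rule of thumb above.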

Even if you can’t obtain 5:1 or greater de-dupe, there are a few additional things worth considering that may still make a PBBA the right choice over straight disk.  First, a PBBA with de-dupe can still offer a lot of benefit for bandwidth-friendly replication to a remote site.  Second, a PBBA with de-dupe can offer significantly better environmental savings in terms of space, power, and cooling than straight disk.

Categories: Backup, De-dupe

Does Archiving to Centera or CAS Still Matter?

May 4, 2011 3 comments

Over the past 2 years, I’ve noticed a rather drastic reduction in the number of archiving conversations I have with customers. Email archiving still pops up, but most of the folks who need to do it are already doing it. File system archiving seems to be even less common these days, though it still pops up occasionally. There is certainly still a market in healthcare and financials, but even that seems less prevalent than it was at one time. Archiving did come up in a recent conversation, which got me thinking about this topic again and I thought it’d make a good blog post.

Without a doubt, the archive market seems to have shrunk. I’m reminded of my time at EMC a year and a half ago, when I had to go through some training about “recapturing the archive market”. From the early-to-mid 2000s until the late 2000s, the “archive first” story was the hottest thing going. EMC built an entire business on the Backup, Recovery, and Archive (BURA) story, which encompassed the idea of archiving your static and stale data first, to save money by shrinking the amount of data you need to back up and store on more expensive Tier 1 storage. In the process, they took the term Content Addressable Storage (CAS) mainstream, and the concept was soon copied by others.  The Centera platform was a product EMC purchased rather than developed in-house, but they created a successful business out of it nonetheless. The predecessor of Centera was a product called FilePool, and the founders of FilePool are now actively involved in another CAS startup called Caringo.

How CAS Works

The Content Address is a digital fingerprint of the content. Created mathematically from the content itself, the Content Address represents the object—change the binary representation of the object (e.g., edit the file in any way) and the Content Address changes. This feature guarantees authenticity—either the original document remains unchanged, or the content has been modified and a new Content Address is created.

Step 1 An object (file, BLOB) is created by a user or application.
Step 2 The application sends the object to CAS system for storing.
Step 3 CAS system calculates the object’s Content Address or “fingerprint,” a globally unique identifier.
Step 4 CAS system then sends the Content Address back to the application.
Step 5 The application stores the Content Address—not the object—for future reference. When an application wants to recall the object, it sends the Content Address to the CAS system, which retrieves the object. There is no filesystem or logical unit for the application to manage.
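
A minimal sketch of the content-addressing idea looks something like the following (using SHA-256 as the fingerprint purely for illustration; Centera’s actual hashing scheme and API are different):

```python
import hashlib

class SimpleCAS:
    """Toy content-addressable store: objects are keyed by a hash of their own content."""

    def __init__(self):
        self._objects = {}

    def store(self, data: bytes) -> str:
        # Steps 2-4: the "content address" is derived from the content itself.
        address = hashlib.sha256(data).hexdigest()
        self._objects[address] = data
        return address

    def retrieve(self, address: str) -> bytes:
        # Step 5: the application presents only the address; there is no
        # filesystem path or LUN for it to manage.
        return self._objects[address]

cas = SimpleCAS()
addr = cas.store(b"quarterly report v1")
assert cas.retrieve(addr) == b"quarterly report v1"

# Any change to the content yields a different address, which is what
# guarantees the authenticity of the original object.
assert cas.store(b"quarterly report v2") != addr
```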

CAS systems also had another compelling advantage back in the day: there was very little storage management involved. No RAID groups, LUNs, or Storage Groups to ever build or allocate. No traditional file system to ever manage. Per IDC, a full-time employee could effectively manage considerably more CAS storage than any other type (320TB vs. 25TB for NAS/SAN).

I have to admit, the CAS story was compelling. Thousands of customers signed up and bought hundreds of PBs of CAS from multiple vendors. The Fortune 150 company I worked for in the past implemented hundreds of TBs of Centera CAS as part of an archiving strategy. We archived file system, database, and email data to the system using a variety of ISV packages. Given that this market used to be so hot, I’ve often thought about the possible reasons for it cooling off, and why many people now choose to use a Unified storage platform for archiving rather than a purpose-built CAS system. Here are a few of the thoughts I’ve had so far (comments welcome and appreciated):

  1. CAS wasn’t as simple as claimed. Despite the claims of zero storage management, in reality I think several of the admin tasks that were eliminated by CAS were replaced by new management activities that were required for CAS. Designing archive processes with your internal business customers, evaluating various archiving software packages, configuring those software packages to work with your CAS system, and troubleshooting those software packages can be cumbersome and time-consuming.
  2. Storage management has gotten considerably easier in the last 5 years.  Most vendors have moved from RAID groups to pools, LUN/volume creation is handled via GUI instead of CLI, and the GUIs have been streamlined and made easy for the IT generalist to use.  Although I would say a CAS appliance can still be easier to manage at scale, the difference is not nearly as great as it was in 2005.
  3. NetApp created a great story with their one-size-fits-all approach when they built WORM functionality into their Unified storage platform, which was soon copied by EMC in the Celerra product and enhanced to include compliance.
  4. Many customers didn’t need the guaranteed content authenticity that CAS offers; they simply needed basic archiving. Before the NetApp and EMC Unified platforms offered this capability, Centera and other CAS platforms were the only choice for a dedicated archive storage box. Once NetApp, and then EMC, built archiving into the cost-effective mid-range Unified platform, my opinion is it cut Centera and other CAS systems off at the knees.
  5. CAS systems were not cheap, even if they could have a better TCO than Tier 1 SAN storage. It was primarily larger enterprises that were typically able to afford CAS, while the lower-end of the market quickly gravitated to a Unified box that had archive functionality built in.
  6. Backup windows were not always reduced by archiving. Certainly there were some cases where it could help, but also areas where it did not. As an example, many customers wanted to do file system archiving on file systems with millions and millions of files. When you archive, the data is copied to the archive and a stub is left in the original file system. Using traditional backup, these stubs still need to be backed up, and the backup application sees them as a file. This means even if the stub is only 1KB, it still causes the backup application to slow way down as part of the catalog indexing process. There are some workarounds like doing a volume-based backup, which backs up the file system as an image. However, there are caveats here as well. As an example, if you do file-system de-dupe on an EMC platform in conjunction with archiving, you can no longer do granular file-level recoveries from a volume-based backup. Only a full-destructive restore is allowed.
  7. Many customers didn’t really need to archive for compliance purposes, rather they simply wanted to save money by moving stale data from Tier 1 storage to Tier2/3 storage. This required adding in cost and complexity for a file migration appliance or ISV software package to perform the file movement between tiers, which ate away at the cost savings. Now that many storage arrays have auto-tiering functionality built-in, the system will automatically send less frequently accessed blocks of data to a lower tier of storage, completely transparent to the admin and end-user, with no file stubbing required.

To sum it up, what would I recommend to a customer today? CAS is still a very important storage product, and although it’s not a rapidly growing area, it still has a significant install base that will remain for some time. There still are some things that a CAS system can do that the Unified boxes cannot. Guaranteed content authenticity with an object-based storage model is certainly one of those, and probably the most important. If you require as strong a guarantee as you can possibly get that your archive data is safe, CAS is the way to go. As I alluded to before, this still has importance in the healthcare and financial verticals, though I see smaller institutions in those verticals often choose a Unified platform for cost-effectiveness. Outside of those verticals, if your archive storage needs are <100TB, I’m of the opinion that a Unified platform is most likely the way to go, keeping in mind every environment can be unique. There may also be exceptions for applications that offer CAS API integration through the XAM protocol. If you’re using one of those applications, then it may also make sense to investigate a true CAS platform.

Further reading on CAS:

http://en.wikipedia.org/wiki/Content-addressable_storage

Categories: Archive, Backup, EMC, NAS, NetApp

Real-world example of snaps vs. backup

April 4, 2011 Leave a comment

Following up from my last post comparing snapshots with replication to backup, one of my peers sent me the following info:

“You may have heard about the issues Google experienced with Gmail a while ago.  Data Loss.  No problem because they replicate all data to a second data center, right?   But sometimes you’re replicating bad data or a software bug, which is what Google seems to be saying here.  But they back up their data. ” 

http://gmailblog.blogspot.com/2011/02/gmail-back-soon-for-everyone.html

Pretty interesting stuff here, particularly that Gmail is backed up (kudos to Google).  It also raises a very real use case: the snapshots or data on disk don’t have to be corrupted; instead, the OS of the system storing the snapshots could have a bug that renders the snapshots invalid.

Typically, storage arrays that are replicating require that both sides be running the same code level.  Usually when doing an upgrade, you upgrade the remote site first, then upgrade the source side.   Running both sides at different code versions for an extended period of time isn’t an option as it causes an issue getting support from the manufacturer, but some code bugs related to the upgrade may not pop up for 30, 60, or even 90 days.

Categories: Backup, Uncategorized

When Snapshots with Replication Aren’t Enough: Benefits of a Purpose-built Backup Solution

March 24, 2011 Leave a comment

When I work with customers on a storage solution, I often end up talking about backup as well. Why is that? For starters, it’s important to remember that every GB of storage you add is another GB that will have to be backed up by your backup infrastructure.

When I worked at a large pharmaceutical company, this was a problem we often encountered when internal business units would purchase more storage. Six months later as they consumed the new storage, we’d have a disaster on our hands with not enough tapes or tape drives to get backups completed. Of course, you can ask the business for more money to buy additional backup infrastructure, but that is a difficult thing to do after the original purchase. One question I often hear from customers who are looking at a multiple vendors as part of a storage/backup project is, “Why not just use snapshots with my storage array so that I don’t need a backup solution anymore?”

There are different ways to achieve “backup.” Some customers choose to use a true purpose-built backup solution, and some have chosen to eliminate backup by using snapshots with replication. Regardless of which way you decide is the correct fit for your environment, there is a solution that can meet your requirements. The majority of customers I work with choose to implement a true purpose-built backup solution as a means to achieve data protection. The entire backup de-duplication market is experiencing high double-digit growth rates every year, or even triple-digit growth in some cases. This validates that purpose-built backup solutions leveraging de-duplication technology are very relevant in the marketplace today.

That being said, eliminating backups by using snaps with replication will work as a way to achieve a backup of your data. You will typically hear about this method of backup from vendors that only offer storage products rather than storage and backup solutions. However, there are caveats to using this method to eliminate backup, regardless of vendor. Like anything, there are pros and cons of different data protection methodologies. Most commonly, customers wanting to do snaps with replication as a form of backup do so because they see the potential for cost savings. Adding some incremental enhancements to a storage investment that allow it to be used as a backup solution can appear to be more cost-effective than purchasing two separate solutions for storage and backup.  However, I typically find that the majority of customers still choose to implement backup with a purpose-built solution and here are some of the top reasons why they do:

1) At least 85 percent of all restores are typical day-to-day granular file restores (compared to complete server or site recovery). With a LUN snapshot, a file restore on a server becomes more complex. For a simple file restore to an app server, you must take the snap and mount it up to a server, then drill down into the snap to recover the file and copy it back to the original location. Then, the LUN must be unmounted.  Some vendors have tools to help automate this, but it’s still not a trivial task. Snapshots are organized by snapshot date—not by files or directories in a server. If a user asks for a file to be restored, and they don’t know what server it lived on, how are you going to find the right snapshot? There is no equivalent of a backup catalog (some 3rd parties sell software to add catalog-like functions, but that increases cost).  Compare this to opening a backup software console, picking the server, picking the date to restore to, and selecting the file name (or searching for a file name). The file will then be restored back to the original server. This is a simpler task, plus it is very similar to the way that backup administrators do their job today, which means less re-training for staff.
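
To make the contrast concrete, a backup catalog is essentially just an index of files by server, path, and backup date. The toy sketch below (the field names and records are my own illustration, not any vendor’s actual schema) shows why “which server did that file live on?” becomes a simple lookup rather than a hunt through mounted snapshots:

```python
from datetime import date

# A toy backup catalog: one record per protected file per backup run.
catalog = [
    {"server": "fileserver01", "path": "/shares/finance/budget.xlsx", "backup_date": date(2011, 3, 20)},
    {"server": "fileserver01", "path": "/shares/finance/budget.xlsx", "backup_date": date(2011, 3, 21)},
    {"server": "appserver02",  "path": "/data/reports/budget.xlsx",   "backup_date": date(2011, 3, 21)},
]

def find_file(filename):
    """Search the catalog by file name alone -- no need to know the server up front."""
    return [r for r in catalog if r["path"].endswith(filename)]

# The restore workflow: search by name, pick the server and date, restore.
for hit in find_file("budget.xlsx"):
    print(hit["server"], hit["path"], hit["backup_date"])
```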

2) The “snaps with replication” methodology only covers servers that are on the SAN. Therefore, a customer may end up having to use different methods of backup for servers on the SAN vs. servers that continue to use internal storage. This makes administration more complex for a backup administrator.

3) There is a greater risk of data loss from not having a secondary copy of the data stored on a different medium. While most customers will never see this, there is always the potential for a file system or LUN to get corrupted in some way, shape, or form. If that happens, and the corruption gets replicated to the secondary side, all snapshots become worthless because snapshots contain pointers that are dependent upon the primary copy of the data. At that point, there will be data loss. A secondary copy of your data stored on a different medium, whether it be classic tape or a next-gen backup solution, will not be affected by this.  Solutions like Avamar and DataDomain also do data integrity checks, something that does not happen on any SAN/NAS or tape product. On a SAN/NAS or tape solution, you may not realize you have corruption until you try to restore and encounter a bad block.

4) Humans are prone to make mistakes, and there is a greater risk of data loss from human error using the “snaps with replication” methodology.  If there is an accidental misconfiguration of LUN replication, it is not as apparent to administrators as a failed backup job, so the problem may not be detected until it is too late and there is data loss. Another example might involve a newly created LUN that does not get added to a replication policy; tens of VMs could be affected because bits and pieces of those VMs happen to reside on that particular LUN, which is part of a VMFS datastore. While human mishaps can occur in the traditional backup world, they generally would only affect one server at a time rather than multiple servers.

5) You can achieve greater retention with a true backup solution than you can with keeping snapshots.  While you may only need to keep backups for 45 days today, if those requirements increase to one year or more in the future, the snapshot methodology becomes less attractive. The longer you keep snapshots, the more space they require, and the more overhead is required to keep track of all the delta changes in the snapshot. It is not best practice to keep snapshots for longer-term data protection (several months to 1yr+) regardless of vendor. This is best suited for a purpose-built backup solution.

Categories: Backup