Author name: Managecast Technologies

General Cloud Backup

Is Backup Tape Dead?

I just had someone contact me and ask my opinion on whether backup tape is dead. Maybe six years ago I would have enthusiastically said “Yes!”, and I did so many times. However, after spending the last six years dedicated to cloud backup and immersed in the backup industry, my views on tape have evolved. Instead of asking “Is tape dead?”, the proper question is “Has the use of tape changed?”. Tape is far from dead and very much alive, but its use has changed substantially over the past 10 to 15 years.

In the past, tape was the go-to medium for backups of all types. However, disk has certainly displaced a lot of tape when it comes to nearline backup storage of recently created data. Many modern backup environments are disk-to-disk, with backup data written to tape after some period of time for longer-term storage and archive. Disk storage costs significantly more than tape storage, but for near-term backup data the advantages of disk outweigh the cost penalty. For long-term archive of older data, where quick access is not needed, tape is the clear winner. [Read about aligning the cost of data protection vs the value of the data]

In my experience, many SMBs have shifted to a disk-to-disk-to-cloud solution with no tape, so one could argue that in the SMB space tape has largely died (or at least diminished greatly). However, at the enterprise level, or for organizations that require long-term retention of backup data, there is no better alternative for storing large amounts of data than tape, and this will probably remain the case for the next 10 years or beyond.

So, no, tape is not dead, but its use has changed. Interested in learning how Managecast can help your business with its cloud backup and disaster recovery solutions? Fill out this form for more information!

Asigra

Asigra Reporting “Cannot Allocate Memory” During Seed Import

We have DS-Systems running on Linux, and we connect the Windows seed backups to a Windows 7/8.1 machine and then use CIFS to mount the Windows share on Linux. The command we use on Linux to mount the Windows share is:

mount -t cifs //<ipaddress of windows machine>/<sharename> -o username=administrator,password=xxxxxx /mnt/seed

We were importing some large backup sets with millions of files and started noticing “cannot allocate memory” errors during the seed import process. When the import completed, it would indicate that not all files had been imported. At first we thought this was an Asigra issue, but after much troubleshooting we found it was an issue with the Windows machine we were using, related to using the CIFS protocol with Linux. A sample link describing the issue we were seeing is: http://linuxtecsun.blogspot.ca/2014/12/cifs-failed-to-allocate-memory.html

That link indicates to make the following changes on the Windows machine via regedit:

HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management\LargeSystemCache (set to 1)
HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters\Size (set to 3)

Alternatively, start a Command Prompt in admin mode and execute the following:

reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v "LargeSystemCache" /t REG_DWORD /d 1 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" /v "Size" /t REG_DWORD /d 3 /f

Do one of the following for the settings to take effect: restart Windows, or restart the Server service via services.msc, or from the Command Prompt run 'net stop lanmanserver' and then 'net start lanmanserver' (the service may automatically restart after stopping it).

After we made these changes, the memory errors were resolved!
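If you want to double-check that the new values took effect without rebooting, something along these lines from an elevated PowerShell prompt should work. This is just a convenience sketch; the registry paths are the same ones listed above, and restarting the Server service will briefly drop any active SMB sessions:

# Confirm the two registry values set above
Get-ItemProperty 'HKLM:\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management' -Name LargeSystemCache
Get-ItemProperty 'HKLM:\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters' -Name Size

# Restart the Server service instead of rebooting the whole machine
Restart-Service -Name LanmanServer -Force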

Asigra

Asigra Slow Seed Import

We recently discovered that Asigra DS-System v13.0.0.5 seems to have a serious problem with importing seed backups. This problem exposed itself as we attempted to import 5.5TB of seed data. We then performed additional testing by backing up a small Windows 2008 server. The seed backup was a little under 3GB. On v13.0.0.5 the seed import took 55 minutes. On the same infrastructure, the same server seed backup imported into a v12.2.1 DS-System in less than 3 minutes. In addition, we are also seeing the error “cannot allocate memory” during the seed import process even though we have tons of free RAM and disk space. We have notified Asigra and they are attempting to reproduce the problem.

Update 12/4/2015: In testing, and working with Asigra, we have found that if you create the seed backup without using the metadata encryption option, then the seed import speed is acceptable and imports quickly.

Update 12/8/2015: Asigra released DS-System v13.0.0.10 to address this issue. Testing shows it does indeed solve the speed issue. Thanks Asigra!

Asigra

Asigra BLM Archiving – Align the Value of Your Data With the Cost to Protect it

Years ago, we treated all data as being equal. Every piece of data originated on one type of storage and remained there until it was deleted. However, we now understand that not all data is created equal. Some data types are more important or accessed more frequently than others. Backup Lifecycle Management (BLM) is a concept that helps organizations manage data more efficiently by storing it on one system initially, then migrating it to lower-cost storage systems as it ages. This strategic data management approach can reduce storage costs while ensuring critical data remains accessible.

Understanding Asigra Backup Tiers: Data Classification and Storage Tiers

DS-System – Business-Critical Operational Data

Business-critical data such as files, databases, and email systems necessary for daily operations should reside in the DS-System tier. This tier is optimized for speed and accessibility, ensuring that your mission-critical information is always available.

BLM Archiver – Policy-Based Retention for Aging Data

Large file servers or repositories containing older data can be migrated to BLM Archiver. The primary advantage is cost savings, as this system automatically moves older data into lower-cost storage tiers based on pre-configured retention policies. At Managecast, we help analyze your data to identify the optimal protection methods suited to your recovery needs and budget. There are many strategies to protect your business’s data by aligning its value with the costs to protect it.

BLM Cloud Storage – Low-Cost, Long-Term Data Storage

BLM Cloud Storage is a cost-effective solution for rarely retrieved files, typically those older than one year. Large data sets, ranging from 250GB to multiple terabytes, can be moved to long-term storage to ensure compliance and maintain records while reducing storage expenses.

Storage Solutions for Rarely Accessed Data

Older data can be grouped into long-term cloud storage, making retrieval simple when necessary. Customers can choose between Amazon S3 Cloud Storage or Managecast Enterprise Cloud Storage for scalable, secure storage.
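To get a feel for how much of your data might qualify for the lower-cost tiers, a quick sizing pass can help. The following is a rough PowerShell sketch, not Asigra's policy engine, and the file server path is just a made-up example:

# Back-of-the-envelope sizing: how much file server data has not changed in over a
# year and is therefore a candidate for BLM / long-term cloud storage?
$cutoff = (Get-Date).AddYears(-1)
$files  = Get-ChildItem -Path '\\fileserver\data' -Recurse -File -ErrorAction SilentlyContinue
$old    = $files | Where-Object { $_.LastWriteTime -lt $cutoff }
$oldGB  = [math]::Round(($old | Measure-Object -Property Length -Sum).Sum / 1GB, 1)
"{0} files totaling {1} GB are over a year old - candidates for archive-tier storage" -f $old.Count, $oldGB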

Zerto

Zerto Backup Fails Unexpectedly

We had a recent issue with Zerto backups that took some time to remedy. A combination of issues exposed the problem, and here is a rundown of what happened.

We had a customer with about 2TB of VMs replicating via Zerto. We wanted to provide backup copies using the Zerto backup capability. Keep in mind Zerto is primarily a disaster recovery product and not a backup product (read more about that here: Zerto Backup Overview). The replication piece worked flawlessly, but we were trying to create longer-term backups of virtual machines using Zerto’s backup mechanism, which is different from Zerto replication. Zerto performs a backup by writing all of the VMs within a VPG to a disk target. It’s a full copy, not incremental, so it’s a large backup every time it runs, especially if the VPG holds a lot of VMs. We originally used a 1 Gigabit network to transfer this data, but quickly learned we needed to upgrade to 10 Gigabit to accommodate these frequent large transfers.

However, we found that most of the time the backup would randomly fail. The failure message was: “Backup Protection Group ‘VPG Name’. Failure. Failed: Either a user or the system aborted the job.”

In trying to resolve the issue, we opened several support cases with Zerto, upgraded from version 3.5 to v4, implemented 10 Gigabit, and put the backup repository directly on the Zerto Manager server. After opening several cases with Zerto, we finally had a Zerto support engineer thoroughly review the Zerto logs. They found there were frequent disconnection events. With this information we explored the site-to-site VPN configuration and found there were minor mismatches in the IPSEC configurations on each side of the VPN, which were causing very brief disconnections. These disconnections were causing the backup to fail. Lesson learned: it’s important to ensure the VPN endpoints are configured 100% the same. We use VMware vShield to establish the VPN connections, and vShield doesn’t provide a lot of flexibility to change VPN settings, so we had to change the customer’s VPN configuration to match the vShield configuration.

Even though we seemed to have solved the issue by fixing the VPN settings, we asked Zerto if there was any way to make sure the backup process ran even if there was a connection problem. They shared with us a tidbit of information that has enabled us to achieve 100% backup success: there is a tweak that can be implemented in the ZVM which will allow the backup to continue in the event of a disconnection, but the drawback is that the ZVMs will remain disconnected until the backup completes. As of now, there’s no way to both let the backup continue and let the ZVMs reconnect. So there is a drawback, but for this customer it was acceptable to risk a window of time in which replication would stop in order to make a good backup. In our case we made the backup on Sunday when RPO wasn’t as critical, and even then replication only halts if there is a disconnection between the sites, which became even more rare once we fixed the VPN configuration.
The tweak: On the recovery (target) ZVM, open the file C:\Program Files (x86)\Zerto\Zerto Virtual Replication\tweaks.txt (it may be on another drive, depending on the install). In that file, insert the following string (on a new line if the file is not empty):

t_skipClearBlockingLine = 1

Save and close the file, then restart the Zerto Virtual Manager and Zerto Virtual Backup Appliance services. Now, when you run a backup, either scheduled or manual, ZVM <-> ZVM disconnection events should not cause the backup to stop. I hope this helps someone else!
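For convenience, the same steps can be scripted. Here is a minimal PowerShell sketch to run on the recovery ZVM from an elevated prompt; the path and service display names are the ones described above, so verify them against your own install before relying on it:

# Append the tweak to tweaks.txt (adjust the path if Zerto is installed on another drive;
# assumes the existing file ends with a newline)
$tweaks = 'C:\Program Files (x86)\Zerto\Zerto Virtual Replication\tweaks.txt'
Add-Content -Path $tweaks -Value 't_skipClearBlockingLine = 1'

# Restart the Zerto services so the tweak is picked up
# (service display names assumed from the post; confirm in services.msc)
Restart-Service -DisplayName 'Zerto Virtual Manager'
Restart-Service -DisplayName 'Zerto Virtual Backup Appliance'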

Zerto

Zerto Backup Overview

Zerto is primarily a disaster recovery solution that relies on a relatively short-term journal that retains data for a maximum of 5 days (at great expense in disk storage). Many Zerto installations only have a 4-hour journal to minimize the storage needed for the journal. Zerto is a great disaster recovery solution, but not as great a backup solution. Many customers will augment Zerto with a backup solution for long-term retention of past data. Long-term retention is the ability to go back to previous versions of data, which is often needed for compliance reasons. Think about the ability to go back weeks, months, and even years to past versions of data. Even if not driven by compliance, the ability to go back in time to view past versions of data is very useful in situations such as:

Cryptolocker-type ransomware corrupts your data and the corruption is replicated to the DR site.
Legal discovery – for example, reviewing email systems as they were months or even years ago.
Inadvertent overwriting of critical data, such as a report that is updated quarterly. Clicking “Save” instead of “Save As” is a good example of how this can happen.
Unexpected deletion of data that takes time to recognize.

For reference and further clarification, check out the differences between disaster recovery, backup and business continuity.

Even though Zerto is primarily a disaster recovery product, it does have some backup functions. Zerto backup functionality involves making an entire copy of all of the VMs within a VPG. We sometimes break up VPGs with the goal of facilitating efficient backups; one big VPG results in one big backup, which can take many hours (or days) to complete. Since it’s an entire copy of the VPG, it can take a significant amount of time and storage space to store the copy. Each backup is a full backup, and currently no incremental/differential backup capability exists within Zerto. It is also advisable to write the backups to a location that supports de-duplication, such as Windows Server 2012. It still takes time to write the backup, but de-duplication will dramatically lower the required storage footprint for backing up Zerto VPGs. Without de-duplication on the backup storage, you will see a large amount of storage consumed by each full backup of the VPGs. Zerto supports the typical grandfather-father-son backup scheme with daily, weekly and monthly backups for 1 year. Zerto currently does not support backups past 1 year, so even with Zerto backups, long-term retention of data is not as good as with products designed to be backup products. However, Zerto really shines as a disaster recovery tool when you need quick access to the latest version of your servers, and its backup capabilities will get better with time. Interested in learning how Managecast can help your business with its cloud backup and disaster recovery solutions? Fill out this form for more information!
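Regarding the de-duplication recommendation above: enabling it on a Windows Server 2012 R2 (or later) repository volume looks roughly like the following. Treat this as a sketch run from an elevated PowerShell prompt; the D: drive is just an assumed example for the volume holding the Zerto backup target:

# Install the de-duplication feature and enable it on the repository volume
Install-WindowsFeature -Name FS-Data-Deduplication
Enable-DedupVolume -Volume 'D:' -UsageType Default

# Optionally dedupe backup files after only a day, then kick off an optimization
# job and check the space savings
Set-DedupVolume -Volume 'D:' -MinimumFileAgeDays 1
Start-DedupJob -Volume 'D:' -Type Optimization
Get-DedupStatus -Volume 'D:'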

General Cloud Backup

The Difference Between Disaster Recovery, Backup and Business Continuity

It’s common to see terms like backup and disaster recovery (DR) used interchangeably, and sometimes even incorrectly. We often encounter customers asking for a DR solution when what they really need is both backup and disaster recovery. Some in the industry refer to this combination as BDR (Backup & Disaster Recovery). So, what’s the difference between backup and disaster recovery? And why does it matter?

Disaster Recovery (DR)

Disaster recovery focuses on restoring critical IT functions quickly after a disaster. Disasters can range from something as small as a critical server failure to large-scale events like fires, floods, tornadoes, hurricanes, or even man-made incidents such as construction accidents, theft, sabotage, or chemical spills. These events can render your entire site unusable. The goal of DR is to bring critical IT services back online as quickly as possible. A comprehensive DR plan may involve much more than just data recovery; it might include alternate sites, spare hardware, and other contingency measures.

Backup

Backup, while it can play a role in disaster recovery, serves a broader purpose. Backup not only supports rapid recovery in the event of a disaster, but it also gives you access to the historical versions of your data. That’s a key difference between backup and disaster recovery. There are DR products designed to provide a fast recovery to the most recent copy of a server, but they aren’t built to retrieve data from two weeks, six months, or years ago. With backup, you can access older versions of files, which is crucial for recovering from issues like data loss or corruption that happened in the past but are only noticed in the present. For example, in the case of a ransomware attack, your most recent backups might include infected files. In this situation, you’d need to restore data from before the infection occurred. Backup also helps in more common scenarios, like accidentally overwriting an important file. If you saved over a monthly report in Word, a backup allows you to recover the original file. Additionally, some industries are required by law to keep copies of their older data. For instance, medical providers must retain patient records for several years. In summary, backup data can be used for DR, but it also includes past versions of data, allowing you to reproduce information as it existed at any given point in time.

Business Continuity

Business continuity refers to how an organization continues to perform essential functions despite a disaster. It goes beyond just restoring servers and data and often involves non-IT-related concerns as well. Every organization’s business needs are unique. For example, some companies rely heavily on phone services to take customer calls, while others depend on specialized equipment that isn’t easily or quickly replaceable. When creating a business continuity plan, critical questions need to be asked: Who are the essential employees? What functions do they perform? Where will they work if the office becomes unusable? Data recovery is just one piece of a much larger puzzle in business continuity.

Asigra

Asigra BLM Archiving – Align the Value of Your Data With the Cost to Protect it

Years ago, we treated all data as being equal. All data originated on one type of storage and stayed there until it was deleted. We now understand that not all data is created equal. Some types of data are more important than others, or accessed more frequently than others. Backup Lifecycle Management (BLM) is the concept of data being created on one storage system and then migrated to less expensive storage systems as it ages.

Asigra Backup Tiers

For example:

Data that is 2 minutes old is highly valued.
Data that is 2 months old may be of interest but is not as highly valued.
Data that is 2 years old may be needed for records but is not critical to the daily functioning of the company.

DS-System – Primary Storage for Business-Critical Operational Data

Business-critical operational data includes the files, databases, email systems, etc., that are needed for day-to-day operations. All data that is critical to business operations should be stored in the DS-System tier.

BLM Archiver – Policy-Based Retention of Older Data

Large file servers or other large repositories of potentially older data can be moved to BLM Archiver. Cost savings are the primary benefit, achieved by storing older data in the lower-cost tier and using automatic retention policies that move aged data into it. BLM Archiver can also be leveraged to store past generations of data while keeping the most recent version in the business-critical DS-System. Managecast will help analyze your data to determine a protection method that best suits your recovery requirements and budget. There are many options to protect business data by strategically identifying its value and aligning the cost to protect it.

BLM Cloud Storage – For Low-Cost, Rarely Retrieved Files

Typically for files older than 1 year, BLM Cloud Storage is a method to cost-effectively protect large data sets that are still needed for reference, compliance, and infrequent restores. Files older than a specified age can be selected to move to long-term cloud storage; they are generally grouped in large chunks from 250GB on up to multiple terabytes and then copied to long-term archive on disk. Customers can utilize Amazon S3 cloud storage or Managecast Enterprise Cloud Storage.

Interested in learning how Managecast can help your business with its cloud backup and disaster recovery solutions? Fill out this form for more information!

Veeam

Veeam v8 Certificate Error When Upgrading (Authentication Failed Because the Remote Party Has Closed the Stream)

We were setting up Veeam Cloud Connect infrastructure to offer Veeam Cloud Backup, a feature many of our customers had requested. The installation was going smoothly, and we initially used a self-signed certificate for testing. Later, we applied a certificate from a well-known Certificate Authority, which also worked without any issues. However, we soon received a notification from Veeam about an available update (v8 Update 3). Since it’s important to stay on the same version or higher as our clients, we proceeded with the update.

After updating to Update 3, clients were suddenly unable to connect, receiving the following error: “Error: Authentication failed because the remote party has closed the stream.” This error occurred immediately upon connection, and Veeam wouldn’t allow us to edit the Cloud repository because it no longer recognized the certificate.

Steps We Took:

Fortunately, we had taken snapshots of all Veeam components before updating (Backup and Replication server, Cloud Gateway, WAN Accelerator, and Repository). After reverting to the pre-update state, clients were able to connect again using either the self-signed certificate or the one from the Certificate Authority.

Troubleshooting with Veeam Support:

We then opened a support ticket with Veeam and provided logs from every component and the client side. After reviewing the logs, Veeam support had us install Update 2b and submit logs before and after the upgrade. Unfortunately, the issue remained. Finally, Veeam support provided a process that worked: switch back to the self-signed certificate, perform the upgrade, and then re-apply the Certificate Authority certificate. This solution worked! After that, we took another set of snapshots and upgraded to Update 3, and everything continued to function properly.

Key Takeaway:

If you encounter this issue after a Veeam update, try applying a self-signed certificate first, then upgrading, and finally applying the Certificate Authority certificate. This step saved us considerable time, especially since the error wasn’t documented in Veeam’s KB articles or certificate installation documentation.