Author name: Managecast Technologies

General Cloud Backup, Veeam

Migrating and Seeding Veeam Backups to ReFS Storage

Migrating to Veeam ReFS Volumes: Steps to Unlock Fast Cloning and Spaceless Fulls

To fully realize the benefits of Veeam's integration with Windows Server 2016 ReFS, all full and incremental backups must be created on the new ReFS volume. Simply moving existing backups over won't immediately enable fast cloning and spaceless fulls; additional steps are required. For more details on the benefits of using Veeam with ReFS, be sure to check out our related post.

Update: Make sure to use a 64K block size when formatting Veeam repository volumes to avoid issues with 4K block size and ReFS (a sample format command appears at the end of this post). Read this post for more information.

Migrating Existing Backups to ReFS Volumes

The first key point to remember is that only new full and incremental backups created on ReFS will benefit from fast cloning and spaceless fulls. After moving your data to the new ReFS volume, the performance and storage efficiency improvements won't take effect until both the most recent full backup and all of the incremental backups in the active chain have been created on the ReFS volume.

Dealing with Deduplicated Storage

If you're migrating from deduplicated storage, moving your backups to ReFS can rehydrate the data, which could significantly increase the size of your backup files and overwhelm storage. To mitigate this, verify before copying anything that the ReFS volume has enough free space to hold the fully rehydrated backup files.

Planning for Storage Needs

Remember, ReFS benefits won't apply until both the most recent full and all incrementals are created on the ReFS volume. This means you'll need storage for at least two full backups plus all incremental backups during the migration. We recommend scheduling a GFS (Grandfather-Father-Son) retention policy to create a full backup as soon as possible. This allows you to delete older full backups from the ReFS volume, freeing up space. Once the synthetic full and new incrementals have been created on the ReFS storage, you can delete the oldest archive points from ReFS, and all subsequent backups will benefit from the ReFS filesystem improvements.

Seeding Offsite Backup Copies to ReFS Volumes

Seeding a backup to ReFS can help reduce initial WAN utilization, as it avoids sending a full backup over the internet. However, even after seeding, all backups must be created on the ReFS volume to benefit from the new features. The following process has worked well for us when seeding backups to ReFS; you'll temporarily need storage for two full backups and two incremental restore points.

Steps to Seed Backups to ReFS

Forcing a GFS Synthetic Full Backup

Once the GFS synthetic full is created, you can delete the archived full (…_W.vbk) to free up storage. (You can keep it, but archived fulls won't benefit from ReFS spaceless fulls and will consume storage until deleted by retention.) Afterward, you can change the retention settings of the backup job as needed, and any new backups will benefit from the ReFS filesystem.

By following these steps, you'll ensure that your existing backup chains transition smoothly to ReFS while unlocking the powerful benefits of fast cloning and spaceless full backups.
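For reference, formatting a repository volume as ReFS with a 64K allocation unit can be done from an elevated command prompt. This is a minimal sketch; the drive letter and label are hypothetical, and format erases the volume, so run it only against an empty disk:

rem Format as ReFS with a 64K allocation unit (ReFS supports 4K and 64K)
format E: /FS:ReFS /A:64K /Q /V:VeeamRepo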

General Cloud Backup, Veeam

Veeam 9.5 Issues Seeding Backups

After upgrading to Veeam 9.5, we had a customer who needed to seed new backups to our cloud repository. We created a new backup copy job targeting a temporary seed repository on an external drive. Once the backup completed, the drive was shipped back to us; we imported the data into our cloud repository and rescanned it on the customer side. After mapping the job to the imported data, we ran the job. At this point it should have continued from the already backed-up data and started an incremental backup. Instead, it was running a full backup and creating duplicate entries for the VMs in the backup data.

Veeam 9.5 Update 1 had been released one week prior to this incident, and our policy is to wait at least 30 days before applying new releases. Reading through the fixes in the update, we could not verify that it would resolve our errors; however, Veeam lists the update as non-breaking, and after some confirmations with Veeam support we applied it. We then started the process of re-importing the data. Instead of removing all of the data and re-importing it from the seed drive, we were able to re-import just the seeded .vbm file, leave the already imported .vbk file in place, and rescan from the customer side (see the sketch below). Veeam showed one backup as 'updated' during the rescan. Once the update was applied and the backup re-imported, the backup copy job continued incrementally from the seed data as expected.
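For reference, re-importing just the metadata file amounted to overwriting the imported .vbm in the repository folder with the seeded copy and then rescanning the repository. A minimal sketch from a command prompt; the drive letters, folder layout, and job name are all hypothetical:

rem Replace only the backup metadata file; the imported .vbk stays in place
copy /Y "F:\Seed\BackupJobName\BackupJobName.vbm" "E:\CloudRepo\Tenant\BackupJobName\BackupJobName.vbm"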

Veeam

VEEAM 9.5 ReFS Integration

Veeam 9.5 and ReFS: Fast Cloning and Spaceless Full Technology

Veeam Backup & Replication 9.5 integrates with Windows Server 2016 ReFS, bringing two key benefits to synthetic and merge operations: fast cloning and spaceless full backups. Both features rely on ReFS block cloning, which allows Veeam to quickly copy data blocks within files, or even between files, without creating duplicate copies of the data. Instead, ReFS creates pointers to existing data blocks, significantly improving performance and storage efficiency.

Fast Cloning

Fast cloning with ReFS dramatically reduces the time and resources required for synthetic operations. Because new full backups only reference existing data blocks (using pointers) rather than duplicating the data, synthetic operations become much faster and less resource-intensive.

Spaceless Full Backups

Spaceless full backups are made possible through the same use of pointers, allowing new synthetic backups to take up significantly less storage space. Since the majority of data remains unchanged between backup copies, spaceless fulls only reference existing data blocks rather than creating new copies. This reduces the storage required for full backups to a fraction of what would otherwise be needed.

Storage Efficiency with ReFS

While spaceless full backups offer tremendous storage savings, it's important to note that global deduplication is not supported. ReFS spaceless fulls reduce storage usage across copies of the same full backup file, but they won't deduplicate across multiple backup files the way a deduplication appliance would. Still, the storage savings can be significant. For example, after migrating a customer with nearly 1TB of native backup size from an NTFS repository to a ReFS repository, the utilized storage dropped to less than half the native file size after one month of weekly and monthly GFS backup copies. As older GFS restore points are removed and replaced with ReFS spaceless fulls, storage utilization will continue to decrease.

Encryption with Spaceless Fulls

One major advantage of ReFS spaceless fulls is that encryption is fully supported. Unlike deduplication, which is defeated by encryption, spaceless fulls in ReFS allow for both encryption and storage efficiency. This means your backups can remain secure while still benefiting from the space-saving advantages of ReFS.

Adding ReFS Volumes as Veeam Repositories

To leverage the benefits of ReFS with Veeam, older repositories need to be attached to a Windows Server 2016 machine and formatted as ReFS. If you previously added a Windows Server 2016 ReFS volume as a repository, it will need to be re-added after upgrading to Veeam v9.5 for the new features to be recognized.

Important: Veeam's fast cloning and spaceless full technologies only support ReFS volumes created on Windows Server 2016. ReFS volumes formatted on Windows Server 2012 use an older version of ReFS and will not benefit from these features. Additionally, restore points created before the v9.5 upgrade won't see the new benefits. To fully utilize fast cloning and spaceless full backups, all full and incremental backups involved in synthetic operations must be created with Veeam v9.5 on a Windows Server 2016 ReFS repository. The Fast Clone tag will appear in the job activity logs once the feature is active, indicating the synthetic operation is using the optimized ReFS technology (a quick check that a volume is ReFS is shown at the end of this post).

Veeam v9.5 was recently released, and with it came a large number of improvements and added features, namely the seamless integration of Microsoft Server 2016's ReFS file system.

Key Recommendations

Update: Make sure to use a 64K block size when formatting Veeam repository volumes to avoid issues with 4K block size and ReFS. Read this post for more information.
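As a quick check before relying on fast cloning, you can confirm that the repository volume really is ReFS rather than NTFS. A minimal sketch from an elevated prompt; the drive letter is hypothetical:

rem Prints volume details; look for "File System Name : ReFS" in the output
fsutil fsinfo volumeinfo E: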

Veeam

Issues Running Backups or Rescanning Repository After Veeam Upgrade

Just recently, right after upgrading to Veeam 9.5, we ran into an error with one of our customers that appeared whenever backups started to run and whenever we tried to rescan the Veeam repository. The error messages were:

Warning: Failed to synchronize Backup Repository
Details: Failed to synchronize 2 backups
Warning: Failed to import backup path E:\PATH\TO\BACKUP\FILES\BackupJobName.vbm
Details: Incorrect file system item link received: BackupJobName

Based on the errors, it looked like there were issues with the database entries for the backup job mentioned in the error. As a troubleshooting step we tried removing the backup files from the GUI under Backup & Replication > Backups by right-clicking the backup and selecting 'Remove from Configuration.' However, this produced the same error in a popup dialog:

Incorrect file system item link received: BackupJobName

After opening a ticket with Veeam, they informed us that this is a known issue caused by unexpected backup entries in the VeeamBackup SQL database. Specifically, the affected backup jobs were listed in the database with a zero entry for the job ID or repository ID, so Veeam could not locate the backup files. Because the next steps change the Veeam SQL database, it's best to back it up first (a minimal T-SQL sketch appears further below). Here's a knowledge base article from Veeam that shows the suggested methods for backing up the database.

To determine whether there are any errant backup entries in the SQL database, run the following query:

SELECT TOP 1000 [id], [job_id], [job_name], [repository_id]
FROM [VeeamBackup].[dbo].[Backup.Model.Backups]

Under 'repository_id' you should see one or more backup jobs showing '00000000-0000-0000-0000-000000000000' or 'NULL' as the job or repository ID. Any entries with this issue will need to be removed from the database to resolve the error. After backing up the SQL database, run the following for each job that showed '00000000-0000-0000-0000-000000000000' as the repository_id, replacing REPLACE_WITH_ID with the job_id found by the previous query:

EXEC [dbo].[Backup.Model.DeleteBackupChildEntities] 'REPLACE_WITH_ID'
EXEC [dbo].[Backup.Model.DeleteBackup] 'REPLACE_WITH_ID'

After that, the issues with the local backup server were resolved. However, we still saw errors when connecting the backup server to our cloud connect repository for backup copies:

Warning: Failed to synchronize Backup Repository
Details: Failed to synchronize 2 backups
Warning: Failed to import backup path E:\PATH\TO\BACKUP\FILES\BackupJobName.vbm
Details: Incorrect file system item link received: BackupJobName

To resolve this we had to remove the entries for the problem jobs from the cloud connect server's database. If you use a cloud connect service provider, the provider will have to make these changes to their SQL database. We had two VMs giving the 'Incorrect file system item link received: JobName' error, so we had to remove any entries for those jobs from that SQL database as well.
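As before, we backed up the configuration database before changing anything. A minimal T-SQL sketch of a copy-only backup; the target path is hypothetical, and Veeam's knowledge base article covers the officially supported methods:

-- Copy-only backup so the regular SQL backup chain is not disturbed
BACKUP DATABASE [VeeamBackup]
TO DISK = N'C:\Temp\VeeamBackup.bak'
WITH COPY_ONLY, INIT;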
We ran the following query to get the job IDs of both jobs mentioned in the errors:

SELECT TOP 1000 [id], [job_id], [job_name]
FROM [VeeamBackup].[dbo].[Backup.Model.Backups]

Then we ran the same delete statements as before using the new job IDs:

EXEC [dbo].[Backup.Model.DeleteBackupChildEntities] 'REPLACE_WITH_ID'
EXEC [dbo].[Backup.Model.DeleteBackup] 'REPLACE_WITH_ID'

After those entries were deleted, we were able to rescan the repository. Lastly, once we rescanned our cloud repository and imported the existing backups, we started getting the following error message:

Warning: Failed to import backup path E:\PATH\TO\BACKUP\FILES\BackupJobName.vbm
Details: Path E:\PATH\TO\BACKUP\FILES\BackupJobName2016-11-21T220000.vib is absolute and cannot be added to other path

This error indicates that the backup chain isn't accurately resolving the location of the GFS restore points. To resolve it, we manually removed the imported backups from the cloud connect server by going to Backup & Replication > Backups, selecting the backup job, and choosing 'Remove from Configuration,' making sure to check 'Include archived full backups.' After the backups had been removed from the cloud connect repository, we were able to rescan the repository from the local backup server, and the backup files were imported again successfully.

Update: After deleting the affected rows using the commands above, you may get the following error message:

Unable to map tenant cache ID 'ID-GOES-HERE' to service provider side ID 'ID-GOES-HERE' because it is already mapped to ID 'ID-GOES-HERE'

If you see this error, a solution can be found in this Veeam KB article. Essentially, the data for the deleted rows still exists in the Veeam DB cache table and needs to be removed. To do this, run the following query on the VeeamBackup database:

DELETE FROM [cachedobjectsidmapping]

This clears the DB cache table. To repopulate it, rescan the service provider and the cloud repository.

Veeam

Veeam Backups with Long Term Retention

One of the powerful features of Veeam Backup & Replication is its ability to perform incremental backup jobs to local disk and then schedule jobs to copy those increments to another location for additional protection.

Backup Copy Jobs: Forever Forward Incremental

Veeam's backup copy jobs operate using a forever forward incremental method. After the first full backup is created, only incremental changes are copied moving forward. Once the incremental chain reaches the set limit of restore points, the oldest restore point is automatically merged into the full backup file after a new restore point is added. By default, Veeam keeps 7 incremental restore points, but this number can be adjusted. Managecast, for instance, increases this to 14 daily incrementals by default, allowing for two weeks of daily recovery points.

Long-Term Retention: GFS (Grandfather-Father-Son)

For longer retention, Veeam recommends using GFS (Grandfather-Father-Son) retention within backup copy jobs. GFS allows you to retain a set number of weekly, monthly, quarterly, and yearly full backups. With GFS, a backup copy job creates a new full backup file and archives it based on your retention policy. These GFS restore points are independent full backups, meaning they don't rely on the incremental chain. This is an advantage, as they won't be affected if the incremental chain is broken.

Storage Considerations for Long-Term Retention

While GFS provides robust retention capabilities, it can also quickly consume a lot of storage space. For example, if you retain 7 incremental restore points, 4 weekly backups, and 12 monthly backups, you'll need storage for 17 full backup files (the current full plus 4 weekly and 12 monthly GFS fulls) in addition to the 7 incrementals. A helpful resource to estimate storage needs is the restore point simulator.

Reducing Storage Usage with Deduplication

One way to cut down on storage requirements is to use deduplication on the target backup repository. Because GFS restore points are copies of the same backup files, they deduplicate efficiently. However, deduplicating the current full backup can significantly slow down the merge of the oldest incremental restore point. To avoid this, Managecast only deduplicates files that are older than 7 days. This ensures that GFS restore points are only deduplicated a week after being copied, leaving daily incrementals and the current full backup untouched by deduplication until then (an illustrative configuration sketch appears at the end of this post). With this approach, the repository stores daily incrementals, the current full backup file, and deduplicated GFS full restore points. This setup typically requires just over 2x the full backup size plus the size of the incrementals.

Important Consideration: Encryption and Deduplication

It's important to note that deduplication will not work on encrypted files. Encryption changes the file contents, making them unique and preventing deduplication from recognizing similarities across backup files. This presents a choice: encrypt your backups and give up the deduplication savings, or leave them unencrypted and let them deduplicate efficiently.

Summary

Veeam is an excellent product, but long-term retention requirements can quickly increase storage needs. At Managecast, we are continuously exploring new technologies to address these challenges. We're currently reviewing Veeam v9.5 alongside Windows 2016 and the ReFS file system to evaluate whether its new storage efficiencies (which can be combined with encryption) can help solve these storage issues. Stay tuned for more updates! Check out our post on Veeam 9.5 ReFS Integration for more details on long-term retention improvements!
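We haven't named the deduplication product above, but if the repository were running Windows Server Data Deduplication, the 7-day age policy could be expressed with the Deduplication PowerShell module. Treat this as an illustrative sketch under that assumption, not our exact configuration; the drive letter is hypothetical:

# Assumption: the Data Deduplication role is installed on the repository server.
# Files must be at least 7 days old before they are deduplicated, so the current
# full and recent incrementals stay hydrated while week-old GFS fulls deduplicate.
Enable-DedupVolume -Volume "E:" -UsageType Default
Set-DedupVolume -Volume "E:" -MinimumFileAgeDays 7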

General Cloud Backup

Considering a Low Cost Cloud Backup Solution?

Ouch, Carbonite is not having a good day. I see some people choose these low-cost cloud backup providers without realizing they are not the same as enterprise-class backup providers like Managecast. It would seem you get what you pay for.

Carbonite Forces Password Reset After Password Reuse Attack!

Interested in learning how Managecast can help your business with its cloud backup and disaster recovery solutions? Fill out this form for more information!

General Cloud Backup

Top 5 Cloud Backup Myths

As a cloud backup managed service provider, we often encounter common myths about cloud backup. Here, we aim to dispel some of the most pervasive, and incorrect, perceptions.

Myth 1: Cloud Backup Isn't Secure

One of the biggest concerns people have about cloud backup is the security and privacy of their data, especially with constant news of data breaches. Ironically, the right cloud backup solution can make your data much more secure than traditional backup methods. Often, when we hear someone express concerns about cloud security, it becomes clear their existing backup solution is far less secure. Traditional, customer-managed backup systems struggle to get data offsite quickly and securely, and they often don't follow best practices around media rotation, encryption, and data protection. Security is a top priority for cloud backup service providers, with stringent protocols to ensure data protection. For instance, we offer highly encrypted services (AES 256-bit, FIPS 140-2 certified encryption), with clients maintaining control of the encryption key. This means we do NOT store your encryption key, ensuring we don't have access to your data unless you request it.

Summary: Cloud backup providers use cutting-edge security measures to ensure your data is protected, far beyond traditional backup methods.

Myth 2: Restoring Data from the Cloud Takes Too Long

It's true that restoring massive amounts of data from the cloud can take time, but most enterprise cloud backup solutions also store data locally. In fact, 99% of recoveries are made from local storage at LAN speed, so restoring from the cloud is rarely needed. In the rare case of a site-wide disaster where local backups are compromised, most business-class cloud backup providers can ship your data on portable, fully encrypted media within 24-48 hours. Some providers can even spin up recovered servers in the cloud for fast recovery.

Summary: Restoring data from the cloud is rarely necessary, and cloud backup providers offer quick, alternative recovery methods.

Myth 3: Too Much Data to Back Up

Many people believe they have too much data to back up to the cloud. With modern cloud backup solutions, this is rarely an issue. Traditional backup systems often rely on repeated full backups, which are time-consuming and impractical for cloud backups. Cloud systems instead use an "Incremental Forever" approach, where only the initial full backup is performed once. After that, only incremental backups, transferring just the changed data at the block level, are made, significantly reducing the amount of data being backed up. The initial full backup is typically performed to mobile media (like a USB drive) and shipped, encrypted, to the data center, avoiding a large data transfer over the internet. As a rule of thumb, for every 1TB of data you need about 1 T-1 (1.55Mbps) of bandwidth, so a 20Mbps internet connection can support roughly a 12TB environment (20 / 1.55 ≈ 13).

Summary: Cloud backup solutions efficiently manage large amounts of data through incremental backups, making data volume rarely a concern.

Myth 4: Incremental Forever Means Hundreds of Restores

A common misconception about "Incremental Forever" backups is that restoring data will require restoring hundreds of small backups. This is far from the truth. Modern incremental backup software is designed to assemble your data automatically at any point in time, allowing you to restore to any moment with just a few clicks in a single operation.

Summary: Restoring with incremental backups is quick and straightforward: one operation restores data to any point in time.

Myth 5: Cloud Backup Is Too Expensive

Nothing is more costly than losing your business-critical data. Our solution is priced on the size of your backups, not the number of devices or servers being backed up. Plus, data is deduplicated and compressed, reducing overall storage costs. Older, archived data can also be stored at a lower cost, helping you align backup costs with the value of your data. In many cases, we can reduce costs by moving older data to lower-cost storage tiers. Additionally, you're getting expert management, monitoring, and support services from your cloud provider. Without a managed service, backups often go unmonitored, untested, and unrestored. With us, you receive full expert support and monitoring, ensuring your data is safe, at a much lower cost than doing it all in-house.

Summary: Cloud backup costs are justified when you consider the security, management, and peace of mind that come with a managed solution.

Veeam

Managecast is Now a Proud Member of the Veeam Cloud Connect Partner Program

Managecast is a featured partner on Veeam's list of service providers offering Veeam Cloud Connect services. Veeam Cloud Connect enables you to quickly and efficiently get your Veeam backups offsite, safe and secure, so you can always recover your data no matter what! Our services powered by Veeam allow for fast, secure offsite backup and recovery. Managecast is offering 30-day, no-obligation free trials, and enabling your existing Veeam installation could not be easier. Get your existing Veeam backups offsite using the familiar Veeam management console. Managecast can also provide updated Veeam software and licensing if required. Our cloud-based disaster recovery and offsite backup powered by Veeam can now be easily used to provide offsite disaster recovery capabilities for your organization. Contact us for a free trial.

General Cloud Backup

Tape is Not Dead, and Why I Finally Bought a Tape Library

Being the "Cloud Backup Guy," I've made a living off replacing tape. Tape is that legacy media, right? It's true that for most small to medium businesses, tape is hard to manage, expensive to rotate offsite, and has virtually been replaced by disk-to-disk (or disk-to-disk-to-cloud) technologies. However, I am finally willing to say tape definitely has its place. Given that I have been so anti-tape for many years, I thought it was worth sharing when I finally decided that tape had its place.

Don't get me wrong. I've had nearly 30 years of IT consulting experience. In the old days I used nothing but tape, as it was the only real option for data protection. I've also had my share of bad experiences with tape (mostly the old 4mm and 8mm drives and tapes). I hated the stuff and never wanted to rely on it. Like many seasoned IT professionals, I have my own nightmares to tell about tape backup. When I got into the cloud backup business, the passion I had for disliking tape really helped me convince folks not to use it.

For most SMBs, tape is indeed dead. However, as your data volume grows, and I am talking 50TB+ of data, you cannot ignore the efficiency and cost effectiveness of good old tape. Tape has also come a long, long way over the years. Gone are the days of 4mm and 8mm DAT tapes. LTO, the clear tape standard for the modern era, is now at LTO-7, with a native capacity of 6TB+ (15TB compressed) per tape cartridge. LTO offers a reliable and cost-effective way to store huge quantities of data at a much lower cost than disk storage technology.

What brought about this decision to finally embrace tape? It became apparent as we were gobbling up more and more disk space for cloud backups. Our growth rate has been significant, and keeping up with backup growth meant buying more and more disk. It's not just the cost of the disk itself, but the rack space, power, cooling, and other costs associated with hundreds of spinning disks, plus the cost of replicating the data to another data center with even more spinning disks! A significant segment of our backup storage was consumed by long-term archival storage of older data, which continued to grow rapidly as data aged. Our cloud backup solution allows tiering of the data so that older, less frequently used data can be pushed to longer-term archival storage. Once I weighed the cost of buying even more disk against the cost of a tape solution to store the ever-growing mountain of archive data, it became a no-brainer. Tape was the clear winner in that scenario.

Allow me to stress that I am still not a proponent of tape except for the largest of companies, or for anyone who requires long-term archival of a large amount of data. Tape still introduces manual labor to swap and store tapes, take them offsite, and so on. For near- and medium-term data, we still keep everything stored on disk for quick and easy access. For long-term archival data, however, we are using tape and love the stuff. The nice thing is that our customers still don't have to worry about using tape, as we manage everything for them.

Asigra

The Requested Operation Could Not be Completed Due to a File System Limitation (Asigra)

On trying to back up an Exchange database using Asigra, we were seeing the message "The requested operation could not be completed due to a file system limitation" after about four hours of backing up. This was an Exchange database backup (non-VSS), and it was copying the database to the DS-Client buffer. The Exchange database was over 1TB, and the DS-Client was running on Windows 8.1. The message:

The requested operation could not be completed due to a file system limitation (d:\buffer\buf\366\1\Microsoft Information Store\database1\database1.edb)

Solution: By default, NTFS uses small file record segments, which very large, heavily fragmented files (like a 1TB+ database copy in the buffer) can exhaust, producing this error. We had to reformat the buffer drive on the DS-Client with large file record segments enabled (the /L flag) using the command:

format d: /fs:ntfs /L /Q

Note that reformatting erases the drive, so move anything needed off the buffer first. After making this change, we no longer experienced the error message and backups completed successfully (a quick way to verify the setting is shown below).
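As a sanity check, you can verify whether a volume was formatted with large file record segments before and after the change. A minimal sketch; the drive letter is hypothetical:

rem In the output, check the bytes per file record segment value:
rem 1024 is the default; 4096 indicates the volume was formatted with /L
fsutil fsinfo ntfsinfo d: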
