Author name: Managecast Technologies

DRaaS

You Guys Can Just Restore My VMs to Your Environment for Disaster Recovery (DR), Right?

Quick Answer: Sure we can! But…

This question often comes up when a client wants offsite backup plus the ability to do offsite disaster recovery (DR) without paying for our DR as a Service (DRaaS). They want DR capabilities without the associated costs (unless a major event actually happens). The question is logical: since Managecast holds a copy of the client's data in the form of backups, it would be possible to restore those backups into our disaster recovery environment. However, there are some cautions that should be clearly understood.

The first is that without actually going through the process to test it, the end result is unknown. Maybe it all works great; maybe it does not. This is why we recommend DR testing. Using backups to perform DR is a manual, time-intensive process, so testing is expensive. The client is asking this question precisely because they are trying to minimize cost, so most who ask are not looking to pay to have DR tested. Understandable. Then a disaster hits, maybe a year or two down the road, and the boss is screaming to get the systems operational. Everyone is on edge and demanding the systems be up ASAP. That is the worst possible time to be testing DR for the first time, yet that is exactly what is happening: testing is being performed when time is of the essence. Bad combination!

Here is an example outcome. Say it takes 12 hours to restore the client's VMs. They power on, but the main application does not work properly. After a few hours of investigation we find that a dependency, maybe a DNS server, is not in the customer's backups (perhaps it was running on an onsite router), which is keeping the application from running. It takes another few hours to set up a DNS server and re-populate the entries the application needs. We are now at 20 hours. The main application runs, but a secondary application still does not. We spend more time figuring out why, and finally get it running. Then we find there is a web server that needs to be accessed from the public side, so we spend time on the firewall adding the necessary public IPs, NAT rules, and firewall exceptions. Most core applications are now running, and we have been working non-stop for 30 hours.

So, how are users going to access this environment? Let's try a client-to-site VPN. This results in very poor performance as users wait for their applications to download over a relatively slow link. Users are upset because it takes forever to do anything. Maybe client VPN is not the best solution, but it is the only thing we can deploy quickly. Maybe we can manually set up a terminal server to help, then instruct users on how to access it? Then we need to install the applications on the terminal server…

How is the boss going to react to a bill for 30+ hours at time and a half (because of course the disaster hit on a Friday and we had to work all weekend)? When the boss gets a significant bill for something that only halfway works for the users, will they gladly pay it?

Of course, some simple environments may work great on the first try, but more commonly, doing DR testing during an actual DR event is NOT recommended for the reasons above. It leads to disappointment, frustration, and unexpectedly high costs. Does it ever make sense to test DR during an actual DR event? Sure. It makes sense if you are willing to live with the risks and pay for it no matter the result, because the alternative is far worse. Just have expectations set properly!

It is for these reasons that we encourage clients to perform at least annual DR tests. These tests can be planned for a convenient time and typically do not require working after hours or on weekends. When significant issues are encountered, it is good to re-test until they are worked out and a higher confidence level is achieved. Testing also covers user access to the DR environment, to ensure it is feasible.

How Can We Help?

At Managecast, we fully manage and monitor your backups so that you can focus on more strategic initiatives. Interested in learning how Managecast can help your business with its cloud backup and disaster recovery solutions? Fill out this form for more information!

General Cloud Backup

The Neglected Art of Data Protection

By Nathan Golden, Cloud Backup Guy

Having been in business for 20 years, and in the IT industry for 30, it is clear to me that the art of data protection is sorely neglected in the majority of organizations, large and small. A report by US cyber-security firm Recorded Future published in May highlighted a spike in ransomware attacks targeting US cities. Victims include Lynn, Massachusetts; Riviera Beach, Florida; and Baltimore, Maryland, to name a few. The fact that these organizations were crippled by ransomware or forced to pay a ransom shows their data protection was lacking. Backup and recovery are the last line of defense against ransomware and other disasters.

Why does this happen so frequently when the repercussions are so profound? Here are a few simple reasons:

• IT admins are not focused on data protection. They have many other jobs to do, and backup is easily neglected.
• IT admins are not well trained or certified in data protection.
• Often the most junior employee manages the backups, because no one else wants to do it.
• Staff turnover often disrupts the monitoring and management of backups.
• Top management does not put emphasis on data protection or recovery testing.

I often ask management this question: if you had to choose between all of the money the company has in the bank or all of your data, and you could keep only one, which would you choose? Maybe in some cases it would make sense to keep the money, but in most cases data is what allows the business to stay running and keep making money for years to come. So the question boils down to this: would you rather keep the money you have now, or the money you will make over the next 5, 10, or 20+ years? Framed this way, management can see data as even more critical than the money in the bank; yet the safeguards around their data are far weaker than those around their money.

Getting top management to appreciate and demand comprehensive, professionally managed data protection is key. Without buy-in from management, appropriate funding levels will most likely never be met, nor will IT staff hear that data protection is a highly valued objective. Management can demand that certain controls are followed and reported on. IT staff need training, and the time to focus on data protection even when other high-priority IT initiatives invariably arise and distract admins from the daily grind of verifying and testing backups. Coverage also needs to be redundant, with multiple people trained and experienced, so that when someone takes a vacation or another job, the data protection system is not neglected.

Once management realizes more emphasis and resources are needed, the question becomes how to allocate those resources. Do you train your existing staff? Hire additional staff? Bring in outside assistance? Outsource the management of your backups to professionals? Consider the following:

• Backups will not increase revenues.
• Backups will not increase market share.
• Backups will not increase brand recognition.
• Backups will not improve customer satisfaction.

In short, backups are not "strategic" to the business, even though they are highly critical. Lack of backups (and of the ability to restore data) can certainly lead to the shrinking or even destruction of a company, but backups do little to help an organization grow. IT people should be focused on technology that adds value to the organization: dedicated to strategic tasks that make it more competitive, more profitable, and better at satisfying customers. So, does it make sense for IT staff to spend effort on data protection, or would this key function be better addressed by engaging outside resources with more focus and expertise?

Managecast's goal is to provide expert-level backup and recovery service that improves on current data protection methods while freeing existing IT staff to focus on being more strategic to their organization.

General Cloud Backup, Veeam

Veeam, Cloud Backup and the Insanity of Periodic Full Backups

Lately a lot of attention has been given to very low-cost storage providers like Wasabi and Backblaze, and to public cloud storage such as AWS, as targets for Veeam offsite backups. What is never mentioned is that these solutions require a periodic full backup to be performed over the internet. Think about that for a second: a cloud backup solution that requires you to perform full backups regularly? That is, quite frankly, INSANE! Organizations are creating ever more data, but the hours in the day remain fixed. Even 10 years ago, industry analysts like Forrester Research were advising clients to avoid backup products that require periodic full backups (see report here) and recommending "incremental-forever" backup technology instead.

Full backups across the internet might be just fine if you are a small business with a few hundred gigabytes of data, but even modest-sized companies will find it difficult to efficiently back up even a few terabytes of data on a regular basis. Veeam best practice says you should not let an incremental chain grow beyond 30 restore points. That could be 30 days of backups if you make one backup per day, so at minimum a full backup should be made once every 30 days; even then, the worst-case scenario is the loss of 29 days of backups if the incremental chain is corrupted. If you make a backup every 4 hours, a full backup should be performed every 5 days.

Full backups over the internet are problematic because they can take days to run and consume large amounts of bandwidth. Companies often pay to increase their internet bandwidth to compensate, which erodes the cost-effectiveness of the low-cost storage! As data volumes grow, the bandwidth must grow with them. Moreover, while the full backup is running, no other offsite backups are occurring: if a full takes 3 or 4 days, you are going 3 or 4 days without an offsite backup.

To solve this you need compute on the cloud side to enable "synthetic full backups", a process by which Veeam rebuilds the full backup on the cloud side without performing an actual full backup over the wire. In this configuration you get a true "incremental-forever" backup method that requires no periodic fulls after the first one. This can be achieved in AWS and Azure, where compute resources such as a Windows server can process the synthetic full, but then there is the added cost of the compute, and the client has another server to update, monitor, and manage, increasing costs. Low-cost storage providers such as Wasabi and Backblaze offer no compute resources, so there is no option for this on their platforms. You might save on storage costs while incurring other costs: increased bandwidth requirements and delayed offsite backups.

Veeam Cloud Connect service providers eliminate many of these issues, because theirs are specialized services focused on Veeam cloud backups. The service provider performs the synthetic full on their side, which avoids periodic full backups over the internet. A Veeam Cloud Connect service provider will also have Veeam expertise and generally better service-level agreements around backup and DR services.
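To put the bandwidth problem in concrete terms, here is a back-of-the-envelope PowerShell sketch of how long one periodic full backup takes over a typical internet link. The dataset size, link speed, and efficiency factor are hypothetical examples, not measurements from any particular environment:

# Rough estimate of the time needed to push a full backup over an internet link.
# All inputs are hypothetical; real throughput varies with compression,
# deduplication, and how much of the link the backup is allowed to use.
$dataTB     = 5      # size of the full backup, in terabytes
$linkMbps   = 100    # internet uplink, in megabits per second
$efficiency = 0.8    # assume only ~80% of the link is usable in practice

$bits    = $dataTB * 8e12                            # TB -> bits (10^12 bytes per TB)
$seconds = $bits / ($linkMbps * 1e6 * $efficiency)   # bits / effective bits-per-second
$days    = $seconds / 86400

"{0} TB over a {1} Mbps link: roughly {2:N1} days per full backup" -f $dataTB, $linkMbps, $days

At these numbers the full takes nearly six days, during which no new offsite restore points are created; an incremental-forever approach sends only the changed blocks instead.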
Summary

By leveraging a Veeam Cloud Connect service provider you get true incremental-forever backups, minimal bandwidth usage, and better service levels. Interested in learning how Managecast can help your business with its cloud backup and disaster recovery solutions? Fill out this form for more information!

Veeam

What is Veeam?

I’ve Heard of Veeam, But What Is It?

You may have heard of Veeam and have a vague idea of what it does. Veeam is a software company, founded in 2006, with a great growth story. In 2008, with 10 employees, it released Veeam Backup and Replication, a software application to back up and copy virtual machines on VMware's increasingly popular virtualization platform. As of 2018 it has some 2,000 employees and revenue of nearly a billion dollars. The name Veeam has become synonymous with easy, reliable VM backup and restore, and the software is widely deployed in companies of all sizes. Veeam has also extended its backup capabilities to Microsoft Hyper-V and to physical Windows and Linux servers, and into the cloud with Backup for Office 365. Thousands of companies around the world trust their data protection to the simplicity and affordability of Veeam software.

Backup and Replication – What's the Difference?

Backup and Replication, the flagship product, can be thought of as two functions: "backup" is related to, but separate from, "replication". Backup is what you might expect: Veeam offers backup capabilities, including the ability to see your data weeks, months, or years back in time. From a disaster recovery perspective, however, you need to be back up and running as quickly as possible, and you are most concerned with restoring the most recent data. Unfortunately, restoring from backups can take many hours or even days, depending on how quickly the data can be copied back. For this reason, using backup for disaster recovery is not ideal.

Replication solves the recovery-time problem by keeping your most recent data copied to another location, ready to be "turned on". For example, you can have a file server at location A that is fully copied to location B. If location A becomes unavailable, you just turn on the server at location B, and it is current as of the last replication. No restore process is needed, and recovery can usually be measured in minutes. Regular replications are applied to the target in a way that never requires a full restore; by avoiding a restore process, the replicated server is available quickly.

Other Notable Things About Veeam

• Its monitoring and management tools, Veeam Monitor and Veeam Reporter, were combined and renamed Veeam ONE, first released in 2010.
• It developed and released the free VM copy tool FastSCP in 2007, a precursor to the Backup and Replication software.
• In 2014, it started VeeamON, the annual conference for all things Veeam.
• In 2016, it made it into the Gartner Magic Quadrant for enterprise backup.
• 2016 also marked the year in which it delivered backup for Office 365.

Want to Know More?

Managecast has been in business for 19 years and has been a Veeam partner for more than 10 of them. We welcome the opportunity to speak with you about Veeam and what it can do for you. We can set you up with free trial licenses, including Veeam Cloud Connect for easy offsite backups. Feel free to call us at 513-735-6868, Option 2. Or visit us here.
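For readers who want to see the backup/replication split hands-on, here is a minimal sketch using Veeam's PowerShell interface. The module name and job name shown are assumptions (recent versions ship the Veeam.Backup.PowerShell module; older versions used a PSSnapin), so treat it as an illustration rather than copy-paste-ready code:

# Minimal sketch: listing backup vs. replication jobs on a Veeam server.
# Assumes Veeam Backup & Replication is installed locally; "FileServer-Replica"
# is a hypothetical job name.
Import-Module Veeam.Backup.PowerShell

# Get-VBRJob returns both kinds of job; the JobType property tells them apart
Get-VBRJob | Select-Object Name, JobType | Sort-Object JobType | Format-Table -AutoSize

# Run a replication job on demand, e.g. ahead of planned maintenance,
# so the replica at "location B" is as current as possible
Start-VBRJob -Job (Get-VBRJob -Name "FileServer-Replica")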

DRaaS, General Cloud Backup

Backup vs. Disaster Recovery (DR)

Several years ago, your main (and usually only) option for disaster recovery was to take backup media and perform a restore onto new hardware, copying a large amount of data from backup media back to a server. We call this "backup recovery" or "standard recovery". It is the process most people are used to, and it was the standard method for decades. But as data volumes have expanded exponentially and today's "always on" culture has taken hold, the old means of recovery are no longer adequate for many businesses.

Disaster Recovery

Unfortunately, it is sometimes at the worst time, during an actual disaster event, that a business fully realizes its current backup solution no longer meets its disaster recovery requirements. When management realizes it will take 48 hours just to restore (copy) the data, the Recovery Time Objective (RTO) is suddenly appreciated! Many times after such events, companies update their RTO policies and implement changes. Obviously it would be better to consider this situation and enact changes before the disaster.

For organizations that need quicker recovery, the option is to implement some form of "replication", in which all of the production data is replicated to another system for DR purposes. The replication could be local and/or offsite to a service provider. The key to replication is that you keep a copy of your entire server in another place, so that if your primary server is compromised in some way, you can "turn on" the DR server and be back up and running within minutes, current as of the last replication. This typically allows the organization to resume operations much more quickly: the data did not need to be restored, but was already in a state to be used almost immediately. Replication is really the only way to minimize recovery time, because it eliminates the restore process.

Backup

On the other hand, backup still provides a valuable service that is not well addressed by disaster recovery (replication). In a disaster you are typically interested in the most recent version of data, not data from months or years ago. Organizations with retention policies may want months or even years of past versions of backup data. DR does not address this need; backup does, letting you retrieve data as it existed in the past. For instance, a user may delete a file but not notice for months. Backup, not DR, would be used to recover this older data, since DR is focused on the most recent version of everything. Typically no one wants to do a disaster recovery with 6-month-old data! Backup is also usually much easier to use for granular restores of a few files or directories, whereas DR takes an all-or-nothing approach: in a DR situation you are typically bringing up entire servers. For these reasons, backup is utilized far more often than DR.

So Which One Do You Need?

There is often a need for both backup and DR to cover both sets of requirements. Veeam Backup and Replication, for instance, gives you the choice of backup or replication (DR), or both. If you cannot afford both, you will need to weigh the pros and cons of each and decide which best meets your requirements. For example, if you need quick recovery but do not need long-term retention, it may be possible to use DR as a form of backup; the sketch below illustrates the recovery-time math behind that trade-off.
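To make the recovery-time gap concrete, here is a rough sketch comparing restore-based recovery with replica failover. All figures are hypothetical illustrations; real restore speed depends on storage, network, and backup format:

# Back-of-the-envelope RTO comparison: restoring from backup vs. powering on a replica.
$dataTB          = 10     # data that must be back online, in terabytes
$restoreMBps     = 150    # hypothetical sustained restore throughput, megabytes/second
$failoverMinutes = 15     # hypothetical time to power on and verify replica VMs

# TB -> MB (10^6 MB per TB), divide by throughput for seconds, then convert to hours
$restoreHours = ($dataTB * 1e6) / $restoreMBps / 3600

"Restore-based recovery: roughly {0:N0} hours of copying before anything runs" -f $restoreHours
"Replica failover:       roughly {0} minutes, no restore required" -f $failoverMinutes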
Conversely, if you do not require quick recovery but do need long-term retention, backup-only may be best. And if you need both quick recovery and long-term retention, you need both backup AND disaster recovery. The table below contrasts some of the differences between backup and disaster recovery:

                 Backup                              Disaster Recovery (DR)
Recovery time    Hours to days (restore required)    Minutes (replica powered on)
Retention        Months to years of versions         Most recent data only
Granularity      Individual files and directories    Entire servers (all or nothing)
Typical use      Everyday restores, compliance       Major outages, fast failover

Interested in learning how Managecast can help your business with its cloud backup and disaster recovery solutions? Fill out this form for more information!

DRaaS, General Cloud Backup, Veeam, Zerto

What is Disaster Recovery as a Service (DRaaS)?

Until recently, implementing quick failover to a remote site came with significant costs, complexity, and time commitments, making it accessible only to large companies with deep pockets. Advancements in technology and the internet, however, have made disaster recovery (DR) affordable for businesses of all sizes. Today's consumers expect uninterrupted service, driving companies to seek out DR failover solutions to avoid business disruptions.

What is DRaaS?

Hosting a disaster recovery site on your own can be cost-prohibitive, in both money and resources. The costs of maintaining a remote site, managing servers, applications, backups, and replication, and performing regular tests add up quickly. This is where Disaster Recovery as a Service (DRaaS) comes in. DRaaS lets organizations leverage service providers like Managecast to protect virtual servers in a cloud environment. The service provider supplies the infrastructure, software, and management needed for the DR solution, helping businesses reduce costs and complexity.

Failover: How It Works

With DRaaS, organizations replicate their data either continuously or periodically, depending on their Recovery Point Objective (RPO), to the service provider. In the event of a disaster, businesses can fail over all or part of their environment by powering on their virtual machines (VMs) in the service provider's cloud infrastructure, ensuring continued operations. Users then reach the failed-over replicas through access methods defined ahead of time, such as VPN connections or published public IPs. Once local infrastructure is restored, fail-back is possible: any changes made in the DR environment are replicated back to the production environment.

DR Testing

Regular DR testing is essential to ensure the failover process runs smoothly during an actual disaster. Most DRaaS providers allow businesses to perform their own tests. Testing can be as simple as logging into the service provider's web console, powering on a VM, and verifying application or service functionality.

Cost Structure

Pricing models for DRaaS vary among providers, but a common model is usage-based billing, charging businesses only for the resources they use during a failover event.

Management and Support

In addition to the infrastructure, many DRaaS providers offer extra management services, such as monitoring replication health, assisting with regular DR testing, and managing the failover process itself. While disaster recovery may seem like just another cost to most organizations, for DRaaS providers backup and replication are the primary focus. By using a service provider for DRaaS, businesses gain access to expert management and support for their DR needs.

In Conclusion

DRaaS has transformed disaster recovery by making it accessible and affordable for businesses of all sizes. By utilizing a service provider, companies can safeguard their operations without the massive expenses and complexity that traditionally came with DR solutions.
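As a closing illustration of the RPO idea above, the sketch below checks how stale the newest replica is against an agreed RPO target. The timestamp and target are hypothetical; in practice the last-successful-replication time would come from your replication software's job history:

# Minimal sketch: comparing achieved RPO (age of the newest replica) to the target.
$rpoTarget   = New-TimeSpan -Hours 4          # hypothetical agreed RPO
$lastReplica = Get-Date "2019-06-01 03:00"    # hypothetical last successful replication
$dataAtRisk  = (Get-Date) - $lastReplica      # work that would be lost right now

if ($dataAtRisk -gt $rpoTarget) {
    Write-Warning ("RPO breached: newest replica is {0:N1} hours old (target: {1} hours)" -f $dataAtRisk.TotalHours, $rpoTarget.TotalHours)
} else {
    "Within RPO: at most {0:N1} hours of data loss if a disaster hit now" -f $dataAtRisk.TotalHours
}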

General Cloud Backup, Veeam

Avoid Memory Issues with Veeam and ReFS

There have been reports of issues after incorporating ReFS repositories with Veeam Backup and Replication. Here is how best to avoid them.

According to the reports, users are hitting problems on Windows Server 2016 machines with ReFS-formatted drives while Veeam synthetic operations run. So far the issue has primarily been reported by users who formatted their ReFS volumes with a 4K block size, which is the default when formatting a new volume. Veeam representatives recommend a 64K block size as the primary way to avoid the issue. Check the current allocation unit size of ReFS volumes using:

fsutil fsinfo refsinfo <volume pathname>

Microsoft has also released a patch (KB4013429) and a corresponding knowledge base article regarding this issue. The fix involves the patch plus registry changes specific to ReFS. The patch adds the option to create and fine-tune the following registry parameters to tweak ReFS memory consumption:

RefsEnableLargeWorkingSetTrim | DWORD
RefsNumberOfChunksToTrim | DWORD
RefsEnableInlineTrim | DWORD

The errors occur during the synthetic operations of Veeam backups to ReFS repositories because those operations are very I/O intensive. Users have uncovered an issue in the ReFS file system where metadata stored in memory is not released properly. This causes the system's memory usage to balloon and can eventually lock up the OS. Using the Windows Sysinternals tool RAMMap, users can monitor memory usage during synthetic fulls to determine whether the metafile is growing and a memory problem is brewing.

Finally, suggestions for avoiding this error: if you are currently using an ReFS volume with a 4K block size, consider migrating the repository to a new volume with a 64K block size (this post may assist you). 64K block sizes are already widely recommended as a best practice for Veeam repositories, given how Veeam works with large files. The catch is that Windows defaults the allocation unit size to 4K, so users may skip past changing it when formatting new volumes. Hopefully future releases of Veeam will detect and warn against 4K block sizes during the creation of ReFS repositories.

Update 11/3/17: Some users with larger amounts of backup data report issues even when using a 64K block size. More recent updates of Windows Server 2016 include additional registry settings to curb some of the continuing reports. As of this update, those experiencing the issues are advised to set these decimal values for the following registry keys:

HKLM\SYSTEM\CurrentControlSet\Control\FileSystem\RefsEnableLargeWorkingSetTrim = 1
HKLM\SYSTEM\CurrentControlSet\Control\FileSystem\RefsNumberOfChunksToTrim = 32
HKLM\SYSTEM\CurrentControlSet\Control\FileSystem\RefsDisableCachedPins = 1
HKLM\SYSTEM\CurrentControlSet\Control\FileSystem\RefsProcessedDeleteQueueEntryCountThreshold = 512

Some Veeam technicians and architects have also suggested, for backup jobs whose retention requirements are 100 days/restore points or fewer, avoiding synthetic fulls unless they are specifically necessary. For example, for a retention policy going back a month, aim for 30 daily incremental restore points rather than 7 daily and 3 weekly. This still provides the benefits of ReFS fast cloning while avoiding synthetic full merges potentially locking up the storage during the merge process.

More Info: Veeam Forums; VeeamLive; Microsoft KB4016173; Microsoft KB4035951
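For convenience, here is a sketch that wraps the checks and registry changes above into PowerShell. It applies exactly the decimal values suggested in the update; the drive letter is a hypothetical example, the session must be elevated, and as with any registry change on a production backup repository, test first (a reboot may be needed for the settings to take effect):

# 1. Check the allocation unit size of an ReFS repository volume
#    (E: is a hypothetical example; look for the cluster size in the output)
fsutil fsinfo refsinfo E:

# 2. Apply the ReFS memory-trimming values suggested in the update above
$fs = 'HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem'
New-ItemProperty -Path $fs -Name RefsEnableLargeWorkingSetTrim -PropertyType DWord -Value 1 -Force
New-ItemProperty -Path $fs -Name RefsNumberOfChunksToTrim -PropertyType DWord -Value 32 -Force
New-ItemProperty -Path $fs -Name RefsDisableCachedPins -PropertyType DWord -Value 1 -Force
New-ItemProperty -Path $fs -Name RefsProcessedDeleteQueueEntryCountThreshold -PropertyType DWord -Value 512 -Force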

Office 365 Backup

Do You Need to Back Up Office 365 Data?

Many people assume that Microsoft fully protects their data in Office 365, so they don't need to worry about backups. This is a dangerous assumption. In reality, Office 365 lacks comprehensive daily backup and archiving capabilities, which can leave you vulnerable once data is deleted or automatically removed from the recycle bin, or when a user intentionally deletes their data. Let's explore some common scenarios where you might lose critical data.

1. Deletions

Whether accidental, intentional, or malicious, users delete data. Office 365 has a default retention policy of 14 days, which can be extended to 30 days. After that, the data is permanently gone. Intentional deletions, including data deleted from the recycle bin, are unrecoverable. Additionally, if a user account is deleted, its data is lost forever.

2. Ransomware/Malware

While Microsoft offers anti-malware protections, they don't guarantee full protection against data corruption caused by ransomware or malware. Recovering from such a scenario can be painful and time-consuming, and with only Microsoft's built-in data protection measures, there is no guarantee of successful recovery.

3. Liability

Microsoft's liability for data loss is extremely limited. In the case of Office 365, liability is capped at $5,000. The cost of legal action alone would exceed that, making this essentially the same as having no liability for your data.

4. Compliance

If your business has strict compliance requirements, such as retaining backups for 7, 10, or more years, Microsoft's built-in tools won't meet your needs. Even businesses without legal mandates often have lengthy retention policies, and anything beyond about a month of history exceeds what Microsoft offers.

Industry Experts Recommend Backup

Analysts from organizations like Gartner, Forrester, and ESG recommend that businesses review their data retention needs and determine whether additional Office 365 backup solutions are necessary to meet compliance and recovery objectives.

Recommendation

To ensure you can always recover critical Office 365 emails, files, and SharePoint sites, use a third-party backup tool to protect your Office 365 data. This provides an extra layer of security and peace of mind, knowing your data is safe from accidental loss, malicious deletion, or compliance violations.

General Cloud Backup, Veeam

Veeam and AWS Storage

Amazon cloud storage, particularly S3 and Glacier, is a popular option for offsite data storage, and Veeam is a leading solution for VM backups. You can integrate the two, but the question is: should you? The answer depends on several factors.

Veeam and AWS Integration

Integrating Veeam Backup & Replication with Amazon storage is done through an AWS Storage Gateway, an appliance that connects your Veeam server to the AWS cloud. You can configure the gateway as a file gateway, a volume gateway, or a Virtual Tape Library (VTL).

Virtual Tape Library (VTL)

Using AWS as a virtual tape library allows you to present the Storage Gateway to Veeam as a tape server, which lets you create tape backups in AWS.

Veeam Cloud Connect: A Simpler Option

Alternatively, Veeam offers Cloud Connect, an offsite backup method built into its platform. Cloud Connect partners, third-party Veeam service providers, offer storage and compute resources specifically tailored for offsite backups. Users simply enter their provider's credentials into Veeam and can start sending backups to the cloud repository. For most businesses, a Cloud Connect partner offers a simpler, more cost-effective solution, allowing true incremental-forever backups without the hassle of setting up your own infrastructure.

When to Use AWS for Offsite Backups

Using AWS for offsite backups with Veeam can work, but regular full backups are required unless you also run cloud-side compute to build synthetic fulls. This option may only be suitable if your dataset is small (a few hundred gigabytes), your backup windows are generous, and you have ample upload bandwidth to spare.

Conclusion

While AWS can be used for offsite Veeam backups, it is generally not the most efficient or cost-effective option for larger datasets or frequent backups. A Veeam Cloud Connect partner is the better fit for true incremental-forever backups, minimal bandwidth usage, and expert support with stronger service levels. Choosing a Cloud Connect provider often means simpler management, lower costs, and faster backups, making it the more practical choice for most businesses.
