Error ID 70094: ASR protection cannot be enabled for a Hyper-V VM

I came across the above error when I tried to set up ASR using the new portal. Every step went fine until I hit the error below,

image

Clearly this indicates the VM cannot be replicated successfully to the Azure Recovery Services vault. I tried re-registering the Azure Site Recovery agent on the Hyper-V host as well. Although the Hyper-V host registers properly with the Recovery Services vault, VM protection still fails with the above error. On the Hyper-V console I can see VM replication is in an error state.

Finally, after digging through the host logs, I found out that ASR had been set up previously and had not been removed properly. This means each VM replication was also never completed and was left hanging in an error state. The only way to proceed is to clear that stale replication data on the host side, targeting each individual VM that is affected.

You need to run the PowerShell commands below on each host, targeting the affected VMs,

$vmName = "<VM Name>"
$hostName = "<Host name>"
$vm = Get-WmiObject -Namespace "root\virtualization\v2" -Query "Select * From Msvm_ComputerSystem Where ElementName = '$vmName'" -ComputerName $hostName
$replicationService = Get-WmiObject -Namespace "root\virtualization\v2" -Query "Select * From Msvm_ReplicationService" -ComputerName $hostName
$replicationService.RemoveReplicationRelationship($vm.__PATH)

PS: Replace <VM Name> with the name of the affected VM and <Host name> with your Hyper-V server name, and run this on the Hyper-V host.
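Alternatively, if the host has the Hyper-V PowerShell module available, the built-in replication cmdlets may achieve the same cleanup. This is only a minimal sketch (the VM name is a placeholder), not the exact method used above,

# Inspect the stale replication relationship, run locally on the Hyper-V host
Get-VMReplication -VMName "<VM Name>"

# Remove the broken replication relationship so ASR can enable protection again
Remove-VMReplication -VMName "<VM Name>"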

Once that is completed, go ahead and try enabling replication for each VM from the Azure portal side.

PS: If you need to know how to set up ASR on the new portal, you're in luck. Stay tuned for the next blog article.

How to encrypt disks on Azure VMs

"Information protection": no wonder this term has been making a big buzz around the world, regardless of business size. We have seen major cyber attacks and malware attacks that have crippled even enterprise companies, both financially and reputation-wise. So in this article I'm looking at one prevention solution that the Microsoft team offered a long time back and has now extended to Microsoft Azure VMs as well. Disk encryption is not a new term; under information security practices we have always heard consultants highlight how vital it is to back up data and keep it offsite, and at the same time they ask for that data to be encrypted in case it falls into the wrong hands.

But have you thought about how to protect running VMs in your data center or on Azure? Actually, there are a couple of ways you can approach that. I recommend adopting all of them in a phased manner based on your budget and time.

Antimalware
Compliance
Hardware Security Module (HSM)
Virtual machine disk encryption
Virtual machine backup
Azure Site Recovery
Security policy management and reporting

The list will keep growing over time with new add-ons. In this article I'll describe how we can protect virtual machines using disk encryption technology. If you're a Hyper-V fan, read about Shielded VMs for additional information.

Ok, back to the main topic. This technology is referred to as Azure Disk Encryption, and it leverages Microsoft BitLocker disk encryption (I do hope it makes sense to you all now). Azure supports encrypting Windows VMs using BitLocker technology, and Linux VMs using the dm-crypt feature, providing volume encryption for the OS and data disks. All the disk encryption keys and secrets are saved in an Azure Key Vault in your existing subscription, while the data (in our case, the VHD files) resides safely in Azure Storage. Read about Azure Key Vault technology here.

Disk encryption can be approached in several ways,

disk-encryption-fig1
Picture credits to the Azure team.

1. If you decide to upload an encrypted VM from your Hyper-V environment to Azure, make sure to upload the VHD to a storage account and copy the encryption key material to your key vault. Then provide the encryption configuration to enable encryption on the new IaaS VM.
2. If you create the Azure VM from an Azure Marketplace template, just provide the encryption configuration to enable encryption on the IaaS VM.
3. If you've already created a VM in your subscription from the Azure Marketplace, you can still follow the same steps, thanks to Azure Security Center.

So let's assume you have already created the Azure VM using the Marketplace and started using it for your requirements. At a later stage you find out, through Azure Security Center, that you haven't followed industry best practices, and it highlights the potential security risks you're exposed to. One scenario is that the disks are not encrypted!

image

As you can see, I have three Azure hosted VMs and they have potential security issues; not enabling disk encryption is one of them. In this article I'll focus on enabling disk encryption for one VM (VM01), which is running Server 2012 R2.

First things first: you need to get the Azure PowerShell modules set up on your desktop / laptop. You can download them from the Azure download page.
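If you prefer the PowerShell Gallery over the download page, something like the following should also work. This is a minimal sketch, assuming PowerShell 5 with PowerShellGet available; the cmdlets are the classic AzureRM ones used throughout this post,

# Install the AzureRM modules from the PowerShell Gallery and sign in to your subscription
Install-Module -Name AzureRM -Scope CurrentUser
Login-AzureRmAccount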

image

After that you'll need a PowerShell script to do the job. You can get that script from here. Copy the script and save it with any name you prefer; just make sure its extension is .ps1.

Now you need to open the script using PowerShell ISE.

image

When you run the script you need to provide the following information (in this order),

Resource Group Name – This is the RG name where you’ve hosted your VMs

Key Vault Name – The place where your keys will be saved and protected. During the execution of the script it will ask for a key vault. If you haven't created one yet, just proceed and the script will create a key vault automatically.

Location – Your resource group's location. In my scenario it would be "southeastasia"
Tip: notice there are no spaces in the name. This is very important to remember.

Azure Active Directory Application Name – This is the Azure Active Directory application that will be used to write secrets to the Key Vault. If you haven't created one, the script will create one for you.
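For reference, this is roughly the kind of setup the script performs on your behalf. The sketch below is illustrative only (the vault, resource group and region names are placeholders, and $aadClientID is the client ID of the AAD application), and it uses the classic AzureRM cmdlets rather than the script's exact code,

# Create a key vault that is allowed to hold disk encryption keys and secrets
New-AzureRmKeyVault -VaultName "<Key Vault Name>" -ResourceGroupName "<Resource Group Name>" -Location "southeastasia" -EnabledForDiskEncryption

# Allow the AAD application to wrap keys and write secrets into the vault
Set-AzureRmKeyVaultAccessPolicy -VaultName "<Key Vault Name>" -ServicePrincipalName $aadClientID -PermissionsToKeys wrapKey -PermissionsToSecrets set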

Now that you're aware of the information you need to provide, let's proceed with executing the script in PowerShell ISE.

image

If you get the above screen, that means the phase 1 activity is completed.

Now it's time to target a VM and encrypt its disks. For this part you need to tell PowerShell which VM you're targeting. In the PowerShell window, type the command below,

$vmName = "<VM name>"

Replace <VM name> with the name of a VM hosted in that resource group. In my case it's $vmName = "VM01"

Now, in the PowerShell script above, line 185 contains the command to encrypt the disks. Copy that and run it in the PowerShell window. Alternatively, you can copy the command mentioned below.

Set-AzureRmVMDiskEncryptionExtension -ResourceGroupName $resourceGroupName -VMName $vmName -AadClientID $aadClientID -AadClientSecret $aadClientSecret -DiskEncryptionKeyVaultUrl $diskEncryptionKeyVaultUrl -DiskEncryptionKeyVaultId $keyVaultResourceId -VolumeType All

If things go smoothly you'll get the message below in your PowerShell window,
image

This process takes around 10-15 minutes to complete. In the above screenshot you can see the command execution completed successfully.

After that you can return to the VM properties and check the disk status. You can see below that both the OS and data disks have been encrypted.

image
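You can also check the same thing from PowerShell instead of the portal; a minimal sketch, reusing the variable names from the script run above,

# Query the encryption state of the OS and data disks for the target VM
Get-AzureRmVMDiskEncryptionStatus -ResourceGroupName $resourceGroupName -VMName $vmName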

So any time you add more VMs to that resource group, all you have to do is set the VM name and run the command given above.

Note: Disk encryption on Azure is a really good option, but it needs to be weighed carefully. If you want to back up the encrypted VMs, the encryption needs to be done using the KEK (key encryption key) method. For a more in-depth look at Azure IaaS disk encryption, refer to this article.
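For completeness, enabling encryption with a KEK is mainly a matter of creating a key in the vault and passing two extra parameters. This is only a sketch under that assumption (the key name is a placeholder); check the linked article for the exact procedure,

# Create a key encryption key (KEK) in the vault and wrap the BitLocker secrets with it
$kek = Add-AzureKeyVaultKey -VaultName "<Key Vault Name>" -Name "DiskEncryptionKEK" -Destination Software

Set-AzureRmVMDiskEncryptionExtension -ResourceGroupName $resourceGroupName -VMName $vmName -AadClientID $aadClientID -AadClientSecret $aadClientSecret -DiskEncryptionKeyVaultUrl $diskEncryptionKeyVaultUrl -DiskEncryptionKeyVaultId $keyVaultResourceId -KeyEncryptionKeyUrl $kek.Key.kid -KeyEncryptionKeyVaultId $keyVaultResourceId -VolumeType All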

Azure Site Recovery (ASR) in action to protect Azure IaaS VMs

Update 04th July 2017 11:00 p.m.: Today the Microsoft ASR team allowed replicating Server 2016 VMs in the Azure-to-Azure DR scenario as well. These VMs can use Storage Spaces technology. You can check my short video here.

Kindly note this feature is still in preview mode. That said, I believe this is a very important option for some customers. Based on customer feedback, Microsoft has identified the following points to justify this feature.

  • You need to meet compliance guidelines for specific apps and workloads that require a business continuity and disaster recovery (BCDR) strategy.
  • You want the ability to protect and recover Azure VMs based on your business decisions, and not only based on inbuilt Azure functionality.
  • You need to test failover and recovery in accordance with your business and compliance needs, with no impact on production.
  • You need to fail over to the recovery region in the event of a disaster and fail back to the original source region seamlessly.

That said, below are my observations on ASR for Azure IaaS VMs.

  • Setup and configuration is very easy (of course, careful planning is required)
  • VMs with managed disks are not supported (this option will be coming soon)
  • Your Site Recovery resource group has to be created in a different region and cannot be in the same region where your production VMs exist.
  • Automated replication. Site Recovery provides automated continuous replication; failover and failback can be triggered with a single click via the GUI.
  • The minimum replication interval is 5 minutes (I wish this will be improved soon)
  • Just like when protecting on-premises VMs to Azure, you can run disaster-recovery drills with on-demand test failovers, as and when needed, without affecting your production workloads or ongoing replication.
  • You can use recovery plans to orchestrate failover and failback of an entire application running on multiple VMs. This can be controlled via runbooks (a very nice feature).

Ok, now let's get back to the action.

To make things easier, I've gone ahead and created two RGs (resource groups) in advance, in two regions. I hope the naming convention makes their purpose easy to understand.

image

Inside ASR-PROD I have already created a single Server 2012 R2 VM.

image

So now we have a production VM ready to be protected. The next step is to create a Recovery Services vault in the destination RG.

image

image
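As an aside, the walkthrough here uses the portal, but the vault itself can also be created from PowerShell; a minimal sketch with placeholder names and region,

# Create the Recovery Services vault in the DR resource group (names are placeholders)
New-AzureRmRecoveryServicesVault -Name "ASR-DR-Vault" -ResourceGroupName "<DR Resource Group Name>" -Location "eastasia"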

Select the VMs you want to replicate, and then click OK.

image

If you want, you can override the default target settings and specify the settings you like by clicking Customize.

image

Once the command is given to execute, the Azure Recovery Services vault will go ahead and do the job.

image

Initial replication might take some time. It all depends on how many disks you have in your IaaS VM and their sizes. But I am pretty sure it's a lot faster than the scenario of uploading your on-premises datacenter VM to Azure; I have experienced it taking 3-4 days to upload a single VM to Azure.

Finally, the successful result looks as follows,

image

Nice GUI work from the Azure ASR team, visually showing which region the VM is getting replicated from and to,

image

Experience the DR drill. For this, under Site Recovery click the "Test Failover" option. This will create a VM in the ASR RG. Once the test is complete you can select the "Cleanup test failover" option, which will delete the VMs that were created during the test failover.

image

Tips:

During my demo lab creation I came up with the error below. The problem is that a newly added disk has not been initialized inside the guest OS. For that reason ASR is unable to replicate that disk to the DR site.

image
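The fix is simply to bring the new disk online and initialize it inside the guest before enabling replication. A minimal sketch using the Windows Storage cmdlets (the partition style and volume label are only examples),

# Inside the guest OS: initialize any raw disks so ASR can replicate them
Get-Disk | Where-Object PartitionStyle -eq 'RAW' | Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -AssignDriveLetter -UseMaximumSize |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "Data" -Confirm:$false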

Azure Backup stepping into the RaaS (Restore-as-a-Service) model

I do hope the readers of this blog are aware that Azure offers a free data backup solution called "Azure Data Protection Manager (DPM)". Basically it's the same as the Data Protection Manager offered via the System Center suite, with the exception that tape drives are not supported by Azure DPM. But then again, who needs tape drives? Nevertheless, Azure Data Protection Manager offers a solution for protecting on-premises and Azure VM data. The sad story has been the time and complexity involved when it comes to restores. Thanks to the new RaaS method, things will change dramatically when it comes to data restoration. Some of the key benefits of this method are,

Instant recovery of files – Instantly recover files from VMs hosted on Azure or on-premises. Whether it's a case of accidental file deletion or simply validating the backup, instant restore drastically reduces the time taken to recover your first file.
Open and review files in the recovery volumes before restoring them – You can mount a previous backup as a snapshot, view the files, and decide which ones you need to recover.

Even though this is at the preview level, I look forward to seeing it reach GA very soon.

1. In my Azure test VM I've created a couple of test folders and copied a few files into them.

image image

2. Using Azure Backup, I've already taken a backup of this VM.

image

3. Now let me go ahead and delete Folder1 in the Important data folder. After that I'm showing the current volumes in this VM.

image image

4. Now let's get back to the Azure portal to recover the data. For this I'm logging into the Azure portal from within the Azure VM; this allows me to restore the files to the same VM from which I deleted the folder in the first place. Keep an eye on the red arrow location. This is the new feature I'm highlighting today: we can select the snapshot we want to map to the Azure VM. Once that is completed, we run the PowerShell script to mount the snapshot volume to the Azure VM.

image imageimageimage

imageimage

5. As you can see in the last picture, we managed to see the deleted data available in the mounted volume. Now we can copy the files and restore them to the location from which we deleted them accidentally. Once the restore work is completed, you need to stop the PowerShell session and unmount the volume from the Azure portal.

imageimage imageimage 

As you can see, this is a very easy and useful feature. According to the Microsoft Azure Backup team, this feature can be used to restore up to 10 GB of files. If you want to restore more than that, it's recommended to restore the entire VM from a snapshot. By the time I'm writing this post, the Azure Backup team has announced support for restoring files from Linux VMs as well. You can get more information about that from here.

PS: The same steps apply when you try to restore files for an on-premises VM protected by the Azure Backup service. Make sure your Azure Backup agent version is 2.0.9063.0.

image

Migrating DPM data from one data storage to another data storage

Recently I've been involved in a project helping a customer set up DPM 2012 R2 to back up a VMware environment. Yes, you heard it correctly: DPM 2012 R2 with UR11 supports VMware backup. You can read more about it here. In our initial pilot stage we used DAS storage on the DPM server itself for test backups. Once we verified that local backup and Azure backup (replicating the local backup copy to Azure) were successful, we wanted to bring in SAN storage for the DPM server. My only challenge was how to move the existing pilot backups to the new storage introduced on the DPM server, since we'd been backing up production workloads and I didn't want to redo that job again. Before that, let's look at my current protection group setup for a while,

image

As you can see, it's a simple PG (protection group) protecting two SAP VMs. Now let's look at the disk structure from the Disk Management perspective. There are two DAS disks being utilized for the data backup, and at the same time you can see I have introduced 3 disks connected via SAN to the DPM server.

image

Another view, from the DPM point of view,

image

The challenge is to migrate the data from Disk1 and Disk2 to Disk3 without modifying the protection group settings. For this you can use the DPM PowerShell script MigrateDatasourceDataFromDPM.ps1. But first let's identify the disk structure from the PowerShell console,

Run $disk = Get-DPMDisk -DPMServerName <DPM Server Name> to display the disks and capture them in the $disk array used below.

image

As you can see in the above picture, Disk1 and Disk2 are occupied holding the data. The trick is to identify the correct disk number and not get distracted by the NtDiskId. Once identified, you can use the following command with parameters to transfer the data,

./MigrateDatasourceDataFromDPM.ps1 -DPMServerName <DPM Server Name> -Source $disk[n] -Destination $disk[n]

[n] has to be replaced by the exact disk index; a worked example is sketched below. Once you have defined and executed the command, DPM will start migrating data from the existing disk to the targeted disk. This may take some time based on the amount of data on the disks.
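For example, assuming the DPM server is called DPM01 and that Disk1 and Disk2 map to array indexes 0 and 1 while the new SAN disk maps to index 2 (the $disk array is zero-based, so verify the indexes against your own output first), the calls would look roughly like this,

# Capture the DPM storage pool disks into an array (indexes are zero-based)
$disk = Get-DPMDisk -DPMServerName "DPM01"

# Move replica and recovery point data from the two DAS disks to the new SAN disk
./MigrateDatasourceDataFromDPM.ps1 -DPMServerName "DPM01" -Source $disk[0] -Destination $disk[2]
./MigrateDatasourceDataFromDPM.ps1 -DPMServerName "DPM01" -Source $disk[1] -Destination $disk[2]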

image

Now you'll notice in Disk Management that the DPM replica and recovery point volumes which were located on Disk 1 and Disk 2 have been migrated to Disk 3. Any new recovery points for the respective data sources will now be located on the new volumes on the new disk; the original volume data on Disk 1 and Disk 2 still needs to be maintained until the recovery points on them expire. Once all recovery points on the old disk(s) expire, they will appear as unallocated free space in Disk Management. After that we can safely remove them from the DPM storage pool.

Note: Once this task is completed you may get replica inconsistent error messages. This is normal and expected, as changes have been made to the volumes and they need to be re-synchronized by running a synchronization job with consistency check.

image
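If you'd rather trigger the consistency check from the DPM Management Shell instead of the console, something along these lines should work; a minimal sketch where the server name and the protection group / data source indexes are placeholders,

# Pick the affected data source and kick off a synchronization with consistency check
$pg = Get-DPMProtectionGroup -DPMServerName "DPM01"
$ds = Get-DPMDatasource -ProtectionGroup $pg[0]
Start-DPMDatasourceConsistencyCheck -Datasource $ds[0]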

In the next article let me explain how we can use Azure Import/Export for the Azure Backup workload.

PS: If you don't want to play around with PowerShell that much and are comfortable with a GUI method, then you're in luck. Refer to this link, where an MVP has written a PowerShell script that does this job at the GUI level.

Hybrid data backup solution with Azure Backup Server #MSOMS

When you run your workloads in different locations (on-prem and cloud), it can be a tough situation figuring out how to manage data backup. Either you'll end up using multiple backup products, or your backup vendor will assure you that you can manage both worlds from their tools. Luckily, Microsoft has stepped up with its hybrid data backup solution under the name "Azure Backup Server". In their own words, "With Azure Backup Server, you can protect application workloads such as Hyper-V VMs, Microsoft SQL Server, SharePoint Server, Microsoft Exchange and Windows clients from a single console."

Today I'll take you through the journey of Azure Backup Server. Microsoft initially ran the project under the code name "Venus". Now this is part of the OMS Suite (Operations Management Suite). For people out there familiar with System Center Data Protection Manager, think of this as DPM minus tape drive support (and it's free too).

Most of the time I really don't entertain the idea of using tape drives either, and I'm glad the Microsoft team carries the same opinion as mine. Some of the great features of the Azure Backup Server service are,

Automatic storage management – No capital expenditure is needed for on-premises storage devices. Azure Backup automatically allocates and manages backup storage, and it uses a pay-as-you-use consumption model.

Unlimited scaling – Take advantage of high availability guarantees without the overhead of maintenance and monitoring. Azure Backup uses the underlying power and scale of the Azure cloud, with its nonintrusive autoscaling capabilities.

Multiple storage options – Choose your backup storage based on need:

· A locally redundant storage block blob is ideal for price-conscious customers, and it still helps protect data against local hardware failures.

· A geo-replication storage block blob provides three more copies in a paired datacenter. These extra copies help ensure that your backup data is highly available even if an Azure site-level disaster occurs.

Unlimited data transfer – There is no charge for any egress (outbound) data transfer during a restore operation from the Backup vault. Data inbound to Azure is also free. Works with the import service where it is available.

Data encryption – Data encryption allows for secure transmission and storage of customer data in the public cloud. The encryption passphrase is stored at the source, and it is never transmitted or stored in Azure. The encryption key is required to restore any of the data, and only the customer has full access to the data in the service.

Application-consistent backup – Application-consistent backups on Windows help ensure that fixes are not needed at the time of restore, which reduces the recovery time objective. This allows customers to return to a running state more quickly.

Long-term retention – Rather than pay for off-site tape backup solutions, customers can back up to Azure, which provides a compelling tape-like solution at a low cost.

Even if you're running your VMs in a VMware environment, you can still leverage this backup solution. I guess the high-level picture below will make more sense of it by now,

azure-backup-overview

Of course, in this solution we're leveraging the Azure Backup vault to retain the data. With the introduction of "Cool Storage" you can further reduce your storage cost for long-term archival. (Refer to my previous article to get more information about Cool Storage.)

Sounds cool, and you want to get your hands dirty by trying this out? A step-by-step article will arrive soon, so stay tuned and hungry.

Reduce your cloud storage cost with “Azure Cool Blob Storage”

When I have conversations with my customers, I recommend they consider Azure storage as the best option to keep their data backups. So the rule of thumb goes: up to 30 days of data on-prem and the rest in the cloud (at least from my point of view).

All this time, Microsoft had only one storage option for data storage. But now they have introduced a storage type called "Cool Blob Storage". Basically this is a tier with a lower cost, offered when you agree that you're not accessing the data stored in those storage accounts frequently. Example use cases for cool storage include backups, media content, scientific data, compliance and archival data. In general, any data which lives for a longer period of time and is accessed less than once a month is a perfect candidate for cool storage.

  • Cost effective: You can now store your less frequently accessed data in the Cool access tier at a low storage cost (as low as $0.01 per GB in some regions), and your more frequently accessed data in the Hot access tier at a lower access cost. For more details on regional pricing, see Azure Storage Pricing.
  • Compatibility: We have designed Blob storage accounts to be 100% API compatible with our existing Blob storage offering which allows you to make use of the new storage accounts in existing applications seamlessly.
  • Performance: Data in both access tiers have a similar performance profile in terms of latency and throughput.
  • Availability: The Hot access tier guarantees high availability of 99.9% while the Cool access tier offers a slightly lower availability of 99%. With the RA-GRS redundancy option, we provide a higher read SLA of 99.99% for the Hot access tier and 99.9% for the Cool access tier.
  • Durability: Both access tiers provide the same high durability that you have come to expect from Azure Storage and the same data replication options that you use today.
  • Scalability and Security: Blob storage accounts provide the same scalability and security features as our existing offering.
  • Global reach: Blob storage accounts are available for use starting today in most Azure regions with additional regions coming soon.

So how do you create "Cool Storage"? Well, it's not that big a deal: log into your Azure portal, then go to "New" and select the "Data + Storage" option.

image

image

Under the storage account “Account kind” select “Blob storage”

image

After that you should be able to see the “Cool Storage” option,

image
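If you prefer scripting it, the same account can be created with the AzureRM storage cmdlets; a minimal sketch where the account, resource group and region names are placeholders,

# Create a Blob storage account with the Cool access tier
New-AzureRmStorageAccount -ResourceGroupName "<Resource Group Name>" -Name "coolbackupstore01" -Location "southeastasia" -SkuName Standard_LRS -Kind BlobStorage -AccessTier Cool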

By the time I'm writing this article, several data backup vendors have already started working with Microsoft to integrate this feature with their backup products (CommVault, Veritas NetBackup, SoftNAS, CloudBerry, etc.). We will see this list growing really fast.