Event ID 19050 on a Hyper-V 2016 cluster

Recently one of my customers' Hyper-V environments recovered from a power failure. Despite having secondary power to keep the servers up and running, redundant power for the switches had not been overlooked ;-). Nevertheless, after power was restored some of the VMs in the cluster started acting funny inside the Hyper-V Manager console.

[Image: Hyper-V Manager console showing the affected VMs]

When I look at the cluster MMC, the VMs' status is healthy without any errors. Yet on that particular node, Event Viewer keeps throwing Event ID 19050. Further searching on the search engine gave me this result, not much help though. Based on the scenario we knew the VM was functioning properly and this had to be an intermittent problem. Our final resolution was to restart the VM and let it move to the correct host based on the priority order.
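If you prefer PowerShell over clicking through the consoles, here is a minimal sketch of the same checks and fix. The node name "HV-NODE01", cluster name "HVCLUSTER" and VM name "VM01" are hypothetical, and the exact log holding Event ID 19050 may differ, so the sketch simply searches all Hyper-V logs on the node.

# Pull any Event ID 19050 entries from the Hyper-V event logs on the affected node
Get-WinEvent -ComputerName "HV-NODE01" -ListLog "*Hyper-V*" | ForEach-Object {
    Get-WinEvent -ComputerName "HV-NODE01" -FilterHashtable @{ LogName = $_.LogName; Id = 19050 } -ErrorAction SilentlyContinue
}

# Confirm the clustered VM role still reports Online, as it did in our case
Get-ClusterGroup -Cluster "HVCLUSTER" -Name "VM01"

# Restart the VM role, then move it back to its first preferred owner
Stop-ClusterGroup  -Cluster "HVCLUSTER" -Name "VM01"
Start-ClusterGroup -Cluster "HVCLUSTER" -Name "VM01"
$preferred = (Get-ClusterOwnerNode -Cluster "HVCLUSTER" -Group "VM01").OwnerNodes[0]
Move-ClusterVirtualMachineRole -Cluster "HVCLUSTER" -Name "VM01" -Node $preferred.Name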

Migrating DPM data from one storage to another

Recently I've been involved in a project to help a customer set up DPM 2012 R2 to back up a VMware environment. Yes, you heard it correctly: DPM 2012 R2 with UR11 supports VMware backup. You can read more about it here. In our initial pilot stage we used DAS storage on the DPM server itself for the test backups. Once we verified that the local backup and the Azure backup (replicating the local backup copy to Azure) were successful, we wanted to bring in SAN storage for the DPM server. My only challenge was how to move the existing pilot backups to the new storage introduced to the DPM server, since we had been backing up production workloads and I didn't want to re-do that job again. Before that, let's have a look at my current protection group setup,

[Image: DPM console showing the protection group]

As you can see it's a simple PG (Protection Group) protecting two SAP VMs. Now let's jump into the disk structure from a Disk Management perspective. There are two DAS disks being utilized for the backup data, and at the same time you can see I have introduced three disks connected via SAN for the DPM server.

[Image: Disk Management view showing the two DAS disks and the three SAN disks]

Another view from the DPM point of view,

[Image: DPM console view of the storage pool disks]

The challenge is to migrate the data from Disk 1 and Disk 2 to Disk 3 without modifying the protection group settings. For this you can use the DPM PowerShell script MigrateDatasourceDataFromDPM.ps1. But first, let's identify the disk structure from the PowerShell console,

Run Get-DPMDisk -DPMServerName <DPM Server Name> to display the disks.

[Image: Get-DPMDisk output listing the storage pool disks]

As you can see in the above picture, Disk 1 and Disk 2 are occupied holding the data. The trick is to identify the correct disk number and not get misled by the NtDiskId. Once identified, capture the disks into an array (see the sketch below) and use the following command with its parameters to transfer the data,
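A quick sketch of capturing the disks into the array, assuming a hypothetical DPM server name "DPM01"; it is the array index, not the NtDiskId, that $disk[n] refers to in the migration command.

$disk = Get-DPMDisk -DPMServerName "DPM01"

# Print each array index next to its NtDiskId so the two don't get confused
for ($i = 0; $i -lt $disk.Count; $i++) {
    "Index $i -> NtDiskId $($disk[$i].NtDiskId)"
}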

./MigrateDatasourceDataFromDPM.ps1 -DPMServerName <DPM Server Name> -Source $disk[n] -Destination $disk[n]

$disk[n] has to be replaced by the exact disk number. Once you have defined and executed the command, DPM will start migrating the data from the existing disk to the targeted disk. This may take some time depending on the amount of data being moved.
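For example, in this setup the calls would look roughly like the lines below. This assumes the hypothetical server name "DPM01" and that array indexes 0 and 1 are the two DAS disks while index 2 is the new SAN disk, so check your own Get-DPMDisk output first; the script itself lives in the bin folder of the DPM installation path.

# Move the data from both DAS disks onto the new SAN disk
./MigrateDatasourceDataFromDPM.ps1 -DPMServerName "DPM01" -Source $disk[0] -Destination $disk[2]
./MigrateDatasourceDataFromDPM.ps1 -DPMServerName "DPM01" -Source $disk[1] -Destination $disk[2]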

[Image: migration job output in the DPM Management Shell]

Now you'll notice in Disk Management that the DPM replica and recovery point volumes which were located on Disk 1 and Disk 2 have been migrated to Disk 3. Any new recovery points for the respective data sources will now be located on the new volumes on the new disk, but the original volume data on Disk 1 and Disk 2 will still need to be maintained until the recovery points on them expire. Once all recovery points on the old disk(s) expire, they will appear as entirely unallocated free space in Disk Management. After that we can safely remove them from the DPM storage pool.
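If you want to script that final cleanup as well, something along the lines below should do it. It assumes the Remove-DPMDisk cmdlet is available in your DPM version, the hypothetical server name "DPM01", and that the old DAS disks still sit at array indexes 0 and 1; only run it once every recovery point on those disks has expired.

# Re-read the storage pool disks and drop the two old DAS disks from the pool
$disk = Get-DPMDisk -DPMServerName "DPM01"
Remove-DPMDisk -DPMDisk $disk[0]
Remove-DPMDisk -DPMDisk $disk[1]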

Note: Once this task is completed you may get replica inconsistent error messages. This is normal and expected, as changes have been made to the volumes and they will need to be re-synchronized by running a synchronization job with a consistency check.

[Image: replica inconsistent status in the DPM console]
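Rather than waiting for the next scheduled job, the consistency check can be kicked off straight away, either from the console or with a short bit of PowerShell like the sketch below (again assuming the hypothetical server name "DPM01" and that the pilot protection group is the first one returned).

# Run a consistency check against every data source in the protection group
$pg = Get-DPMProtectionGroup -DPMServerName "DPM01"
$ds = Get-DPMDatasource -ProtectionGroup $pg[0]
$ds | ForEach-Object { Start-DPMDatasourceConsistencyCheck -Datasource $_ }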

In the next article let me explain how we can use Azure Import/Export with the Azure Backup workload.

PS: If you don't want to play around with PowerShell that much and are more comfortable with a GUI, then you're in luck. Refer to this link, where an MVP has written a PowerShell script that does this job at the GUI level.