Azure Site Recovery–Story revamped using new portal

In this blog post I’ll guide you through how to set up Azure Site Recovery (ASR) on the new portal using the ARM model. If you’re not familiar with the ASR concept you can refer here. Compared to setting up ASR on the old Azure portal, the Microsoft ASR team has carried out significant enhancements on the new portal and made it much more UI friendly.

In this blog post I’ll explain how to protect Hyper-V VMs. You can protect VMs hosted on a single Hyper-V host (standalone) or on a Hyper-V cluster (without VMM) using these steps. A few things I won’t cover in this blog post are how to create a resource group, virtual network, etc. I’ll provide relevant links for those so you can get an in-depth idea.

1. How to create a resource group in Azure –

2. How to setup networking for ASR –

So, with the assumption that you have a Hyper-V server with a bunch of VMs (on-premises) and an Azure tenant, and in that tenant you’ve created:

  • A resource group
  • A virtual network
  • A storage account to hold the replicated VMs’ data

Now let’s go ahead and create a Recovery Services vault in the resource group you have created. In my case I’ve pre-created a resource group named ASR-DR. Inside that I’m going to create a Recovery Services vault named “ASR-RV”.
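
If you prefer scripting over the portal, the same resource group and vault can be created with Azure PowerShell. This is just a sketch using the ARM cmdlets (AzureRM modules) — the names and location below are the ones from my example, so substitute your own:

```powershell
# Sketch: create the resource group and Recovery Services vault from Azure PowerShell.
# Assumes the AzureRM modules are installed and you are logged in (Login-AzureRmAccount).
# Names and location match the example in this post - substitute your own.

New-AzureRmResourceGroup -Name "ASR-DR" -Location "Southeast Asia"

New-AzureRmRecoveryServicesVault -Name "ASR-RV" `
    -ResourceGroupName "ASR-DR" `
    -Location "Southeast Asia"

# List the vaults in the resource group to confirm creation
Get-AzureRmRecoveryServicesVault -ResourceGroupName "ASR-DR"
```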



Once the vault is created we can follow the step-by-step guide or, based on your experience, jump straight into the relevant steps. In the screenshot below I’ve demonstrated the step-by-step method.

I’m selecting the option to protect Hyper-V VMs that are not managed by a VMM environment.


Now you need to create a “Hyper-V site”, then click “+ Hyper-V server” and register the nodes. Once you complete the task of setting up the agents on the on-premises servers, your Hyper-V servers will be registered with the vault. In the picture below you can see I’ve added two Hyper-V hosts.


In the next step you’ll need to define the Azure subscription. The vault will read the resources in that subscription and highlight what is usable for ASR purposes.

PS: But I’d advise you to create the resources for ASR ahead of time and not to borrow them 🙂


Now you need to define a replication policy and associate it. If you have done this step previously you only have to associate it; if not, create one. You can create a new one keeping the default values and change them later.


I’ve skipped step 5 since I’ve made sure planning was carried out previously.


Now the basic steps are completed and the real game begins 🙂

Go to the “Replicate Application” section and start selecting the VMs you need to replicate to Azure for protection.


In the next step you need to map the Azure resources you created previously, very carefully. I’ve highlighted the areas which need your special attention. Careful planning is a virtue in this scenario.


Now if everything goes smoothly you’ll see the VMs on the Hyper-V host populated in the list on the Azure side. Go ahead and select the VMs you need to protect.


Finally you need to review the summary and approve it, so the replication process can execute against the VMs you selected.


This will take a little time to complete. After that, a full sync will occur; how long that takes depends on your disk size and your internet connection speed 🙂. I’m in the process of helping a client upload over 2 TB of data.


If you have very slow internet links (like I have 🙂) you can use the Microsoft Import/Export method to ship the VHD files to the nearest Azure datacenter via courier. Once the Azure team uploads your VHDs to the Azure storage account, all you have to do is replicate the difference. Sounds easy? Well, it is not! There are a few steps you need to follow and it will cost you additional money, but it all depends on the situation. You can find more information about it here.

My two cents of advice: go ahead, set up the Recovery Services vault and check the new options in it.


You’ll find the new GUI and the options given are rich. In a future article I’ll cover more details about them and also the recovery procedure.

Affordable Disaster Recovery Solution for every organization

January 2015

Last month I had the opportunity to present the above topic during a local ITPro community event. With the recent announcement of Azure Site Recovery enhancements it is very clear Disaster Recovery is no longer an enterprise-only solution. Now it is available even for SMB customers with a very low price tag.

Some key questions were raised on multi-hypervisor support. It is no surprise Microsoft has not left those customers behind. With the acquisition of the InMage Scout solution we can now offer a DR solution for VMware, Citrix and physical servers as well. Very soon Microsoft will focus on providing VMware-to-Azure site recovery solutions too.

Extending on premise Active Directory to Azure

Microsoft Azure is one of the biggest buzzwords in the technical world (at least in my world 🙂). Whenever I have a conversation about this with my customers, some of the questions and concerns they have are as follows:

1. Why should I care about another directory service when I already have Active Directory to manage my users and computers?

2. How can I extend my Active Directory?

3. Can I dump my on-prem Active Directory and use 100% Azure Active Directory?

Most of the time I end up explaining Azure Active Directory using a couple of pictures.


The picture above gives an idea of the similarity between Azure AD and on-prem AD. This is an easy way to give someone an idea of what AD normally does (I’m talking about business owners).

The next picture is about how Azure AD can be used in a hybrid model, opening up a whole new world of cloud-based apps to an organization.


Now, that is all the nice icing before we start the work 🙂

My first attempt is to guide you through how to set up Azure AD and then integrate it with your local Active Directory.

First you need an Azure subscription. If you already have one, log in to the main portal.


On the right hand side scroll down until you find the section called “Active Directory”


You can see a couple of Active Directories I created on the right-hand side. Please note the Default Directory is pre-created by Microsoft Azure. You can start using that or create your own Azure directory. To create your own AAD (Azure Active Directory), click New.


Select directory and click “Custom”


Put in your own values here. (Note: make sure the domain name you provide is unique.)


Once you complete the wizard you’re done creating your AAD 🙂


As shown in the picture above, you can now spend time creating users and groups for the new directory. For more information about this area please visit here. In the next article we’ll talk about how to integrate Azure AD with on-prem AD.

Active Directory monitoring and health checkup

As system administrators most of us spend our time troubleshooting end-user problems and forget to oversee the Active Directory services. We only pay attention to the AD server when we’re having problems, and then we see all sorts of issues related to DNS, replication, etc. This guide focuses on proactive monitoring of Active Directory, so that as a system administrator you will have a better understanding of your infrastructure.

It is best to run the following tests once a month and keep the log files for trend analysis as well. To make things easier I’ve provided the URLs of the individual commands pointing to TechNet so you can get more comprehensive details.

Dcdiag.exe /v >> c:\temp\pre_dcdiag.txt

This is a must and will always tell you if there is trouble with your DCs and/or the services associated with them.

Netdiag.exe /v >> c:\temp\pre_Netdiag.txt

This will let us know if there are issues with the networking components on the DC. This, along with the post test, is also a quick and easy way to ensure the installed patches really are installed (just check the top of the log).

Repadmin /showreps >> c:\temp\pre_rep_partners.txt

This shows all the replication partners and whether replication was successful or not. Just be aware that Global Catalogs will have more info here than a normal domain controller.

repadmin /replsum /errorsonly >> c:\temp\pre_repadmin_err.txt

This is the one that always takes forever but will let you know who you are having issues replicating with.
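
To avoid running these four checks by hand every month, they can be wrapped in a small script that stamps each log with the date, which makes the trend analysis easier. A rough sketch — the C:\temp path and file-naming convention are just my own, so adjust to suit your environment:

```powershell
# Sketch: run the monthly AD health checks and keep date-stamped logs for trend analysis.
# Run on a domain controller; paths and file names are only a convention.
$logDir = "C:\temp\ADHealth"
if (-not (Test-Path $logDir)) { New-Item -ItemType Directory -Path $logDir | Out-Null }
$stamp = Get-Date -Format "yyyyMMdd"

dcdiag.exe /v                 | Out-File "$logDir\${stamp}_dcdiag.txt"
netdiag.exe /v                | Out-File "$logDir\${stamp}_netdiag.txt"   # Windows 2003-era tool
repadmin /showreps            | Out-File "$logDir\${stamp}_rep_partners.txt"
repadmin /replsum /errorsonly | Out-File "$logDir\${stamp}_repadmin_err.txt"
```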

Apart from that, Microsoft offers another tool called MPSRPT_DirSvc.exe. You can run this tool on the DCs and it will run most of the above-mentioned commands and write the output to log files. Very handy, I would say. You can download it from here.

Hopefully this helps you when you troubleshoot your domain controllers, but by no means is this an all-encompassing list of things to do. These are the standard steps I normally take, but I would love to hear what you all do as well.

SYSVOL and Netlogon shared folders missing after a non-authoritative restore

This is an issue I faced at a client site and had to spend hours sorting out. I thought of sharing my experience with other like-minded techies.

First let’s have a look at the issue. The client had a non-functional domain controller due to a power failure. Basically the domain controller had lost its database and other critical data (e.g. DNS records, WINS records, etc.).

Even though an additional domain controller existed, the FSMO roles had been assigned to the failed domain controller. By the time we reached the site, as a solution they had already restored the failed domain controller from a system state backup, and then gone on to restore the system state backup to the second domain controller as well. This brought both DCs to a halt.

Looking into the event viewer, we found that neither DC could find a proper DC to sync the SYSVOL contents from, though both were trying to find a healthy DC. To make things shorter: I set one DC as authoritative, so it would not look to another DC for the SYSVOL contents, by following KB290762. After that I brought the second DC online and set the “BurFlags” value to D2 in the registry path.

(HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\NtFrs\Parameters\Backup/Restore\Process at Startup)

After some time we found both DCs had the SYSVOL folder shared, but without any contents in it. The Netlogon folder was also not appearing! Another frustration on the way!!

The next step was to restore SYSVOL to an alternative location, retrieve the contents of the SYSVOL folder, and copy them to one DC’s “C:\Windows\SYSVOL\sysvol\<Domain Name>\”. Once that was complete, the following instructions were followed:

Stop File Replication Service in that particular DC, change the following registry key:

HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\NtFrs\Parameters\Backup/Restore\Process at Startup

Key: BurFlags

Value: D4(hexadecimal)

Start the File Replication Service, then wait until you see event ID 13516 in the FRS event log.

Restart the Netlogon service; NETLOGON will then be shared out.

Stop File Replication Service in the other DC, change the following registry key:

HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\NtFrs\Parameters\Backup/Restore\Process at Startup

Key: BurFlags

Value: D2(hexadecimal)

Start the File Replication Service, then wait until you see event ID 13516 in the FRS event log.
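
For reference, the registry change in the steps above can also be made from the command line instead of Regedit. A sketch of the same procedure; note that reg.exe is used deliberately, because the key name itself contains a forward slash, which trips up the PowerShell registry provider:

```powershell
# Sketch: set BurFlags from the command line before restarting FRS.
# Run on the DC in question; take a backup first, as noted above.
net stop ntfrs

# D4 = authoritative restore; use 0xD2 instead on the non-authoritative DC.
reg add "HKLM\System\CurrentControlSet\Services\NtFrs\Parameters\Backup/Restore\Process at Startup" /v BurFlags /t REG_DWORD /d 0xD4 /f

net start ntfrs
# Then watch the FRS event log for event ID 13516 before restarting Netlogon.
```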

Once that completed, both DCs had the same contents in the SYSVOL folder and Netlogon was shared again as well. We confirmed users could authenticate and the rest of the applications were working fine 🙂

Almost everything was running perfectly, but as a precaution we requested a full backup of the DCs. Time for a beer — but again, it’s midnight, so no way to make that happen either 🙂

Summary: The affected domain controllers mentioned above are Windows 2003 R2. But as a rule of thumb, one thing to keep in mind is that AD replication is a multi-threaded, multi-master replication engine; it can take time, and patience is a virtue.

The following links were referred to during the troubleshooting process:

Virtualizing Active Directory service

Most of the time we recommend that customers and partners not virtualize the AD server. The explanation we give for this is that there may be problems due to the time sync issue. So what is this time sync issue, and why should we give it so much consideration? In this article I’m going to talk about it a little and explain a solution. As a rule of thumb I have to tell you this is according to my two cents of knowledge 🙂

Normally Active Directory depends heavily on accurate time for various services (e.g. authentication, replication, record updates, etc.). When AD runs on a physical machine it uses the interrupt timer driven by the CPU clock cycles. Since it has direct access to this, the time can be accurate.

When you virtualize it, the main problem you face is the behavior of the virtualized environment. Hypervisors are designed to conserve CPU cycles, and when one OS is idling the CPU cycles sent to that VM are reduced. Since AD depends heavily on these CPU cycles, missing them randomly means the time won’t be accurate. This problematic behavior is the same whether you’re using VMware, Hyper-V or any other third-party virtualization technology. Once the clients and server have a time mismatch of more than 5 minutes, authentication and network resource access will be difficult. (A Windows AD environment uses Kerberos authentication, and by default the time difference allowed is 5 minutes.)

So one method is to allow the AD server holding the PDC emulator role to sync time with an external time source instead of depending on the CPU clock cycles. To do that you have to edit the registry on the server holding the PDC emulator role. (As usual I assume you will take the necessary precautions, like backing up the server, the registry, etc.)

1. Modify registry settings on the PDC emulator for the forest root domain.
In the key HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\Parameters:
• Change the Type REG_SZ value from NT5DS to NTP. This determines from which peers W32Time will accept synchronization. When this value is changed from NT5DS to NTP, the PDC emulator synchronizes from the list of reliable time servers specified in the NtpServer registry value.
• Change the NtpServer value from the default to an external stratum 1 time source. More time server information can be found over here. This entry specifies a space-delimited list of stratum 1 time servers from which the local computer can obtain reliable time stamps. The list can use either fully qualified domain names or IP addresses. (If DNS names are used, you must append ,0x1 to the end of each DNS name.)
In the key HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\Config:
• Change AnnounceFlags REG_DWORD from 10 to 5. This entry controls whether the local computer is marked as a reliable time server (which is only possible if the Type entry is set to NTP as described above).
2. Stop and restart the time service:
net stop w32time
net start w32time
3. Manually force an update:
w32tm /resync /rediscover
(Microsoft KB article # 816042 provides detailed instructions for this process.) Apart from that you can refer to this link as well.
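
On Windows 2003 and later the same result can be achieved without touching the registry directly, using w32tm /config. A hedged sketch of the equivalent commands — the NTP server name is only an example, so pick an appropriate source for your region:

```powershell
# Sketch: point the PDC emulator at an external NTP source with w32tm
# instead of editing the registry by hand. Run on the PDC emulator holder.
# "pool.ntp.org" is an example source only - substitute your preferred server(s).
w32tm /config /manualpeerlist:"pool.ntp.org,0x1" /syncfromflags:manual /reliable:yes /update

net stop w32time
net start w32time

# Force an immediate sync and verify
w32tm /resync /rediscover
```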

As a rule of thumb, test this before applying it to the production network. This is recommended if your organization is preparing to move to a 100% virtualized environment. If not, at all costs try to keep one DC on a physical server 🙂

Update: I found out Microsoft has already released an article about running domain controllers on Hyper-V. You can download the document from here.

AD DS: Database Mounting Tool (Snapshot Viewer or Snapshot Browser)

With Windows 2008, Microsoft introduced a new tool called the Active Directory database mounting tool (Dsamain.exe). It was referred to as the Snapshot Viewer and the Active Directory data mining tool during the early releases of Windows 2008. The cool thing about this tool is that you can take snapshots of your AD database and view them offline.

As per Microsoft’s explanation, this is really helpful for forest recovery and AD auditing purposes. In the case of AD object deletion you can load a snapshot and compare your current AD against it.

Before the Windows Server 2008 operating system, when objects or organizational units (OUs) were accidentally deleted, the only way to determine exactly which objects were deleted was to restore data from backups. The pain behind this is:

  • Active Directory had to be restarted in Directory Services Restore Mode to perform an authoritative restore.
  • An administrator could not compare data in backups that were taken at different points in time (unless the backups were restored to various domain controllers, a process which is not feasible).

But one thing to note is that this is not a method to recover deleted objects; it is merely a method to show you what has happened, by doing a comparison. Apart from that, you’ll need to be a member of the Enterprise Admins or Domain Admins group, or else be a user account granted the particular rights.

Now getting back to the action: to take snapshots, mount them and view them, you need to know about three tools.

1. Ntdsutil – create, delete, mount and list snapshots.

2. Dsamain.exe – this allows us to expose a snapshot as an LDAP server.

3. LDP or the Active Directory Users and Computers MMC to view the mounted snapshot.

So the steps are going to be as follows:

1.    Manually or automatically create a snapshot of your AD DS or AD LDS database.
2.    Mount the snapshot.
3.    Expose the snapshot as an LDAP server.
4.    Connect to the snapshot.
5.    View data in the snapshot.

Manually creating the snapshot of the AD DS

1. Logon to a Windows Server 2008 domain controller.
2. Click Start, and then click Command Prompt.
3. In the Command Prompt window, type ntdsutil, and then hit Enter.
4. At the ntdsutil prompt, type snapshot, and then hit Enter.
5. At the snapshot prompt, type activate instance NTDS, and then hit Enter.
6. At the snapshot prompt, type create, and then hit Enter.
7. Note down the GUID returned by the command.
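
The interactive steps above can also be collapsed into a single line, which is handy later if you want to schedule snapshot creation as a task (a sketch — quoting matters here):

```powershell
# Sketch: create an AD DS snapshot in one line, e.g. from inside a scheduled task.
# Each quoted string is a command fed to ntdsutil in sequence.
ntdsutil "activate instance ntds" snapshot create quit quit
```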


Mount the snapshot

1. If you didn’t close the previous window, just go back to it, type list all and press Enter.
2. Once you get the list of snapshots you can select one to mount. In this scenario type mount 2 and press Enter.
3. If the mounting was successful, you will see Snapshot {GUID} mounted as PATH, where {GUID} is the GUID that corresponds to the snapshot, and PATH is the path where the snapshot was mounted.
4. Note down the path.


Expose the snapshot as an LDAP server

OK, so far we’ve managed to create a snapshot and mount it. Now we need to expose the snapshot so we can view it with the LDP utility or the ADUC MMC. In this scenario we’re going to use the second utility (Active Directory Users and Computers).

1. Open a new command prompt

2. In the Command Prompt window, type dsamain /dbpath C:\$SNAP_201001281107_VOLUMEC$\WINDOWS\NTDS\ntds.dit /ldapport 51389 (instead of the default port 389 we’re using an alternative port for the snapshot, to minimize any conflicts with the live AD DS).
Note: “C:\$SNAP_201001281107_VOLUMEC$” is the path we got a few steps before and represents the snapshot’s mounted path on our system.

3. "Microsoft Active Directory Domain Services startup complete" will appear in the Command Prompt window after running the above command. This means the snapshot is exposed as an LDAP server, and you can proceed to access data on it. NOTE: Do not close the Command Prompt window or the snapshot will no longer be exposed as an LDAP server. 


Connect to the snapshot

We can use any utility which can read LDAP data. In this demonstration, as I mentioned earlier, I’ll go ahead and use the Active Directory Users and Computers snap-in.

1. Open ADUC.
2. Right-click ADUC and select the “Change Domain Controller” option.
3. Type the domain controller name with the custom port number, e.g. “CONTOSO-DC:51389”.
4. Now you’re looking at the data in the snapshot. Go ahead and open another ADUC window; that one will open the current AD DS.
5. Go ahead and make a change on the live AD DS, then check the two MMCs again. You’ll see the snapshot data does not change.


So as I mentioned, this is a really cool feature and saves a lot of time. If you don’t like creating snapshots manually, you can create a scheduled task to create them automatically. One concern is that these snapshots are not encrypted, so if one gets into the wrong hands it is bad for you. So try to keep them in a safe location, and encrypt them for added security.

Giving attention to good old redirusr and redircmp commands

I’ve been meddling with some GPO issues and came across these two commands. They have been with us since Windows 2000 and 2003. What brought my attention to them is how you can use them to comply with security auditing. More information about how to use these commands can be found over here.

Well, first let’s take an example from an enterprise company. Most of the time the AD admin will get a mail or a request from HR or the relevant department asking for a new user account. Once you get that request you’ll create the user account, and by default it will go into the Users container in ADUC. Due to your busy schedule, you’ll forget to move the account to the correct OU. Even though this may only be a matter of a few hours’ or a few days’ delay in moving the account to the relevant OU, security-wise it is a big risk!

One way I can think of eliminating or minimizing this: whenever a new user account is created or a new computer is added to the domain, it is placed in a different OU which has its own GPOs assigned. In that particular GPO you can edit the security settings to comply with the company IT security policy, granting minimal user rights until the account is moved to the correct OU 🙂
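
As a sketch, redirecting the defaults to locked-down staging OUs looks like this. The OU names are hypothetical examples — create your own first, then run these on a domain controller:

```powershell
# Sketch: redirect newly created user and computer objects to staging OUs
# that carry restrictive GPOs. OU names below are hypothetical - create
# them first, then run with Domain Admin rights on a DC.
redirusr "OU=Staging-Users,DC=contoso,DC=com"
redircmp "OU=Staging-Computers,DC=contoso,DC=com"
```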

In a nutshell this may seem a simple thing, but in IT security terms it is a big step. So go ahead, roll up your sleeves, give it a try in your company network and be safe!

Active Directory Administrative Center

Windows 2008 R2 has significant improvements over the previous Windows versions (it should, shouldn’t it?). Some of the key improvements are in the areas of virtualization, power saving, PowerShell, RDS and also on the administrative management side. In this article I’m going to focus on one of the new features in Windows 2008 R2.

The new management tool is the “Active Directory Administrative Center”. This is a new Microsoft Management Console (hmm… MMC 4.0, a future version to come, I’m pretty sure).

Microsoft sees this as a centralized console administrators can use to manage different domains from one place. We’re all familiar with Active Directory Users and Computers, so think of this as an additional management tool, driven by PowerShell technology.

You can use Active Directory Administrative Center to perform the following Active Directory administrative tasks:

  • Create new user accounts or manage existing user accounts
  • Create new groups or manage existing groups
  • Create new computer accounts or manage existing computer accounts
  • Create new organizational units (OUs) and containers or manage existing OUs
  • Connect to one or several domains or domain controllers in the same instance of Active Directory Administrative Center, and view or manage the directory information for those domains or domain controllers
  • Filter Active Directory data by using query-building search

What you cannot do is install ADAC on Windows 7 (still doubtful) or Windows 2000/2003. Currently ADAC is only supported on Windows 2008 R2 🙂


The interface of ADAC can be customized to a certain level. If you’re an administrator managing a considerable number of domains and servers, this will be a great tool for performing management tasks centrally.

Hint: without even knowing it, what you’re really using in the background is PowerShell. So if you moan about PowerShell being difficult, stop that 🙂
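
Since ADAC sits on top of the Active Directory module for Windows PowerShell, the same operations are scriptable directly. A small hedged sample of the kind of cmdlets it drives — the account name here is a made-up example:

```powershell
# Sketch: the kind of cmdlets ADAC runs behind the scenes
# (Active Directory module, Windows 2008 R2 and later).
# The user name is a made-up example.
Import-Module ActiveDirectory

# Create a disabled user, then query for it
New-ADUser -Name "Jane Doe" -SamAccountName "jdoe" -Enabled $false
Get-ADUser -Filter 'Name -like "Jane*"'
```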

AD Organizational Unit Design Best Practices

Organizational Units (OU’s) are containers within domains. They are the elements of hierarchical structure within domains. The OU hierarchy does not need to reflect the departmental hierarchy of the organization or group. OUs are created for a specific purpose, such as the delegation of administration, the application of Group Policy, or to limit the visibility of objects.

Some sample methods of how you can create your company’s OU levels:

Characteristics of the Organizational Unit (OU)

· OUs offer the best method of organizing the hierarchical structure in Active Directory. There is a big temptation to reflect the organizational hierarchy in the domain structure, but as we learned in the Domain Design section, that is not a good idea. The organizational unit is best suited for the job.

· OUs can easily be renamed, moved, and deleted. Using Active Directory Users and Computers (ADUC), manipulating OUs is easy. Renaming an OU does not affect the objects inside it. Moving an OU moves all objects and containers inside it. Here’s the tough one: deleting an OU deletes all containers and objects inside it. So be careful!

· Maintaining the organizational hierarchy using OUs has less impact on performance than maintaining it with domains. While each additional domain requires at least two more domain controllers, an OU has no such requirement. Additional OUs also add hardly any replication overhead — just enough to replicate the OU hierarchy within the domain, and that’s it.

· Organizational units are bound within the domain. No organizational unit exceeds the domain boundary. Similarly named OUs in different domains are independent of each other. To put it another way, all domain controllers in a domain contain the same set of OUs for that domain, and only for that domain.

· The OU offers a good administrative boundary. Permissions to Active Directory objects can be delegated at the OU level, and they are inherited by the containers and objects inside that OU.

Reasons to create Organizational Units

· Delegate administrative control. Administration of Active Directory objects can be done at a per-OU level, and these permissions are by default inherited by containers and objects in the OU.

· Implementation of Group Policies. Group Policies can be applied, among other levels, at the OU level. Like administrative permissions, they are inherited by lower-level OUs and objects. We will take a look at Group Policies in the next section.

· Object organization in Active Directory. An Active Directory domain can contain millions of objects. It would be very hard to locate a specific object among millions if there were no mechanism to organize them.

Some OU design principles

· Simplicity is (still) the key. Although we can create as many OUs as we need, it is important to keep the structure as simple as possible. A domain with hundreds of OUs may no longer be supportable. Also, the deeper the OU structure, the longer it takes for a computer to start up or a user to log on, because of the processing of Group Policies through the depth of the container structure. A general rule of thumb is an OU structure that does not exceed a depth of 5 OUs (3 is a conservative figure).

· Know the customer’s political and organizational structure and boundaries. It is important that the organizational and political structure of the customer is understood by the infrastructure architect from day one. As mentioned, we can move objects from one OU to another; however, doing so changes the Group Policies applied to the object, and may not be a wise move after those GPOs have been rolled out.

· Consider separating the user from the workstation. In Group Policy there are separate sections for computers and users. This makes it possible to separate the computer objects from the user objects accessing them, since there might be a separate group of administrators managing them anyway.

· Consider separating the service from the server. In the same way that user objects can be separated from their workstations, services can be separated from the server. This is because Group Policies can also control which services run on a specific machine. For example, all computer objects of web servers running IIS can be placed in one OU, with a Group Policy Object applied to that OU which ensures the World Wide Web Publishing Service starts automatically on those servers while it is disabled for the rest.

Be careful with complex OU structures

Have a principle for OU design, at least at the top levels of the hierarchy. This way, objects won’t get "lost" in an intricate and highly complex OU design. It’s very easy to "lose" an object after creating a complex OU hierarchy with matching delegated permissions to boot: you can successfully find an object with the Find function in ADUC but not be able to access it, because delegated administrative control bars the currently logged-on user from accessing either the object itself or the container (or one of the containers of the container) holding it. In other words, the object exists but is not accessible, and uniqueness rules prevent us from creating a similarly named object. Apart from that, such a structure is simply a nightmare for the administrator to manage 🙂

A popular OU Design

Across the number of companies I have worked with on designing OU structures, one simple rule stands out: keep the OU structure simple and do not let it go too deep. Try to have a maximum of 3-4 sub-OU levels. These can be categorized at the geographical level, the department level or the unit level.
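
Such a geography-then-department structure can be sketched with the Active Directory module on Windows 2008 R2 and later. The domain and OU names below are placeholders only:

```powershell
# Sketch: build a simple geography -> department OU hierarchy with the AD module.
# Domain and OU names are placeholder examples - substitute your own.
Import-Module ActiveDirectory

New-ADOrganizationalUnit -Name "APAC"    -Path "DC=contoso,DC=com"
New-ADOrganizationalUnit -Name "Finance" -Path "OU=APAC,DC=contoso,DC=com"
New-ADOrganizationalUnit -Name "IT"      -Path "OU=APAC,DC=contoso,DC=com"
```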

The same is true for Group Policy implementation. A central group policy applied at the domain level, or separate group policies applied to OUs at either the geographical or administrative OU levels, makes administration centralized or decentralized respectively.

In short, this model allows either centralized or decentralized modes of administration and Group Policy application. If your organization has multiple geographical locations per domain, consider this model.