VMware or Hyper-V, or a heterogeneous environment?

IT is moving towards virtualization for several reasons, but business decision makers are left in the difficult position of deciding which virtualization platform to adopt. It has become a burning question: VMware ESX/vSphere or Microsoft Hyper-V 2.0?

Unfortunately for IT managers this is not a black-and-white decision (at least for most of them). So what do the experts advise? Try both environments before committing to one platform. That is not an easy task for busy admins, but if you think about it, it makes real sense: most blogs and websites cover these products in a biased way rather than an unbiased one. Even a person who tries to be fair will end up biased, since he has usually been using one product far more than the other.

The reasons for a heterogeneous environment are, well, heterogeneous. Some companies are in transition, moving their virtual environments from VMware to Hyper-V. Others choose to segment their servers according to geography or department. Still others pick their virtualization platform according to the underlying physical platform or application. As you can see, moving to a virtualization platform is not all about features; it is about balance and reducing cost as well.

Most of the early virtualization adopters come from a VMware background. Some are considering a move to Hyper-V because of VMware's software licensing costs. Others decide to keep the existing VMware environment as it is and move new requirements to Hyper-V to reduce cost and balance everything, and in that case a heterogeneous environment comes into the picture automatically. So the next question is how to manage a heterogeneous environment. Well, that is not as difficult as we think, since most of the ISVs have already predicted this and provided solutions for it. For example, Microsoft System Center Virtual Machine Manager (VMM 2008) already supports managing VMware ESX as well. But of course how far that manageability goes will depend on how the IT manager wants to accept it (hope you get what I mean 🙂 )

So a cross-platform virtualization environment is not a bad thing from my point of view. You will get your hands dirty on both technologies and will get the chance to evaluate them. But you may decide to move to one platform later, since for most companies a heterogeneous setup is a temporary transition period. To summarize, I want to point out that selecting a virtualization platform is not based on the features alone but on various other factors as well.

HAPPY VIRTUALIZATION TO ALL!


Virtualizing Active Directory service

Most of the time we recommend that customers and partners not virtualize the AD server. The explanation we give is that time synchronization issues can cause problems. So what is this time sync issue, and why should we pay so much attention to it? In this article I'm going to talk about it a little and explain a solution. As a rule of thumb, I have to remind you that this is just my two cents 🙂

Active Directory depends heavily on accurate time for various services (e.g. authentication, replication, record updates, etc.). When AD runs on a physical machine it uses the timer interrupts driven by the CPU clock; since it has direct access to them, the time stays accurate.

When you virtualize it, the main problem you face is the behavior of the virtualized environment. Hypervisors are designed to conserve CPU cycles, and when a guest OS is idle, the CPU cycles delivered to that VM are reduced. Since AD depends heavily on these cycles, missing them at random means the guest's clock will not stay accurate. This problematic behavior is the same whether you use VMware, Hyper-V, or any other third-party virtualization technology. Once the clients and servers are out of sync by more than 5 minutes, authentication and access to network resources become difficult. (A Windows AD environment uses Kerberos authentication, and the default allowed time difference is 5 minutes.)
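
If you want to see how badly a virtualized server has drifted, a quick check against a domain controller helps before and after any fix. Below is a small sketch using the built-in w32tm tool; dc01.contoso.com is just a hypothetical DC name, so replace it with one of your own.

rem Sketch only: measure the local clock offset against a DC (dc01.contoso.com is a hypothetical name)
w32tm /stripchart /computer:dc01.contoso.com /samples:5 /dataonly
rem Show the current time service status and sync source on this machine
w32tm /query /status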

So one method is to allow the AD server holding the PDC Emulator role to sync time with an external time source instead of depending on the CPU clock. To do that you have to edit the registry on the server holding the PDC Emulator role. (As usual, I assume you will take the necessary precautions such as backing up the server, the registry, etc.)

1. Modify the registry settings on the PDC Emulator for the forest root domain (a consolidated script sketch follows after step 3):
In this key:
HKLM\System\CurrentControlSet\Services\W32Time\Parameters\Type
• Change the Type REG_SZ value from NT5DS to NTP.
This determines from which peers W32Time will accept synchronization. When the value is changed from NT5DS to NTP, the PDC Emulator synchronizes from the list of reliable time servers specified in the NtpServer registry entry.
HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Parameters\NtpServer
• Change the NtpServer value from time.windows.com,0x1 to an external stratum 1 time source, for example tock.usno.navy.mil,0x1. More information about time servers can be found over here.

This entry specifies a space-delimited list of stratum 1 time servers from which the local computer can obtain reliable time stamps. The list can use either fully qualified domain names or IP addresses. (If DNS names are used, you must append ,0x1 to the end of each DNS name.) In this key:
HKLM\System\CurrentControlSet\Services\W32Time\Config
• Change the AnnounceFlags REG_DWORD value from 10 to 5. This entry controls whether the local computer is marked as a reliable time server (which is only possible if the previous entry is set to NTP as described above).
2. Stop and restart the time service:
net stop w32time
net start w32time
3. Manually force an update:
w32tm /resync /rediscover
(Microsoft KB article 816042 provides detailed instructions for this process.) Apart from that, you can refer to this link as well.
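
For convenience, here is a rough sketch of the same changes as commands run on the PDC Emulator holder. It simply mirrors steps 1 to 3 above; tock.usno.navy.mil,0x1 is only the example source mentioned earlier, so adjust the time server list (and back up the registry) before trying anything like this.

rem Sketch only: apply the step 1 registry changes on the PDC Emulator (back up the registry first)
reg add HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Parameters /v Type /t REG_SZ /d NTP /f
reg add HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Parameters /v NtpServer /t REG_SZ /d "tock.usno.navy.mil,0x1" /f
reg add HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Config /v AnnounceFlags /t REG_DWORD /d 5 /f
rem Restart the time service and force a resync (steps 2 and 3)
net stop w32time
net start w32time
w32tm /resync /rediscover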

As a rule of thumb, test this before applying it to the production network. This is recommended if your organization is preparing to move to a 100% virtualized environment. If not, then at all costs try to keep at least one DC on a physical server 🙂

Update: I found out that Microsoft has already released an article about running domain controllers on Hyper-V. You can download the document from here.

Windows Server 2008 R2 failover clustering

To those who attended the hands-on workshop on the above topic at Tech.Ed 2010, I do hope you found my demonstration valuable and got something out of it. There I demonstrated how simplified the process of creating a basic cluster scenario in Windows Server 2008 has become. The entire lab was carried out on a single laptop, and I know patience was a virtue at that time 🙂

For the software-based iSCSI solution I used the StarWind product, and it worked like a charm. I have been using this product for most of my demos and am really amazed by its simple GUI console. But don't think it is simplistic software; underneath you will find some advanced features built in. I have blogged about this product several times, since I see growth in software-based SAN solutions in the market.
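
For anyone rebuilding a similar single-laptop lab, the cluster nodes simply connect to the software iSCSI target with the built-in Microsoft iSCSI initiator before the cluster is created. The commands below are only a sketch; 192.168.1.10 and the IQN are hypothetical values for a StarWind target, and you can of course use the iSCSI Initiator GUI instead.

rem Sketch only: connect a node to the software iSCSI target (portal IP and IQN are hypothetical)
iscsicli QAddTargetPortal 192.168.1.10
rem List the targets the portal publishes, then log in to the one holding the shared cluster disk
iscsicli ListTargets
iscsicli QLoginTarget iqn.2008-08.com.starwindsoftware:demo-target1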

If anyone is interested in demonstrating the clustering features in Windows Server 2008, you can download the slide deck from here. I have to admit I used various resources and slides from other people as well, and I thank them all for that.

As I always mention, do contact me if you need more support to build an affordable SAN solution.
