SYSVOL and NETLOGON shared folders missing after a non-authoritative restore

This is an issue I faced at a client site and had to spend hours sorting out. I thought of sharing the experience with other like-minded techies.

First let's have a look at the issue. The client had a non-functional domain controller due to a power failure. Basically, the domain controller had lost its database and other critical data (e.g. DNS records, WINS records, etc.).

Even though an additional domain controller existed, the FSMO roles were assigned to the failed domain controller. By the time we reached the site they had already restored the failed domain controller from a system state backup, and then gone on to restore the system state backup to the second domain controller as well. This brought both DCs to a halt.

Looking into Event Viewer, I found that both DCs couldn't find a healthy partner to sync the SYSVOL contents from, though both kept trying. To make a long story short, I set one DC as authoritative so it wouldn't look for another DC to pull the SYSVOL contents, following KB290762. After that I brought the second DC online and set the "BurFlags" value to D2 under the registry path below.

(HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\NtFrs\Parameters\Backup/Restore\Process at Startup)

After some time I found that both DCs had the SYSVOL folder shared, but without any contents in it. The NETLOGON share wasn't appearing either! Another frustration on the way!

The next step was to restore the SYSVOL to an alternative location, retrieve the contents of the SYSVOL folder, and copy them to one DC's "C:\Windows\SYSVOL\sysvol\<Domain Name>\". Once that was complete, the following instructions were carried out:

Stop the File Replication Service on that particular DC and change the following registry value:

HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\NtFrs\Parameters\Backup/Restore\Process at Startup

Key: BurFlags

Value: D4 (hexadecimal)

Start the File Replication Service; after a while you should see event ID 13516 in the FRS event log.

Restart the Netlogon service, and the NETLOGON folder is shared out again.

Stop the File Replication Service on the other DC and change the following registry value:

HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\NtFrs\Parameters\Backup/Restore\Process at Startup

Key: BurFlags

Value: D2 (hexadecimal)

Start the File Replication Service; after a while you should see event ID 13516 in the FRS event log.
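
For anyone who prefers the command line, here is a minimal cmd sketch of the sequence above for the FRS-based SYSVOL on Windows 2003 (run as an administrator; 0xD4 is the authoritative flag and 0xD2 the non-authoritative one, and the registry path is exactly the one quoted earlier):

    rem --- On the DC holding the good SYSVOL copy (authoritative) ---
    net stop ntfrs
    reg add "HKLM\System\CurrentControlSet\Services\NtFrs\Parameters\Backup/Restore\Process at Startup" /v BurFlags /t REG_DWORD /d 0xD4 /f
    net start ntfrs
    rem Wait for event ID 13516 in the FRS event log, then restart Netlogon
    net stop netlogon & net start netlogon

    rem --- On the other DC (non-authoritative, pulls SYSVOL from its partner) ---
    net stop ntfrs
    reg add "HKLM\System\CurrentControlSet\Services\NtFrs\Parameters\Backup/Restore\Process at Startup" /v BurFlags /t REG_DWORD /d 0xD2 /f
    net start ntfrs

    rem Confirm the shares are back on both DCs
    net share | findstr /i "SYSVOL NETLOGON"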

Once that was complete, both DCs had the same contents in the SYSVOL folder, and Netlogon had been restarted as well. I confirmed users could authenticate and the rest of the applications were working fine 🙂

Almost everything was running perfectly, but as a precaution I requested a full backup of the DCs. Time for a beer, but then again it was midnight, so no way to make that happen either 🙂

Summary: the affected domain controllers mentioned above are Windows 2003 R2. As a rule of thumb, keep in mind that AD replication is a multi-threaded, multi-master replication engine; it can take time, and patience is a virtue.
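
Since patience only helps when replication is actually healthy, a quick sanity check with the built-in tools is worth running before walking away; a small sketch, assuming the Windows Server 2003 Support Tools (repadmin, dcdiag) are installed:

    rem Summarize and list replication status between the DCs
    repadmin /replsummary
    repadmin /showrepl

    rem General DC health checks, including FRS/SYSVOL tests
    dcdiag /v

    rem Confirm which DC now owns the FSMO roles
    netdom query fsmo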

The following links were referred to during the troubleshooting process:

http://support.microsoft.com/kb/315457

http://support.microsoft.com/kb/257338

http://support.microsoft.com/kb/229896

Dynamic Memory allocation with HYPER-V R2 SP1

It's been some time since Microsoft released the Windows 2008 R2 SP1 RC (Release Candidate), and one of its killer features is "Dynamic Memory" allocation. So what exactly is Dynamic Memory? Is it similar to VMware memory overcommit? Dynamic Memory is a way for the hypervisor to over-subscribe memory resources to virtual machines, not overcommit them. You can find more information about the term overcommit here.

It is not a way for virtual machines to use more memory than is in the box. It is essentially a way for virtual machines to share the hardware's memory resources more effectively, allowing the Hyper-V platform to dole out resources as virtual machines require them rather than being constrained to fixed allocations.

So how does it work? First of all, a certain amount of memory is reserved for the host itself and is never released for guest usage. Second, using the latest Microsoft HYPER-V drivers (aka enlightened drivers), the guests and the host constantly communicate about memory requirements. The addition or removal of memory is implemented using the enlightened driver architecture (VSP/VSC/VMBus) of Hyper-V. On the host side, the Virtual Service Provider (VSP) arbitrates the allocation of physical memory resources between the virtual machines running on the host. On the virtual machine side, the Virtual Service Consumer (VSC) collects the information needed to determine the virtual machine's memory needs and executes the operations to add or remove memory.

[Diagram: Dynamic Memory architecture (VSP/VSC/VMBus)]

In order to be able to dynamically add memory to a virtual machine, Dynamic Memory requires that the virtual machine’s guest operating system include a kernel enlightenment that supports Dynamic Memory.

So which operating systems support the Dynamic Memory feature?
· Windows Server 2008 R2 Standard Edition SP1*

· Windows Server 2008 R2 Enterprise Edition SP1

· Windows Server 2008 R2 Datacenter Edition SP1

· Windows Server 2008 R2 Web Edition SP1*

· Windows Server 2008 Standard Edition SP2*

· Windows Server 2008 Enterprise Edition SP2

· Windows Server 2008 Datacenter Edition SP2

· Windows Server 2008 Web Edition SP2*

· Windows Server 2003 R2 Standard Edition SP2 or higher*

· Windows Server 2003 R2 Enterprise Edition SP2 or higher

· Windows Server 2003 R2 Datacenter Edition SP2 or higher

· Windows Server 2003 R2 Web Edition SP2 or higher*

· Windows Server 2003 Standard Edition SP2 or higher*

· Windows Server 2003 Enterprise Edition SP2 or higher

· Windows Server 2003 Datacenter Edition SP2 or higher

· Windows Server 2003 Web Edition SP2 or higher*

· Windows® 7 Enterprise Edition

· Windows 7 Ultimate Edition

· Windows Vista® Enterprise Edition SP2

Note: According to Microsoft documentation, the Beta release of Service Pack 1 does not support Dynamic Memory for the operating systems marked with an asterisk (*) above. However, support for Dynamic Memory for these operating systems will be added in a future release of SP1.

Once you've applied SP1 on a Windows 2008 R2 host and look into a guest machine's settings page, it will look as follows:

[Screenshot: virtual machine memory settings in Hyper-V Manager after SP1]

As you can see, there are a few changes in the memory allocation area. To enable the Dynamic Memory feature you need to select the relevant option and set the minimum and maximum memory for the guest. The Memory Buffer setting specifies the percentage of memory, based on the workload of the virtual machine, that Hyper-V should try to reserve as a buffer.

Memory priority, on the other hand, determines the order in which VMs receive additional memory. If you have several VMs, you can choose which VM should get additional memory first (highest priority) and which should be considered last (lowest priority).

Once these features are enabled, you can view the memory usage of each VM using the following methods:

· Using the two new columns available in the Virtual Machines pane of Hyper-V Manager.

· Using the new performance counters included in Service Pack 1 for Windows Server 2008 R2.

· Added Memory: the cumulative amount of memory added to VMs.
· Available Memory: the amount of memory left on the node.
· Average Pressure: the average pressure on the balancer node.
· Memory Add Operations: the total number of add operations.
· Memory Remove Operations: the total number of remove operations.
· Removed Memory: the cumulative amount of memory removed from VMs.

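If you'd rather watch these counters from the command line, typeperf can sample them. A minimal sketch, assuming the SP1 counter sets are exposed as "Hyper-V Dynamic Memory Balancer" and "Hyper-V Dynamic Memory VM" (confirm the exact names on your host with typeperf -q):

    rem List the Dynamic Memory counter sets actually available on this host
    typeperf -q | findstr /i /c:"Dynamic Memory"

    rem Sample the host-wide balancer counters every 5 seconds
    typeperf "\Hyper-V Dynamic Memory Balancer(*)\Available Memory" "\Hyper-V Dynamic Memory Balancer(*)\Average Pressure" -si 5

    rem Sample per-VM add/remove activity across all VM instances
    typeperf "\Hyper-V Dynamic Memory VM(*)\Added Memory" "\Hyper-V Dynamic Memory VM(*)\Removed Memory" -si 5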

The Dynamic Memory feature is not something you should enable blindly for all VMs. Certain applications may perform poorly with it enabled. If you know the exact amount of memory an application or OS uses, there is little advantage in changing it to dynamic. Off the top of my head, Exchange and SQL are such applications.

VDI solutions can greatly benefit from this option, so if you're planning to implement a VDI solution this is a killer feature.

Microsoft keeps on improving the features offered in the HYPER-V hypervisor. This is good news for customers who are at the stage of moving to virtualization, and also for customers running mixed environments.

Public beta of the service pack for Windows 2008 R2 and Windows 7 is on the way…

Dynamic memory allocation, a 3-D graphical experience for remote users via RemoteFX, and preparation for cloud computing are a few of the major promises coming along with it. In an opening-day keynote speech at Microsoft Corp.'s Tech•Ed 2010 North America conference, it was officially announced that the public can expect this service pack beta at the end of July. Stay tuned and see what you can experiment with.

I would be most happy to see the dynamic memory allocation feature, which will allow a virtual machine to dynamically borrow memory from other virtual machines when they're underutilized.

Windows Server 2008 R2 failover clustering

For those who attended the hands-on workshop on the above topic at Tech.Ed 2010, I do hope you found my demonstration valuable and got something out of it. I demonstrated how simple the process of creating a basic cluster scenario in Windows 2008 has become. The entire lab was carried out on one single laptop, and I know patience was a virtue at the time 🙂

For the software-based iSCSI solution I used the StarWind product, which worked like a charm. I have been using this product for demos most of the time and am really amazed by its simple GUI console. But don't think this is simplistic software; underneath you'll find some advanced features built in. I've blogged about this product several times, since I see growth in software-based SAN solutions in the market.

So if anyone is interested in demonstrating the clustering features in Windows 2008, you can download the slide deck from here. I have to admit I used various resources and slides from other people as well, so I thank them all for that.

As I always mention, do contact me if you need more support to build affordable SAN solutions.

Which version of HYPER-V should I use?

Normally, having too many options within the same product line gets confusing. The options are meant to make your life easier, but they can still burden you when you don't have proper instructions and guidance. The same story goes for HYPER-V. Microsoft offers HYPER-V in several editions, and knowing which version to purchase (or get for free) depends on what you are going to do with it. Apart from that, I wanted to highlight the updated configuration tool available in the Server Core edition: "sconfig.cmd" is a menu-driven command available in Server Core to configure the server, and it has been updated with new sets of options which make a HYPER-V administrator's life easier.
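
As a quick illustration, on a Server Core or Hyper-V Server console the workflow looks roughly like this (a sketch using the 2008-era Server Core tools oclist and ocsetup; note the role name passed to ocsetup is case sensitive):

    rem Launch the menu-driven configuration tool (network, domain join, updates, remote management, etc.)
    sconfig.cmd

    rem Check whether the Hyper-V role is already installed
    oclist | findstr /i Hyper-V

    rem Install the Hyper-V role on Server Core
    start /w ocsetup Microsoft-Hyper-V
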
Now, without further ado, let me introduce one of the charts available on the Microsoft web site which explains which edition to choose.

Apart from server consolidation, some of the other areas where you can use HYPER-V are:

* Test and Development
* Server Consolidation
* Branch Office Consolidation
* Hosted Desktop Virtualization (VDI)

Microsoft's free hypervisor is a good option for testing and R&D. If you are planning to consolidate more than 4 servers onto one physical server, then moving to the Datacenter edition will bring you huge cost savings. More information on licensing and how to maximize your investment in HYPER-V can be had at Tech.Ed 2010. Looking forward to seeing you there.

SCVMM 2008 R2 documentation update is out!

If you're using Microsoft HYPER-V as your mainstream virtualization platform, then you know SCVMM is the centralized management console for managing several HYPER-V hosts. Apart from that, it has the capability to manage hosts across different virtualization technologies as well (e.g. ESX).

Since SCVMM is a dynamic product which keeps evolving, new updates and how-to guides appear frequently. The Microsoft team recently released some documentation updates; you can reach them here. Apart from that, one of the best places to hang around and get the latest info would be HYPER-V @ TechNet.

In my personal view, 2010 to 2012 will be the peak time for the Sri Lankan market to adopt virtualization. Most enterprise companies have been in observation mode, internally reviewing virtualization and how to adopt it. Since virtualization is a vast area, ISVs will have a great opportunity to provide the ideal solutions.

HAPPY VIRTUALIZATION YEAR TO ALL!


A few IT solutions for the SMB/SME market

Regardless of the number of people in a company, from a business perspective SMBs and enterprises have similar requirements of Information Technology. They all expect service continuity, anywhere access and low cost! These days every company's dream is to get the maximum out of its IT investment and still reduce cost without losing functionality. Business continuity is a key factor for the survival of any business; the impact of a service disruption of a few minutes to a few days can be devastating depending on the nature of the business. So how can the SMB market segment overcome these limitations at a fraction of the cost that enterprise companies invest?

To keep things simple, in this article I'll focus on Microsoft products and the features they offer, but as usual hints will be provided for products with similar features as well 🙂

1. Which operating systems should SMB customers invest in – My 2 cents goes to SBS 2008 or EBS 2008. There are significant advantages to these operating systems once properly configured and used. They get less attention because of the nature of the product names, but Small Business Server itself is not a product to be taken lightly; the solution is far more capable than it appears out of the box. If your company falls under the SME segment, then consider a scale-out product like Essential Business Server, which can be spanned across 3 physical or virtual servers. Again, these are enterprise-class products which are limited only by CALs and not by reducing any FEATURES. (Period)

2. Cost cutting on hardware and software purchases – Consider HYPER-V for server virtualization. It is ideal if you can run a few of your legacy applications in their own OS environments so they conflict less with the latest operating system. Believe me, virtualization is the ideal solution for this.

Whatever your next purchase, make sure it is 64-bit and virtualization capable, and always make sure you have enough hardware expansion room (e.g. buy a 2-processor-socket system with one physical processor, and buy RAM leaving enough RAM slots free). Also check whether your existing hardware can be utilized for storage: there are easy ways to convert your existing servers into cost-effective SAN storage and get the maximum out of them. Microsoft's SAN software offering comes through OEM channels, so you can consider a product like StarWind iSCSI storage instead. (More how-to articles on this in the future.)

3. Back up and protect your data – This is part of your service continuity and availability plan. If you're going to have HYPER-V as your virtualization option, consider how to back up the virtualized environments as well. From a Microsoft point of view, DPM 2007 (Data Protection Manager) is the ideal solution to protect your physical and virtual environments. DPM 2010 can be expected around Q2 2010 with lots of new improvements, along with desktop backup and offline laptop backup as well.

When it comes to DR solutions and high availability options, the SMB market has been held back by pricey hardware devices and software. Thanks to various replication technologies and offline backup options, this is becoming a reality for the SMB market as well. Microsoft is working closely with ISV partners to make sure software solutions exist for data replication to DR sites. As I mentioned, StarWind is a very popular company coming up with these solutions. Best of all, these solutions cost a fraction of the price of DAS or a hardware SAN with HBA adapters.

Let me know if anyone is interested in these solutions, and I would be glad to provide more information.

Windows 2008 failover cluster setup (101 guide)

Before jumping into high availability, it would be really good if all readers were on the same level about clustering technology. Recently I went through the history of clustering to get an idea about it; interestingly enough, there is a lot more to clustering than meets the eye 🙂 Some history info about clustering can be found over here.
What is clustering – In its most elementary definition, a server cluster is at least two independent computers that are logically and sometimes physically joined and presented to a network as a single host. That is to say, although each computer (called a node) in a cluster has its own resources, such as CPUs, RAM, hard drives, network cards, etc., the cluster as such is advertised to the network as a single host name with a single Internet Protocol (IP) address. As far as network users are concerned, the cluster is a single server, not a rack of two, four, eight or however many nodes comprise the cluster resource group.

Why cluster?
  • Availability: avoids problems resulting from system failures.
  • Scalability: additional systems can be added as needs increase.
  • Lower cost: supercomputer power at commodity prices.

What are the cluster types?

  • Distributed Processing Clusters
    • Used to increase the speed of large computational tasks.
    • Tasks are broken down and worked on by many small systems rather than one large system (parallel processing).
    • Often deployed for tasks previously handled only by supercomputers.
    • Used for scientific or financial analysis.
  • Failover Clusters
    • Used to increase the availability and serviceability of network services.
    • A given application runs on only one of the nodes, but each node can run one or more applications.
    • Each node or application has a unique identity visible to the “outside world.”
    • When an application or node fails, its services are migrated to another node.
    • The identity of the failed node is also migrated.
    • Works with most applications as long as they are scriptable.
    • Used for database servers, mail servers or file servers.
  • High Availability Load Balancing Clusters
    • Used to increase the availability, serviceability and scalability of network services.
    • A given application runs on all of the nodes and a given node can host multiple applications.
    • The “outside world” interacts with the cluster and individual nodes are “hidden.”
    • Large cluster pools are supported.
    • When a node or service fails, it is removed from the cluster.  No failover is necessary.
    • Applications do not need to be specialized, but HA clustering works best with stateless applications that can be run concurrently.
    • Systems do not need to be homogeneous.
    • Used for web servers, mail servers or FTP servers.

Now, coming back to Microsoft clustering: it goes back to the good old NT 4.0 era under the code name "Wolfpack". After that, Microsoft clustering technology kept growing and improving step by step. The Windows 2000 period gave customers confidence in the stability of Microsoft clustering technology. Field engineers who have configured Windows 2003 clustering will know the painful steps they had to follow to configure a cluster. With Windows 2003 R2, Microsoft offered various tools and wizards to make clustering a less painful process for engineers. If you're planning to configure Windows 2003 clustering, one place you should definitely look into is this site.

Now we're in the Windows 2008 era, and clustering has improved dramatically both on the configuration side and stability-wise. In Windows 2008 the feature has been renamed "failover clustering".

As I have always been telling audiences in public sessions, clustering is no longer a technology focused only on the enterprise market; it can be utilized by the SMB and SME markets as well at a fraction of the cost. As usual I will be focusing on HYPER-V and how, combined with clustering, it can help users get the maximum benefit out of virtualization and high availability. HYPER-V is Microsoft's flagship virtualization technology, and it's a 100% bare-metal hypervisor technology. There is a lot of misguided conception that HYPER-V is not a true hypervisor; the main argument highlighted is that you need Windows 2008 to run HYPER-V. This is wrong!!! You can set up the HYPER-V hypervisor software on a bare-metal server and set up your virtual machines on it. The free HYPER-V-only version can be downloaded from here. Comparisons of HYPER-V editions can be found over here.

So now we have some idea about clustering technology; how can it be applied to the HYPER-V environment to achieve a highly available virtual environment? Let's have a look at a recommended setup for this scenario.

[Diagram: two Hyper-V hosts connected to shared iSCSI SAN storage]

According to the picture, we'll need 2 physical servers; we'll call them Host1 and Host2. Each host must be 64-bit and have a processor that supports virtualization. Apart from that, Microsoft recommends certified hardware. Based on my knowledge, I would say a minimum environment should be as follows:

1. Branded servers with an Intel Xeon quad-core processor (better to have a 2-socket motherboard for future expansion).
2. 8 GB memory and a minimum of 3 NICs; it's always better to have additional NICs.
3. 2 x 76 GB SAS or SATA HDDs for the host operating system.
4. SAN storage. (Just hold on there folks, there is an easy way to solve this expensive matter… :)

The above system is fully capable of handling a decent workload. Now for the configuration part 🙂 I'll try to summarize the steps along with additional tips where necessary. The following steps will help you configure a Windows 2008 file server cluster; HYPER-V high availability follows the same steps. Due to hardware limitations I have decided to demonstrate Windows 2008 file server clustering.

1. Install Windows 2008 Enterprise or Datacenter edition on each host computer. Make sure both of them get the latest updates and both hosts have the same updates for all software.

2. Go ahead and install the HYPER-V role.

3. Configure the NICs accordingly. Taking one host as the example, the NIC configuration will be as follows (a netsh sketch follows this list):
    a) One NIC will be connected to your production environment, so you can add the IP, default gateway, subnet mask and DNS.
    b) The second NIC will be the heartbeat connection between the 2 host servers, so add an IP address and subnet mask only. Make sure it is in a totally different IP range.
    c) The third NIC will be configured to communicate with the SAN storage. I'm assuming we'll be using iSCSI over IP.
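
Here is a minimal netsh sketch of that layout; the adapter names ("Production", "Heartbeat", "iSCSI") and addresses are hypothetical examples, so adjust them to match your own connections:

    rem Production NIC: IP, subnet mask, default gateway and DNS
    netsh interface ipv4 set address name="Production" static 192.168.1.11 255.255.255.0 192.168.1.1
    netsh interface ipv4 set dnsservers name="Production" static 192.168.1.5

    rem Heartbeat NIC: IP and subnet mask only, on a separate private range
    netsh interface ipv4 set address name="Heartbeat" static 10.10.10.1 255.255.255.0

    rem iSCSI NIC: dedicated network towards the SAN storage
    netsh interface ipv4 set address name="iSCSI" static 10.20.20.1 255.255.255.0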

4. Now for the SAN storage, you can go ahead and buy expensive SAN storage from HP, DELL or EMC (no offence guys 🙂 ), but there are customers who can't afford that price tag. For them the good news is that you can convert your existing servers into SAN storage. We're talking about converting your existing x86 systems into software-based SAN storage which uses the iSCSI protocol. There are third-party companies which provide software for this; personally I prefer the StarWind iSCSI software.
So all you have to do is add enough HDD space to your server and then, using the third-party iSCSI software, convert your system into SAN storage. This will be the central storage for the two HYPER-V enabled host computers.

5. Go ahead and create the necessary storage on the SAN server. How to create the cluster quorum disk and other disk storage will be covered in the relevant storage vendor's documentation. When it comes to the quorum disk, try to make it 512 MB if possible, but most SAN storage won't allow you to create a LUN below 1024 MB, so in that case act accordingly. (Anyway, here are a few steps showing how to create the relevant disks under StarWind.)

[Screenshots: creating the quorum and data LUNs in the StarWind console]

6. Go to one host computer and add the Failover Clustering feature.

[Screenshot: adding the Failover Clustering feature]
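
On a full installation the same feature can be added from the command line as well; a small sketch using the 2008-era servermanagercmd tool:

    rem Check whether the feature is already present
    servermanagercmd -query | findstr /i Failover

    rem Install the Failover Clustering feature
    servermanagercmd -install Failover-Clustering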

7. Go to the iSCSI Initiator on Host1 and connect to the SAN storage. As seen in the picture, click Add Portal and enter the IP address of the SAN storage. Once connected, it'll show the relevant disk mappings. (It's that easy in Windows 2008 R2 now.)

[Screenshots: iSCSI Initiator, adding the SAN target portal and connecting to the targets]
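
The same connection can be scripted with iscsicli if you prefer; a sketch assuming the SAN portal answers on 10.20.20.100 (an example address) and the target IQN is the one reported by ListTargets:

    rem Point the initiator at the SAN's portal IP
    iscsicli QAddTargetPortal 10.20.20.100

    rem List the targets the portal exposes and note the IQN you need
    iscsicli ListTargets

    rem Log in to the chosen target (replace the IQN below with the one reported above)
    iscsicli QLoginTarget iqn.2009-12.com.example:cluster-storage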

8. Once that is complete, go to Disk Management, initialize the disks, format them and assign drive letters accordingly (e.g. drive letter Q for the quorum disk, etc.).

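If you prefer to script the disk preparation, diskpart can do the initialize/format/assign in one pass. A hedged sketch, assuming the quorum LUN shows up as disk 1 (confirm with "list disk" first) and using the R2-era diskpart syntax:

    rem Build a diskpart script for the quorum disk
    (
      echo select disk 1
      echo online disk
      echo attributes disk clear readonly
      echo create partition primary
      echo format fs=ntfs label="Quorum" quick
      echo assign letter=Q
    ) > prepare-quorum.txt

    rem Apply it
    diskpart /s prepare-quorum.txt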

9. Go to Host2, open the iSCSI Initiator and add the SAN storage. Then go to Disk Management and assign the same drive letters to the disks as configured on Host1.

10. Go to the cluster management console and start setting up the cluster. One cool thing about Windows 2008 cluster setup is the cluster validation wizard. It runs a series of configuration checks to make sure you have completed the cluster setup steps correctly. This wizard is a must, and you should keep the report safely in case you need to get support from Microsoft or other technical personnel. Once the cluster validation is complete, we can go ahead and add the cluster role; in this case we'll be selecting File Server.


11. Once the cluster validation is completed, go ahead and create the cluster and then a clustered service. In this demonstration I'll use the clustered file server feature.


Go ahead and give an administration name for the clustered service, and after that select a disk for the shared storage; for this we'll use a disk created on the SAN storage.


12. Once that step is completed you'll be back in the cluster management console, where you'll be able to see the clustered server name you've created. So we have created the cluster, but we still haven't shared any storage. Now we'll go ahead and create a shared folder and add a few files so users can see them.


Now, once we log in from a client PC, we can type the UNC path and access the shared data on the clustered file server 🙂
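
From the client side, a quick command-line check looks something like this (CLUSTERFS and Data are hypothetical names; use whatever client access point and share name you created above):

    rem Browse the shares published by the clustered file server
    net view \\CLUSTERFS

    rem Access the clustered share over its UNC path
    dir \\CLUSTERFS\Data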


Phew…!! That was the longest article I have ever written 🙂 OK, I guess by now you'll have the idea that Windows 2008 clustering is not very complicated if you have the right tools and resources. That was the outer layer; internally, to secure the environment, we'll need to consider things like CHAP authentication, IPsec, etc. Since this is a 101 article, I kept everything simple.

Let me know your comments (good or bad) about the article so I'll be able to provide better information that will be helpful for you all.

Upgrading your Domain controllers to Windows 2008 or Windows 2008 R2

So you have been running a Windows 2000 or Windows 2003 AD environment for quite some time and fancy a change. Windows 2008 has been out there for almost 16 months now, and Windows 2008 R2 has since been released. In this article we'll discuss some of the key facts you need to consider before you jump into the upgrade process, and some of the pitfalls you need to avoid.

What are the upgrade options available for me?

In-place upgrade – In this method you upgrade your existing server to Windows 2008 or Windows 2008 R2. The key thing is that you can't in-place upgrade Windows 2000; you need to upgrade it to Windows 2003 first. (Do you really want to install Windows 2008 on that old hardware? 🙂) There are a few caveats you need to take into consideration before going down this path:

  • The Windows Server 2003 patch level should be at least Service Pack 1
  • You can’t upgrade across architectures (x86, x64 & Itanium)
  • Standard Edition can be upgraded to both Standard and Enterprise Edition
  • Enterprise Edition can be upgraded to Enterprise Edition only
  • Datacenter Edition can be upgraded to Datacenter Edition only

Apart from that, consider your domain and forest functional levels as well. Windows 2008 R2 brings some cool roles and features, but to get them you need to raise the functional levels to R2.

Transitioning – This method means you'll be adding Windows Server 2008 domain controllers to your existing Active Directory environment. After that you migrate the FSMO roles to the new servers and safely demote the existing Windows 2003 domain controllers. You'll have to purchase new hardware for this; if you're planning to reuse your existing hardware, you'll temporarily have to bring in a new server with Windows 2008 to get the roles transferred. A few things to remember at that time:

  • Global Catalog availability
  • Enable your new 2008 DCs as DNS servers (if using Microsoft DNS)
  • PDC Emulator sync with external time source
  • Ensure the demotion of your existing DCs is fully replicated to all your other DCs before promoting the replacement (if re-using the same name and IP address).
  • Changes to your backup and recovery procedures
  • Anti-virus software compatibility with 2008
  • Monitoring software compatibility with 2008
  • Any other services/applications running on your existing DCs (e.g. CA, WINS, DHCP, File and Print).
  • Applications and systems that may be impacted during the outage of your DCs during the demotion/promotion (i.e. those that may be hard-coded to the name or IP address).

Transitioning is possible for Active Directory environments whose domain functional level is at least Windows 2000 Native. In a way this is my favourite method, considering how risky an in-place upgrade can be.
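
Whichever path you choose, before the first Windows 2008 (or R2) domain controller can join the existing forest you have to prepare the schema and domains with adprep from the installation media. A minimal sketch (the adprep folder is \sources\adprep on the 2008 media and \support\adprep on the R2 media; on 32-bit DCs the R2 media provides adprep32.exe instead):

    rem On the schema master:
    adprep /forestprep

    rem On the infrastructure master of each domain being upgraded:
    adprep /domainprep /gpprep

    rem Only needed if you plan to deploy read-only domain controllers:
    adprep /rodcprep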

Restructuring – In this method you create a totally different domain and transfer the existing domain details (e.g. user accounts, passwords, profiles, etc.) to the new domain. One good example is when a company has two or three domains and wants to merge them into one. Microsoft ADMT (Active Directory Migration Tool) is one of the useful tools in this scenario; apart from that, there are third-party tools available for this kind of transition.

When it comes to upgrading your domain environment, careful planning at the beginning is vital to avoid unnecessary problems, which can turn out to be irreversible. So take time to read the documentation and do lab environment tests. Here is one good article which can give you some useful information.

PM me if you need any assistance migrating your company's domain environment to Windows 2008.

Better together with Windows 2008 R2 and Windows 7

A few weeks back I had the privilege of conducting a session on Windows 2008 R2 and its new features. This session was combined with one on Windows 7 and its new features, conducted by Sabeshan, who is one of the Microsoft Certified Trainers at NetAssist.

We conducted this session for an audience ranging from field engineers to IT managers. So instead of going too deep into product features, we highlighted the technology and how they can implement it and get a quick ROI from their network. When it comes to new technology, some companies are slow adopters, especially certain enterprise companies, and from a business perspective there are a few reasons for that. We wanted to break that barrier and demonstrate how effectively they can use the technology and meet their expectations with a less complex setup. Though we didn't go to a deep technical level on that day, our future sessions will dive deep into each product feature with live demonstrations.

We demonstrated Windows 7 BitLocker features, VHD boot, the Windows 2008 R2 Active Directory Administrative Center, the AD Recycle Bin, PowerShell, new Group Policy features, etc. DirectAccess would have been a preferred one too, but with the limited time frame I had to keep that for a later time.


Well, there's no fun without some introduction to Microsoft licensing 🙂 So we brought in one of Sri Lanka's distributors to do an introduction as well. Keep in touch guys for more updates on future sessions. Some of the content has been uploaded to the NetAssist training institute web site, which can be reached from the following link:

http://netassist.com.lk/windows7.html