
    Replication Overview

    It helps to manage Active Directory replication if you have a road map of how the domain controllers connect to each other and what information they exchange. In this section, we'll take a look at what Active Directory components get replicated, where the replication traffic goes, how that traffic is managed, and what happens when conflicting updates collide with each other.

    Replication and Naming Contexts

    Domain controllers are like hard-working parents. They can only give what they have. Each domain controller has at least three naming context replicas:

    • Configuration. All domain controllers in a forest have a read/write copy of this naming context.

    • Schema. All domain controllers in a forest have a read-only copy of this naming context.

    • Domain. Each domain controller in a domain has a read/write copy of that domain's naming context.

    In addition, Global Catalog servers host partial naming contexts for domains other than their own. You can also create Application naming contexts for holding DNS zone objects and place those naming contexts on domain controllers running DNS. Figure 7.1 shows a three-domain forest and the naming contexts that would be found on a Global Catalog server in one of those domains.

    Figure 7.1. Diagram of three-domain forest and the naming contexts hosted by a GC in the root domain.


    As you build a mental image of replication, keep in mind that each naming context constitutes a separate replication unit. Domain controllers must propagate changes made to their replica of a naming context out to other domain controllers hosting a replica of the same naming context.

    Connections

    Domain controllers replicate with specific partners. These partners are defined by Connection objects in Active Directory. The map of domain controllers and their connections is called a topology.

    The service responsible for handling replication between two domain controllers is the Directory Replication Agent, or DRA. The DRA depends on the Connection objects in the topology map to determine which partners to contact when replicating updates to a naming context.

Connection objects define inbound replication paths. Domain controllers pull updates from their partners. When a domain controller needs to update its copy of a naming context, the DRA sends a replication request to its partners. The DRAs on the partners respond by assembling a replication packet containing updates to the naming context, then delivering the packet to the requesting partner.

    This replication packet varies in size depending on the memory in the domain controller. The packet size is 1/100 of the amount of RAM. For this reason, it is advantageous to add memory to a DC. A heavily loaded DC would also benefit from a second processor.
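As a rough sketch of that sizing rule (the function names are invented for illustration; the real batching logic is internal to the directory service), the arithmetic looks like this:

    # Illustrative model of the packet sizing described above. The names are
    # invented; the real DRA is internal to Windows.

    def replication_packet_size(ram_bytes):
        """A replication packet is sized at 1/100 of the DC's physical RAM."""
        return ram_bytes // 100

    def batch_updates(update_sizes, ram_bytes):
        """Group pending update sizes (in bytes) into packets under the limit."""
        limit = replication_packet_size(ram_bytes)
        packets, current, used = [], [], 0
        for size in update_sizes:
            if current and used + size > limit:
                packets.append(current)
                current, used = [], 0
            current.append(size)
            used += size
        if current:
            packets.append(current)
        return packets

    ram = 512 * 1024 * 1024                    # a DC with 512MB of RAM...
    print(replication_packet_size(ram))        # ...sends packets of about 5MB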

    Watching domain controllers select their partners is like watching teenagers pick seats in the cafeteria at lunchtime. The DRA prefers to use a single Connection object to define the end points for all the naming contexts hosted by a domain controller. For this reason, domain controllers prefer to replicate with other domain controllers in their own domain. If necessary, a domain controller will replicate its Configuration and Schema naming contexts with one partner and its Domain naming context with another partner, but only if no other options are available.

    Global Catalog servers have a special challenge when selecting replication partners. GC servers need a partial replica of every Domain naming context. They can replicate the partial naming context replicas from another GC or directly from domain controllers in the source domain. Keep this behavior in mind as you lay out your architecture. Make sure that GC servers can link to other GC servers to prevent a server from snaking out links to multiple domain controllers in other domains.

    Property Replication

    It would be a tall order to replicate the entire contents of a naming context each time a domain controller updates its partners. It's more efficient to replicate only the items that change.

    The Exchange directory service from which Active Directory was derived takes the approach of replicating an entire object when any property of that object changes. This makes for a simple replication mechanism, because the DRA simply copies an entire row out of the table holding the object's information. Replicating entire objects abuses the network with unnecessary traffic, though, and complicates the collision handling mechanism if conflicting changes are made in the same replication interval.

    The Active Directory engine replicates individual properties rather than entire objects. This conserves bandwidth at the expense of a little added complexity. It's more difficult to ensure database consistency with lots of individual properties flying around the network.

    To help control property replication, each property contains a set of information that defines when the property was last modified, where the modification originated, and how many total revisions have been applied to the property. This is called the property metadata. The metadata is stored right along with the property's primary value, such as Name or CN or Department. See the "Property Metadata" section later in this chapter.
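A rough picture of what that metadata holds, based only on the description above (the field names are illustrative, not the engine's actual schema):

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class PropertyMetadata:
        version: int            # total revisions applied to the property
        originating_dc: str     # where the modification originated
        changed_at: datetime    # when the property was last modified

    @dataclass
    class ReplicatedProperty:
        name: str               # e.g. "Department"
        value: str
        metadata: PropertyMetadata

    dept = ReplicatedProperty(
        "Department", "Engineering",
        PropertyMetadata(version=3,
                         originating_dc="dc01.company.com",
                         changed_at=datetime(2002, 2, 7, 8, 53)),
    )
    print(dept.metadata.version)   # 3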

    Sites

    Replication reliability is heavily dependent on the underlying infrastructure. If a network link is slow or unreliable, the replication connections using that link will fail.

    Microsoft defines a site as an area of reliable, high-speed network communications. Replication within a site is called intra-site. Replication between sites is called inter-site.

    Deciding where to create sites and how to define and provision the links between those sites constitutes a critical part of laying out the Active Directory architecture.

    Measuring Link Performance

    Unlike classic NT, Windows Server 2003 dynamically measures link performance to determine if a slow or fast link exists. The calculation goes like this:

    1. Ping a server with 0 bytes of data and time the round trip. If the time is less than 10ms, it's a fast link.

    2. Ping the same server with 4KB of data and time the round trip.

    3. Calculate the delta between the 4KB round trip and the 0KB round trip. This results in the time necessary to move 4KB of data.

    4. Repeat 3 times and get an average 4KB transfer time.

5. Convert to bits-per-second and compare to benchmark. The default benchmark is 500Kbps. (The sketch below shows the arithmetic.)
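Here is that arithmetic as a sketch, using made-up round-trip samples rather than real ICMP pings:

    # Sketch of the link-speed test described above, with made-up round-trip
    # times in milliseconds instead of real pings.

    FAST_LINK_RTT_MS = 10         # step 1 threshold for a 0-byte ping
    BENCHMARK_BPS = 500_000       # step 5: default benchmark, 500Kbps
    PAYLOAD_BITS = 4 * 1024 * 8   # 4KB payload expressed in bits

    def classify_link(rtt_0byte_ms, rtt_4kb_samples_ms):
        if rtt_0byte_ms < FAST_LINK_RTT_MS:
            return "fast"
        # Steps 2-4: average the time attributable to moving 4KB of data.
        deltas = [rtt - rtt_0byte_ms for rtt in rtt_4kb_samples_ms]
        avg_transfer_ms = sum(deltas) / len(deltas)
        # Step 5: convert to bits per second and compare to the benchmark.
        bps = PAYLOAD_BITS / (avg_transfer_ms / 1000)
        return "fast" if bps >= BENCHMARK_BPS else "slow"

    # 60ms to move 4KB is about 546Kbps, just over the benchmark:
    print(classify_link(40, [95, 105, 100]))   # "fast"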

    A site is usually a LAN or MAN. It can also be a campus network if you have sufficient bandwidth between buildings. You should have at least 500Kbps of bandwidth to support full speed replication within a site. See the "Measuring Link Performance" sidebar. Even if the links are fast, though, if they regularly become oversubscribed or demonstrate long periods of high latency, you may experience replication problems if you do not define separate sites.

    Sites are also used to limit network traffic caused by LDAP searches and Kerberos authentication. Details of this localization are in the "Localizing Active Directory Access" topic.

    Replication Frequency

    Active Directory uses a loosely coupled replication mechanism. This means an interval of some duration exists between the time a modification is made to a property in one replica and the time the modified property appears in all replicas. During this interval, an LDAP query to one domain controller could produce a different result than the same query submitted to another domain controller. Keep this behavior in mind when troubleshooting problems.

    The time it takes for a modified property to replicate to all domain controllers is called convergence time. Ideally, changes would propagate nearly instantaneously so that convergence time would be zero. That ideal cannot be obtained in a practical network. Convergence time is always a compromise between low network traffic and fast update propagation. Active Directory uses two methods for controlling convergence time: notification and polling.

    Notification

    When a domain controller modifies a property in one of its naming contexts, it notifies its replication partners within a site that a change has been made. The partners then pull a copy of the changed property and apply it to their naming context replica. Those domain controllers, in turn, notify their own replication partners and the change propagates in stages around the site.

    Urgent Replication

Three items are replicated immediately, regardless of the notification interval setting. These are:

    • Account lock-outs

    • Changes to LSA secrets

    • Changes to the RID Manager

    Urgent changes are only replicated quickly within a site. Because inter-site replication partners do not use notification, they cannot propagate urgent replication packets. This can affect lockout handling because the user who entered the wrong password several times might be in a different site than the administrator who needs to reset the lockout.

You might expect password changes to also be replicated urgently, but they are handled using a different mechanism. Password changes are sent directly to the PDC Emulator using a secure channel rather than standard replication. The PDC Emulator acts as a second check for all denied passwords.

    Short notification intervals will propagate changes more quickly than long intervals, but generate more traffic to carry the same amount of information. (Each replication packet is smaller.) The default notification interval is 15 seconds.

    Notification is only used between domain controllers in the same site. Replication between bridgehead servers in different sites uses polling only, not notification. This permits the system to accumulate sufficient changes (more than 50KB) to warrant compression.

    Polling

    Domain controllers periodically query their replication partners to see if any changes have occurred. Shorter polling intervals reduce convergence time.

    The polling interval between domain controllers in the same site is set to 1 hour. This intra-site polling is not intended to propagate changes. It simply acts as a status check to ensure that the replication partner is available in the event that no Active Directory changes are made during that hour.

    The default polling interval between domain controllers in different sites is set to 180 minutes, or 3 hours. This is a long time to wait for updates to propagate. You can set it to a shorter interval.

    Keep these replication intervals in mind. You'll use the numbers over and over as you set up your sites and configure replication parameters. They also affect daily operation. For example, a Help Desk technician responsible for changing group members needs to remember that a change made to a user's group membership could take three hours (or longer) to replicate to the site containing the user who was just added to the group.

    Urgent replication items are propagated between sites using the normal polling frequency. You can enable notification between sites but this is not recommended. See "Controlling Replication Parameters" for details.

    Replication Methods

    Most communication between network entities uses an application-layer protocol. For instance, when a Windows network client copies a file from a Windows server, it uses the Server Message Block (SMB) protocol. When an Internet email client wants to send a message to a post office, it uses Simple Mail Transport Protocol (SMTP). Active Directory replication can use one of two high-level protocols.

    Remote Procedure Calls

    The primary protocol used by Active Directory replication is the Remote Procedure Call, or RPC. RPC transactions are simple to code and have a robust set of tools for creating and managing a connection. RPCs are especially attractive for Active Directory replication because they have a straightforward encryption methodology. Encryption is an essential component of replication. You do not want someone with a packet sniffer to view sensitive directory information as it transits the network.

    In an RPC transaction, an RPC client issues a function call to the complementary RPC server without much regard for the state of the intervening network. This greatly simplifies the way applications are coded. On the other hand, the application can get impatient if it waits too long for a response. This can cause a loss of connection if the client gives up.

    Here's the bottom line: RPCs make for a great data communication tool but they are finicky over wide area connections. For this reason, Active Directory uses two forms of RPC: a high-speed form for use in a local network and a low-speed form for use across a WAN. The low-speed form has higher latency (longer timeouts) and will suffer through multiple connection losses before giving up.

    SMTP

    Active Directory can also use Simple Mail Transport Protocol (SMTP) for transferring replication packets. SMTP is a robust protocol, well suited for use across uncertain network connections. SMTP also permits asynchronous communication, making it possible to transfer replication packets in bulk.

    Unfortunately, SMTP has a couple of serious drawbacks when it comes to Active Directory replication. The first is structural. SMTP transfers messages in clear text. For this reason, the system automatically encrypts SMTP messages using a proprietary form of secure messaging. This form of encryption uses certificates, so you must have a Certification Authority. Encryption puts a significant load on a server, so ensure that the bridgeheads are especially fast with multiple processors to share the workload.

    The second drawback of using SMTP is a limitation of the File Replication Service (FRS). Recall that FRS is used to sync the contents of Sysvol between domain controllers. FRS can only use RPCs to carry replication traffic. In addition, FRS uses the same replication topology (including the same connection options) as those specified for Active Directory replication, so you cannot specify one transport for Active Directory replication and another for FRS.

    Because of this limitation, SMTP cannot be used to replicate the contents of a Domain naming context because the contents of Sysvol cannot be kept in sync. SMTP can be used for all other naming contexts, including the Configuration, Schema, and Application naming contexts and the partial naming contexts that make up the Global Catalog.

    If you have a remote location with a slow, unreliable connection that calls for the queuing capabilities of SMTP, you'll need to create a separate domain for that location.
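The transport rules in this section reduce to a small decision table. Here is a sketch (the function name and transport labels are mine; only the rules come from the text):

    # Which transports can replicate a given naming context, per the rules
    # described in this section.

    def allowed_transports(naming_context, same_site):
        if same_site:
            return {"high-speed RPC"}      # intra-site always uses fast RPC
        if naming_context == "Domain":
            return {"low-speed RPC"}       # FRS must sync Sysvol, RPC only
        # Configuration, Schema, Application, and GC partial NCs can use either.
        return {"low-speed RPC", "SMTP"}

    print(allowed_transports("Domain", same_site=False))   # {'low-speed RPC'}
    print(allowed_transports("Schema", same_site=False))   # RPC or SMTP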

    Replication Topology

    Domain controllers know each other's location and the connections between them. The LDAP term for this topology information is knowledge. The service responsible for tailoring the replication topology is the Knowledge Consistency Checker, or KCC.

The KCC treats the domain controller topology like a game of K'Nex. Every 15 minutes, it surveys the domain controllers in the domain and decides where to place Connection objects so that each domain controller gets its updates in a reasonable amount of time. The KCC on a bridgehead server includes the bridgehead servers in other sites in its calculations.

    The KCC makes its decisions based on a spanning tree algorithm. One of the improvements made in Windows Server 2003 is a streamlining of this algorithm that enables the KCC to handle more sites and larger topologies. In Windows 2000, there was a limit of approximately 100 sites and domain controllers before an administrator would be forced to intervene and create manual connections. Using Windows Server 2003, a much larger number of sites and domain controllers are supported. Microsoft has not specified a limit.

    Intra-Site Topology

    If you're an Exchange administrator, you'll be pleased with the changes made to the KCC in Windows Server 2003.

When it comes to selecting replication partners, the Exchange directory service behaves like a sailor on a 24-hour pass. It creates point-to-point replication connections between every pair of domain controllers in a site. The Active Directory KCC is much more discriminating. It selects a limited number of partners to structure a tightly controlled topology. For intra-site replication, the KCC builds a replica ring. See Figure 7.2 for an example.

    Figure 7.2. Simple replication ring.


When constructing a replica ring, the KCC follows a 3-hop rule: no domain controller is more than 3 hops from any other domain controller. Recall that a domain controller can wait up to 15 seconds to notify its replication partners following a change to one of its naming contexts. By limiting the hop count, the KCC ensures that changes converge quickly: three hops at up to 15 seconds each puts the worst-case notification delay within a site at well under a minute.

    Replica Ring Formation

    When a new domain controller is promoted, the KCC on that domain controller gets a copy of Active Directory in much the same way that aliens invading Earth get the name and location of the White House. They land furtively and use a slimy tendril to suck the brains out of an innocent human being who wasn't doing them any harm at all. (Excuse the emotion. I was born and raised in Roswell, where we're a little sensitive about this sort of treatment.)

During a domain controller promotion, the Active Directory Promotion Wizard creates a connection to an existing domain controller, then uses that connection to pull a full copy of Active Directory. When the KCC on the existing domain controller next runs (sometime in the next 15 minutes), it sees the new connection and builds a complementary connection to the new domain controller. They are now full-fledged replication partners.

    The KCCs on the other domain controllers take note of these changes and proceed to break and make their own connections to insert the new domain controller into the replica ring. This happens without any administrative intervention.

If the ring grows to more than six domain controllers, such as the one in Figure 7.3, the KCC running on each domain controller realizes that there are more than three hops in the ring. It sets to work building optimizing connections between domain controllers to reduce the hop count. Remember that the domain controllers share common knowledge about connections, so they eventually work out a mutually agreeable topology.

    Figure 7.3. Meshed replication ring.

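To see why a plain ring eventually breaks the 3-hop rule, it helps to model the hop count. This is a simplified model, not the KCC's actual algorithm:

    # In a two-way ring with no extra edges, the farthest pair of DCs is
    # n // 2 hops apart, so larger rings need optimizing connections.

    def max_ring_hops(n_dcs):
        return n_dcs // 2

    for n in range(4, 11):
        hops = max_ring_hops(n)
        verdict = "KCC adds optimizing connections" if hops > 3 else "plain ring is fine"
        print(f"{n} DCs: {hops} hops -> {verdict}")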

    Replica Ring Repair

    If a domain controller does not respond to a replication request, the DRA wakes up the KCC. The KCC takes over and builds new connections to bypass the failed domain controller, like a heart muscle healing itself after a heart attack.

    The DRA keeps trying to contact the lost domain controller. When the domain controller comes back online again, the KCC sets to work restructuring the connections to reintroduce it back into the ring.

    Under normal circumstances, all this repair work happens automatically. The only time an administrator should need to do any manual configuration is in the event that the KCC is unable to find a suitable replication partner due to a Domain Name System (DNS) failure. This generally occurs when a failed domain controller is also the DNS server for a site. If you always specify multiple DNS servers in your TCP/IP configuration, you should avoid this problem.

    Inter-Site Topology

The replication picture changes considerably when the domain controllers are in different sites. Let's consider for a moment what would happen if there were no such thing as inter-site replication. Figure 7.4 shows what this would look like.

    Figure 7.4. Replica ring without special site configurations.


    In this configuration, the Directory Replication Agents running on the domain controllers have no way of knowing that the intervening network connections are slow and prone to oversubscription and potential failure. They blithely replicate as fast and as often as they would for normal network connections.

    That's when trouble begins. The high-speed RPC connections begin to fail when the WAN links become oversubscribed and latency increases. The symptoms of RPC failures include persistent differences between replicas, DRA and KCC errors in the Event log, and eventually fatal RPC end-point errors when the connections fail repeatedly.

    Active Directory avoids this carnage by building connections between sites that use special, low-speed RPCs. For this reason, inter-site replication uses an entirely different topology. See Figure 7.5 for an example.

    Figure 7.5. Inter-site replication topology.


    Inter-Site Replication Compared to Intra-Site Replication

    Several features differentiate inter-site replication topology from its intra-site cousin:

    • Replication between sites occurs only between two domain controllers, called bridgeheads.

    • Notification is disabled between bridgehead servers. Replication is controlled solely by polling. The default inter-site polling interval is 180 minutes (3 hours).

    • Replication packets between sites are compressed to conserve bandwidth. Compression puts more of a CPU load on the domain controller, so bridgeheads should be capable machines with sufficient speed and processors to handle their duties.

    Bridgehead Server Selection

    The KCC selects a server to act as the bridgehead for a site. It makes this decision using the following criteria:

    • It looks to see if an administrator has selected any preferred bridgehead servers. If so, it uses these as a selection pool.

    • If there are no preferred bridgehead servers, any domain controller in the site is a candidate.

• The KCC lines up the candidates in order of their Globally Unique Identifier (GUID). The domain controller with the highest GUID wins. (A simplified sketch of this rule follows the list.)
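Here is that sketch. The server names and GUIDs are invented; only the highest-GUID rule and the preferred-pool restriction come from the list above:

    import uuid

    def select_bridgehead(candidates, preferred=None):
        """candidates maps server name -> GUID; preferred is an optional set
        of names the administrator designated as preferred bridgeheads."""
        pool = {name: guid for name, guid in candidates.items()
                if not preferred or name in preferred}
        return max(pool, key=lambda name: pool[name].int)

    dcs = {"DC01": uuid.uuid4(), "DC02": uuid.uuid4(), "DC03": uuid.uuid4()}
    print(select_bridgehead(dcs))                      # highest GUID in the site
    print(select_bridgehead(dcs, preferred={"DC02"}))  # DC02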

    Inter-Site Topology Generator

    The bridgehead selection is something of a secret in that domain controllers in other sites don't know the results until they are told, something like waiting for the College of Cardinals to select a pope.

    Rather than watching for the color of the smoke from the chimney of the Sistine Chapel, the sites wait for a Connection object between the bridgeheads to appear in the Configuration naming context. This Connection object is created by a domain controller designated as the Inter-Site Topology Generator, or ISTG.

    There is only one ISTG in a site. It is selected using the same criteria as the bridgehead server—that is, the domain controller with the highest GUID. For this reason, the ISTG is often a bridgehead server, but it doesn't have to be. For instance, the ISTG might not be on the list of preferred bridgehead servers.

    Identifying the Bridgehead Server and ISTG

    You can identify the server designated as the bridgehead for a site using the REPADMIN utility with the following syntax:

    
    C:\>repadmin /bridgeheads /verbose
    Gathering topology from site Phoenix (s1.company.com):
    Bridgeheads for site Phoenix (s1.company.com):
    Source Site   Local Bridge   Trns   Fail. Time   #   Status
    Houston       S1             RPC    (never)      0   Operation completed successfully.

    Naming Context   Attempt Time          Success Time          #Fail  Last Result
    subsidiary       2002-02-07 08:53:01   2002-02-07 08:53:01   0      Operation completed successfully.
    Configuration    2002-02-07 09:04:42   2002-02-07 09:04:42   0      Operation completed successfully.
    Schema           2002-02-07 08:49:19   2002-02-07 08:49:19   0      Operation completed successfully.
    

    You can identify the ISTG for a site by opening the Properties window for the NTDS Site Settings object in the AD Sites and Services console.

    The ISTG runs as a separate function from the KCC because one site can have more than one bridgehead if there are multiple domains. Each of these bridgeheads has a copy of the Schema and Configuration naming contexts and may have a copy of the Global Catalog partial naming contexts, as well. Inter-site replication would turn into anarchy if all those bridgeheads made independent decisions about where to create Connection objects.

    Inter-Site Topology Highlights

If you're experiencing a little anarchy of your own right now in trying to construct a mental picture of all this, here are some highlights (refer to Figure 7.5):

    • There is only one ISTG per site. It creates the Connection objects that define the replication path between bridgeheads in different sites.

    • There is one bridgehead server for each Domain naming context in each site. One Domain bridgehead server is designated as the bridgehead for the Configuration and Schema naming contexts. Another bridgehead will be responsible for the DomainDNSZones and ForestDNSZones naming contexts if the Configuration and Schema bridgehead is not a DNS server.

    • The KCC selects the bridgehead by picking the domain controller with the highest GUID from a list of candidate domain controllers. If an administrator has selected preferred bridgehead servers, they become the only candidates.

    • If you create an Application naming context and designate domain controllers in different sites to host a replica, each site will have a bridgehead for this naming context. This may or may not be the same bridgehead used by another naming context.

    Failure of a Bridgehead or ISTG

If a bridgehead fails, its partners in other sites will be unable to complete replication transactions. The Directory Replication Agents on the bridgehead's local replication partners will notice that the bridgehead has stopped responding. They snitch to the KCC, which sets to work selecting a replacement. The KCC waits a period of time (two hours by default) before transferring responsibility to the new bridgehead.

    If an administrator has selected a set of preferred bridgehead servers and none of these servers is available, the KCC will not select a replacement bridgehead and inter-site replication will fail. For this reason, it is very important that you select multiple preferred bridgehead servers for each Domain naming context.

If a failed bridgehead comes back online, it does not reassume its old responsibilities. It gets in line as a candidate for replacing the new bridgehead should the new bridgehead ever fail.

Detecting an ISTG failure is a little trickier. The ISTG is like an emeritus professor; it only shows up at ceremonial occasions and funerals. To make sure everyone knows it's still alive, the ISTG periodically updates an attribute called Inter-site-Topology-Generator in its NTDS Site Settings object. By default, it does this update every 30 minutes. The update replicates to the rest of the domain controllers so they know the ISTG is still online. If an hour passes without this attribute being updated, the KCCs on the other domain controllers select a new ISTG using the highest GUID rule.
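A sketch of that keep-alive logic, with the 30-minute and 60-minute values from the text and everything else invented for illustration:

    from datetime import datetime, timedelta

    UPDATE_INTERVAL = timedelta(minutes=30)   # ISTG rewrites its attribute
    FAILOVER_AFTER = timedelta(minutes=60)    # peers act after an hour of silence

    def istg_presumed_dead(last_update, now):
        """Peers select a new ISTG (highest GUID rule) once an hour passes
        without an Inter-site-Topology-Generator update replicating in."""
        return now - last_update > FAILOVER_AFTER

    now = datetime(2002, 2, 7, 10, 0)
    print(istg_presumed_dead(datetime(2002, 2, 7, 9, 45), now))   # False
    print(istg_presumed_dead(datetime(2002, 2, 7, 8, 30), now))   # True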

    Site Objects in Active Directory

    Active Directory stores the objects that control replication under the Sites container in the Configuration naming context. Because every domain controller hosts a copy of the Configuration naming context, every domain controller has the same information about site names, locations, and connections. This is how the KCC services on separate domain controllers all come to the same conclusion about replication topology. They all work from the same crib sheet.

    Figure 7.6 shows how Active Directory objects represent the various components of a replication topology. Here is a list of the objects and their functions:

    • Site. This object acts as a placeholder for the objects underneath.

    • <sitename>. This object represents a specific site. The object contains a Site-Object-BL attribute that points to a Subnet object such as 10.3.20.0/24. Every site must be linked to at least one Subnet object.

    • Subnet. This object contains a Site-Object attribute that points at the linked Site object. A site can be linked to more than one subnet, but a subnet can be linked to only one site.

    • <servername>. This object represents a specific server. It contains attributes that define the DNS host name of the server and its Globally Unique Identifier (GUID). This information is included in each property that is changed on the server and acts as a marker when the changed properties are replicated to other servers.

    • NTDS Settings. This object lists the naming contexts hosted by the associated domain controller. For example, a Global Catalog server in domain DomA in a two-domain forest with DomB would have these entries:

      hasMasterNCs: CN=Schema,CN=Configuration,DC=DomA,DC=com
      hasMasterNCs: CN=Configuration,DC=DomA,DC=com
  hasMasterNCs: DC=DomA,DC=com
      hasPartialReplicaNCs: DC=DomB,DC=com
      
    • NTDS Site Settings. This object has an attribute called Schedule that determines the default replication schedule for the Connection objects in the site. It also has an Inter-site-Topology-Generator attribute that identifies the ISTG for the site.

    • Site Link. This object contains a Site-List attribute that shows the sites that act as end-points for the link. The system defines a default Site Link object called Default-IP-Site-Link.

    • Connection. This object defines parameters for inbound replication to the domain controller. It has a From-Server attribute that identifies the replication partner. A Transport-Type attribute specifies the transport used by the connection. A Schedule attribute defines how often to poll for updates. By default, a Connection object uses the schedule defined by its parent NTDS Site Settings object.

    Figure 7.6. Active Directory objects representing replication topology.

    graphics/07fig06.gif

    The "Configuring Inter-site Replication" section later in this chapter describes how these objects are used when configuring sites and inter-site replication.

    Replication Topology Summary

    Here are the important points to remember when you begin detailing your Active Directory site architecture:

    • Active Directory is divided into separate naming contexts. Each naming context forms a discrete replication unit.

    • Only changed properties are replicated, not entire objects. Properties contain special metadata information used to validate database consistency and control replication traffic.

    • Sites define areas of reliable, high-speed network communication. A high-speed link is 500Kbps or faster.

    • Domain controllers replicate from specific partners. Connection objects in Active Directory define replication partners. All replication is inbound across the connection.

    • The KCC is responsible for mapping out replication topology by creating Connection objects between domain controllers in the same site. The ISTG creates Connection objects between bridgehead servers in different sites.

    • Intra-site topology uses a ring with sufficient meshed connections to maintain a hop count of 3 or fewer. Inter-site replication uses bridgeheads.

    • Within a site, domain controllers notify their partners when updates are pending. The default notification interval is 15 seconds. Between sites, bridgeheads wait for polling. The default polling interval is 3 hours.

• There are three replication transports: high-speed RPCs used within a site, low-speed RPCs used between sites, and SMTP, which can be used between sites for every naming context except the Domain naming context. SMTP replication encrypts its messages using certificates, so it requires a Certification Authority.
