
    Resource Sharing Using the Distributed File System (Dfs)

    Networks are confusing places for the average end user. To tell users that their files are "on the network" is like telling them, "There's a $10 bill waiting for you somewhere in Chicago."

    Figure 16.9 shows a typical set of network resources. Connecting to these shared resources requires mapping a network drive at each client. Organizations spend a great deal of time, energy, and ingenuity figuring out how to deliver a consistent set of drive mappings to their users. The users aren't knowledgeable enough about the network structure to correlate a drive mapping to a server, so these network drives become the servers in the user's mind. You hear this in conversation all the time. "My K drive went down again yesterday. Those IT people really need to get their act together, don't you think?" (UNIX administrators aren't free from this, either. Network File System (NFS) mount points take on a life of their own, as well.)

    Figure 16.9. Standard shared folder structure. User must map to each shared resource.


    Another problem with this way of sharing resources is that you quickly run out of drive letters. Large organizations shuffle logical drive letters constantly. "If you're in Engineering, the H drive points at the Drawings share on server S23. If you're in Accounting, the H drive points at the Financials share on server S13."

    What is needed is a structure where shared directories throughout the organization can be displayed in a single, logical format. A user who wants accounting information goes to an Accounting folder inside this virtual structure and not to some server in Cincinnati.

    The technology to do this aggregation of share points is called the Distributed File System, or Dfs. (Microsoft uses a mix of upper- and lowercase letters to differentiate its product from IBM's Distributed File System, DFS.) This topic examines how Dfs works and how you can use it effectively in a production network to simplify resource access.

    Dfs Structure

    In a nutshell, Dfs defines a hierarchy of shared folders. The Dfs structure mimics a standard directory structure. You can think of Dfs as being a file system made up of share points instead of folders.

    Figure 16.10 shows how the shares in Figure 16.9 would look under Dfs. A user enters the structure at the top via a single mapped drive and navigates through the folders just as if they were inside a big file system on a single server.

    Figure 16.10. Dfs structure that replaces server-centric model.


    Using Dfs, you can organize information in a way that complements the activities of your organization, not your IT department. For instance, a law firm can structure its Dfs by litigation type. An oil company can structure Dfs by business unit: Downstream, Upstream, Midstream, and the like. None of the users know or care about the names of the servers or shares. They figure out where to find their information and they're happy.

    Figure 16.11 shows a Dfs console window with the various architectural elements exposed. If Dfs were a real file system, you would call this structure a volume. Dfs refers to it as a namespace. Here are the major elements of a Dfs namespace:

    • Dfs Root. Every file system needs a root. The root of a Dfs virtual file system resides at a shared folder on a server. In a domain Dfs, you can place copies of the Dfs root on multiple servers for fault tolerance.

    • Links. The virtual file system represented by Dfs consists of a set of virtual folders that represent links to real share points at real servers. A link appears in Explorer as a standard folder.

    • Partition Knowledge Tables. When a Dfs client connects to a Dfs link, it gets a referral to the real share point at a target server. This referral takes the form of a Partition Knowledge Table, or PKT, which contains the identity of the target server and its true share name. If a link has multiple target servers, the PKT sorts the servers in order by site then hands out the referral.

    • Dfs Targets. This is a new term in Windows Server 2003 Dfs. It refers to the server or servers that host the shares that are the target of a Dfs link. A link containing multiple targets is said to be a replicated link. The File Replication Service is responsible for keeping the data in sync between the targets of a replicated link.
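    The site-ordered referral handout described above can be modeled in a few lines. This is a minimal Python sketch, not the actual PKT format; the server and site names are hypothetical examples:

```python
# Toy model of how a multi-target referral is ordered by site
# before being handed to the client. Names are hypothetical.

def sort_referral(targets, client_site):
    """Return targets with same-site servers first, mimicking
    the PKT's site-ordered referral for a replicated link."""
    return sorted(targets, key=lambda t: t["site"] != client_site)

targets = [
    {"server": r"\\S2\highways", "site": "Chicago"},
    {"server": r"\\S4\highways", "site": "Phoenix"},
]

# A client in Phoenix is referred to the Phoenix target first.
referral = sort_referral(targets, client_site="Phoenix")
print(referral[0]["server"])  # \\S4\highways
```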

    Figure 16.11. Dfs console showing architectural elements.


    Let's take a closer look at these elements so we can get an idea how clients navigate through the Dfs structure.

    Dfs Root

    The defining structure of a Dfs namespace takes the form of a set of folders on a Dfs root server. In addition, the root server has a Partition Knowledge Table, or PKT, that contains pointers to the folders in this structure along with the names and shares they represent. There are two types of Dfs roots, distinguished by where the root folder structure and PKT are stored:

    • Standalone root. The folders representing the Dfs namespace reside on a single root server. The PKT information resides in the Registry of that server. If a standalone root server is unavailable, the Dfs structure it hosts is also unavailable.

    • Domain root. The folders representing the Dfs namespace can reside on multiple servers. The PKT information resides in Active Directory where it is replicated to every domain controller in a domain. This provides fault tolerance because one root server can go down and the others can still pass out Dfs referrals to clients.

    As you can probably guess, the domain root is preferred. To have a domain root, though, you need an Active Directory domain. The domain controllers can be running a mixture of Windows Server 2003 and Windows 2000. Dfs is fully compatible on both platforms. If you have a classic NT4 domain, or a simple workgroup, you must use a standalone root and take measures to protect it against downtime or network failures.

    Dfs Root Limitations

    Windows Server 2003 Dfs improves on Windows 2000 Dfs by permitting a server to host multiple domain Dfs roots. A standalone Dfs server, however, can still host only one root.

    A Dfs root must be hosted on an NTFS volume. Dfs links can point to shared FAT or FAT32 volumes, but this is not recommended due to security considerations.

    Although this is not a root limitation, per se, it affects your naming scheme so I'll mention it here. Dfs is subject to the same maximum path length that affects all Windows operating systems. A path cannot be longer than 260 characters (the Win32 MAX_PATH limit). Keep your link names as short as possible so that the deepest link won't exceed this path length.
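    Because a deep link name plus a user's file path can silently blow past this ceiling, it's worth sanity-checking candidate link names during design. A rough Python sketch, assuming the 260-character limit from the text; the root and link names are made-up examples:

```python
MAX_PATH = 260  # path-length ceiling noted in the text

def deepest_path_ok(root_unc, link_names, worst_case_file=r"\some\deep\file.xlsx"):
    """Flag any link whose path, plus a plausible worst-case file path
    beneath it, would exceed MAX_PATH. Returns (link, length) pairs."""
    problems = []
    for link in link_names:
        full = f"{root_unc}\\{link}{worst_case_file}"
        if len(full) > MAX_PATH:
            problems.append((link, len(full)))
    return problems

links = ["HR\\Americas", "Engineering\\Highways\\Interstate-Corridor-Studies"]
print(deepest_path_ok(r"\\company.com\dfsroot", links))  # [] when everything fits
```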

    Registry Tip: Dfs Registry Information

    Dfs information is stored in the Registry under these keys:

    • HKLM | Software | Microsoft | DfsHost. This contains a flag indicating that the server hosts a Dfs root.

    • HKLM | Software | Microsoft | Dfs. This contains the names of the roots, their logical shares, and in the case of standalone root servers, the binary information needed to build the PKT.

    • HKLM | System | CurrentControlSet | Services | Dfs. This contains configuration and parameter information for the Dfs provider, Dfssvc.exe.

    • HKLM | System | CurrentControlSet | Services | DfsDriver. This contains configuration and parameter information for the Dfs file system driver, Dfs.sys.

    Dfs Links

    When a Dfs client connects to a Dfs namespace, it sees a folder structure similar to a file structure. Figure 16.12 shows an example.

    Figure 16.12. Explorer view of a Dfs namespace.


    The "folders" are actually pointers, called links, to the shared folders hosted by servers around the enterprise. Think of Dfs as the 411 of networking. "Dfs Root, may I help you? Accounting files? Those are at the Acct shares on server S73."

    When a Dfs client opens a Dfs folder, the client gets a referral to the target server hosting the shared resource. The client follows up on the referral and makes a connection directly to the target server. This keeps Dfs from becoming a bottleneck to network traffic. The client caches the referral locally so it does not need to go back to Dfs each time it connects to the same folder. The referral has a default timeout of 30 minutes. You can clear a referral manually from the Properties window in the Dfs tab. Click Clear History.
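    The client-side caching behavior just described amounts to a time-to-live cache keyed by link path. Here is a toy Python model of it; only the 30-minute default comes from the text, everything else is illustrative:

```python
import time

REFERRAL_TTL = 30 * 60  # default referral timeout from the text, in seconds

class ReferralCache:
    """Toy model of the Dfs client's referral cache: a link's referral
    is reused until it expires, then the client asks Dfs again."""
    def __init__(self):
        self._cache = {}  # link path -> (target, expiry time)

    def get(self, link, fetch_referral, now=None):
        now = time.time() if now is None else now
        entry = self._cache.get(link)
        if entry and entry[1] > now:
            return entry[0]                # cache hit: no trip back to Dfs
        target = fetch_referral(link)      # expired or missing: ask the root
        self._cache[link] = (target, now + REFERRAL_TTL)
        return target

    def clear_history(self, link):
        """Rough equivalent of the Clear History button on the Dfs tab."""
        self._cache.pop(link, None)

calls = []
def fetch(link):
    calls.append(link)
    return r"\\S73\Acct"

cache = ReferralCache()
cache.get(r"\\company.com\dfsroot\Accounting", fetch, now=0)
cache.get(r"\\company.com\dfsroot\Accounting", fetch, now=60)  # served from cache
print(len(calls))  # 1
```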

    The user can map a drive to the root of the namespace and that is the only drive letter that needs to be expended. For standalone roots, the UNC path would be \\<server_name>\<dfs_root_name>. A domain Dfs, however, can use the name of the domain in the UNC path, such as \\<domain_name>\<dfs_root_name>. The client resolves the domain name into the name of a server hosting a replica of the root so it can read the folder structure.

    For example, the UNC path to the top of the Dfs namespace in Figure 16.12 is \\company.com\dfsroot. You could include this as a drive mapping in a logon script, completely eliminating the need for the user to do any mapping. This makes them happy and your Help Desk happy.
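    The two UNC forms, server-based for a standalone root and domain-based for a domain root, can be captured in a small helper. A Python sketch using the placeholder patterns from the text:

```python
def dfs_unc(root_name, server=None, domain=None):
    """Build the UNC path to a Dfs root: \\server\root for a
    standalone root, \\domain\root for a domain root."""
    host = domain if domain else server
    if not host:
        raise ValueError("need a server name (standalone) or domain name (domain Dfs)")
    return rf"\\{host}\{root_name}"

print(dfs_unc("dfsroot", domain="company.com"))  # \\company.com\dfsroot
print(dfs_unc("dfsroot", server="S1"))           # \\S1\dfsroot
```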

    Because Dfs just gives a referral, not an actual connection, the target server does not need to be running Windows Server 2003. It doesn't even need to be a Windows server at all. If the client has the proper redirector, the target resource can be a NetWare volume, a Banyan drive, an NFS mount on a UNIX host, and on and on.

    The exceptions to this are downlevel Windows 9x/ME clients. These clients can only accept a Dfs referral to an SMB server. To connect a downlevel client to a NetWare server, for example, you would need to use a Windows server as a NetWare gateway. Gateways are invariably slow and often finicky, so you should limit this kind of connection. NT4 clients can accept a non-SMB referral, and so do not need a gateway.

    Dfs Link Limitations

    The most significant limitation with links is that you cannot create child links from an existing link. For instance, if you have a link called HR that points at a target called \\S23\HR, you cannot create another link under HR that points at some other share. In essence, the basic Dfs namespace is one layer deep.

    However, you can create a link from a Dfs root to another Dfs root or a folder in the Dfs namespace under another Dfs root. This permits you to create a master Dfs namespace that encompasses other Dfs namespaces. Figure 16.13 shows an example.

    Figure 16.13. Using links to another Dfs root to create a multilevel Dfs namespace.


    The figure shows a top-level Dfs root called Dfsroot rooted at server S1. A link called Acct points at another Dfs root called Accounting rooted at server S20. When a user connects to the \\company.com\dfsroot namespace, Explorer will display the Accounting root as a folder in the namespace. When the user double-clicks the Accounting folder, the Dfs client receives a referral to the root hosted by server S20.

    This ability to link to other Dfs roots permits you to build hierarchy into your Dfs namespace. However, it also means that you'll need to design your roots carefully so that they mimic your organization. For instance, a university might have separate Dfs roots for Undergrad, Grad, Faculty, and Admin along with a master Dfs root that contains links to these specialized roots. Users could be given a logon script that maps to the top of the master Dfs namespace or to the specialized root, depending on their needs and circumstances.

    Creative Namespace Structures

    If you want to retain a single Dfs namespace (no alternate roots), you can finesse the single level naming limitation somewhat by creatively naming your links.

    For instance, let's say you have an HR department with three divisions: Americas, Asia, and Europe. The three divisions have their files on separate servers and you want to aggregate the share points under a single Dfs folder called HR.

    You can do this by assigning hierarchical names to the links. For instance, you could name the links as follows:

    HR\Americas -> linked to -> \\S23\HR-Americas
    HR\Asia -> linked to -> \\S372\HR-PacRim
    HR\Europe -> linked to -> \\S105\HR-EU

    When these links are displayed to the user, they will appear as one common parent folder called HR under the Dfs root with three subfolders.

    This trick will not work to build a hierarchy that is three layers deep. In the example, you could not create a link with the name HR\Americas\Contractors. The system would error out, informing you that a link of that name already exists.
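    The naming trick and its depth limit boil down to a prefix rule: a new link is rejected when an existing link already claims a path prefix of it. A hedged Python model of that behavior, based on the description above (the link names are the hypothetical examples from the text):

```python
def can_create_link(existing, new_link):
    """Model of the link-name rule described in the text: a new link
    fails when an existing link is a path prefix of it (HR\Americas
    blocks HR\Americas\Contractors), or when the names collide."""
    new_parts = new_link.lower().split("\\")
    for link in existing:
        parts = link.lower().split("\\")
        shorter = min(len(parts), len(new_parts))
        if parts[:shorter] == new_parts[:shorter]:
            return False
    return True

links = ["HR\\Americas", "HR\\Asia", "HR\\Europe"]
print(can_create_link(links, "HR\\Africa"))                # True
print(can_create_link(links, "HR\\Americas\\Contractors"))  # False
```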

    For this reason, you need to give lots and lots of thought to your naming scheme before implementing Dfs. It might take months to prepare for an afternoon's worth of actual configuration work. This will be time well spent, though, because users will love the new, streamlined file access points.

    Partition Knowledge Table

    The PKT contains the Dfs root name and the names of the root servers. It also contains a list of the Dfs links and their target servers. As you can imagine, this list can get fairly long if you have a lot of links and replicas. The information is stored in Unicode, so each entry takes double the number of bytes you would expect.

    On a standalone Dfs root server, the PKT is stored in the Registry. A domain Dfs stores PKT information in Active Directory. Each Dfs root is represented by an FTDfs object that has a PKT attribute.

    The size of a Dfs structure is limited by the maximum size of the PKT. For a domain Dfs, Microsoft recommends keeping the PKT attribute smaller than 5MB. This gives room for about 5000 links at the maximum path length. With reasonable path lengths, you can get a lot more links, but take it from me, having thousands and thousands of links is very difficult to manage. Just opening the Dfs console takes a long, long time.

    A big Dfs structure can also cause replication headaches. Each time you make any change at all to Dfs that affects the PKT, Active Directory must replicate the entire PKT attribute. If the attribute is 3MB or 4MB because you have thousands of links and link replicas, the replication will take a considerable amount of bandwidth as it travels to every domain controller in the domain. It also takes a long time to start a Dfs root server when you have thousands of links.

    The size of a standalone Dfs is limited by the space the PKT takes in the Registry. In previous versions of Windows, this limit was about 13MB, based on the maximum size of the System hive; the System hive cannot be larger than 16MB because of a limitation imposed by the nature of the Windows boot process. A 13MB size limit corresponds to about 10,000 links at the maximum path length.

    In Windows Server 2003, the PKT information now resides in the Software hive. Microsoft still recommends a maximum of 10,000 links for a standalone root.
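    The link counts quoted above can be roughly reproduced from the Unicode storage cost. This is strictly back-of-the-envelope arithmetic, assuming each entry stores two worst-case paths (the link path and one target path); it is not the actual PKT record layout:

```python
MAX_PATH_CHARS = 260
BYTES_PER_CHAR = 2  # PKT entries are stored in Unicode, doubling the byte count

def rough_link_capacity(pkt_budget_bytes, paths_per_entry=2,
                        path_chars=MAX_PATH_CHARS):
    """Approximate how many links fit in a PKT of a given size, assuming
    each entry holds `paths_per_entry` maximum-length Unicode paths."""
    entry_bytes = paths_per_entry * path_chars * BYTES_PER_CHAR
    return pkt_budget_bytes // entry_bytes

# Roughly matches the ~5,000 links quoted for a 5MB domain PKT.
print(rough_link_capacity(5 * 1024 * 1024))  # 5041
```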

    When a client connects to Dfs, the referral it gets comes from the PKT. The entire PKT is not transferred, just the portion referencing the link the client touched. The client caches this information to speed up subsequent connections to the same link.

    You can view the contents of the PKT cache at a client using DFSUTIL from the Support Tools. The syntax is dfsutil /pktinfo. Here is a sample listing where the user has navigated to a Dfs link called Highways that has two replicas; the Active flag points at the replica the client is using:

    C:\Program Files\Support Tools>dfsutil /pktinfo
    1 entries...
    Entry: \Company.com\Engineering\Highways
    ShortEntry: \Company.com\Engineering\Highways
    Expires in 300 seconds
    UseCount: 0 Type:0x1 ( DFS )
       0:[\S2\highways] State:0x21 ( )
       1:[\S4\highways] State:0x31 ( ACTIVE )

    You won't need this PKT information very often, but it is good to remember that it's there when you're troubleshooting. You might be able to resolve a Dfs problem quickly when others are scratching their heads because you can take a quick look at the PKT information and figure out that a client is not getting the information it needs, or has timed out a referral, or has some other problem getting link state information.

    Multiple Dfs Targets

    I said previously that Dfs acts like a directory assistance operator. In actuality, it acts more like a receptionist because it is able to decide where to route an incoming caller. "You want the Sales Vice President? That would be Ms. Proctor, but she isn't in. I'll give you the number of her assistant, Mr. Gamble." (Unlike a real receptionist, Dfs doesn't make the connection for the client. The client must follow up on the referral.)

    The target information is stored in the PKT. If a particular link has more than one target, Dfs returns the PKT information for all the targets. It is up to the client to decide which to select. There are two versions of Dfs and they handle this differently. I'll give you more on this in a moment.

    If a link points at multiple targets, this implies that each target has the same information. Dfs gives the chore of keeping the targets in sync to the File Replication Service, or FRS. You may recall that FRS is the service that keeps the contents of SYSVOL in sync between domain controllers in the same domain.

    File Replication Service and Dfs

    FRS is a general-purpose file synchronization service. Here are a few highlights of the service:

    • The FRS engine is multithreaded. By default, eight files can be transferred at once between replication partners.

    • Replication is based exclusively on notification. Replication partners do not poll each other looking for changes.

    • FRS servers notify their replication partners immediately when a file changes. There is no five-minute delay as there is in Active Directory. Replication partners pull the change immediately upon being notified.

    • An entire file is transferred if any part of the file is changed.

    • FRS uses Remote Procedure Calls (RPCs) for both inter-site and intra-site replication. There is no SMTP option.

    In Windows 2000, FRS was forced to use the same topology as Active Directory replication for all links, which was often not efficient for moving large volumes of data between Dfs replicas in different sites. Windows Server 2003 improved FRS considerably for use with Dfs. It is now possible to control replication topology for each Dfs link.

    FRS topology can be configured in two ways. First, when you define additional targets for a link, the system automatically brings up a Configure Replication Wizard for FRS configuration. This wizard obtains two major pieces of information from you:

    • The identity of the "master" server for the initial replication.

    • Your desired replication topology.

    The "master" server is the one from which the initial file replication will be pulled. After the files have been transferred, there is no subsequent "master" or "secondary" relationship. Changes made at any server are replicated to the remaining servers.

    If the secondary server already has files in the target folder, you may get a surprise when you configure replication to that folder. To prevent the files from being overwritten, FRS moves them to a hidden folder called NTFrs_PreExisting___See_EventLog. Because this is a hidden folder, you may think that FRS has deleted your files. The files in NTFrs_PreExisting___See_EventLog are not staged for deletion, although the Event log entry makes it appear so. Figure 16.14 shows a sample Event log entry.

    Figure 16.14. Event log entry documenting the move of existing files to the NTFrs_PreExisting___See_EventLog folder.


    If you want to preserve the files that were originally on the secondary server, copy them into the main folder following initial replication. From there, FRS will replicate them to the master.

    FRS Replication Topology

    The second function of the Configure Replication Wizard is to select a replication topology. This is a new feature in Windows Server 2003. You can set a unique replication topology for every replicated link.

    Like Active Directory replication, all FRS connections represent inbound data flows. A server notifies its partner that it has files waiting. The partner pulls the files during a replication transaction. Figure 16.15 shows the replication topology options from the Properties window of a multitargeted Dfs link. The options are as follows:

    • Ring. This is the same topology used by Active Directory for intra-site replication, but the FRS connections may not match the KCC connections. Rings make efficient use of bandwidth with a slight delay in full convergence.

    • Full Mesh. This topology minimizes convergence time at the expense of bandwidth. Use it only if you have connections that can handle the traffic.

    • Hub and spoke. If you select this topology, you must identify one server as the "hub" server. The remaining servers replicate from this hub. Use this topology to make best use of WAN bandwidth. If the hub server goes down, though, replication fails throughout the system.

    • Custom. This is the Burger King option. You can design your own topology based on your specific network layout. Be careful to build sufficient connections to prevent a downed link from disrupting too many servers.

    Figure 16.15. FRS replication topology options.


    Active Directory replication across a WAN is managed using sites. A site is an area of reliable, high-speed network connections. Connections between sites are defined by site links that manage bandwidth.

    The topology you define for FRS understands when servers are in different sites, but it makes no allowances for them. If you define a fully meshed topology for a particular link, replication will propagate between all servers regardless of their site affiliation. FRS replication uses RPC for all replication.
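    One way to compare the topology options is by the inbound connections each one creates among the same set of targets. A Python sketch with hypothetical server names (the free-form Custom option is omitted, since its connections are whatever you draw):

```python
def ring(servers):
    """Each server pulls from both ring neighbors (inbound pairs)."""
    n = len(servers)
    return sorted({(servers[(i - 1) % n], s) for i, s in enumerate(servers)} |
                  {(servers[(i + 1) % n], s) for i, s in enumerate(servers)})

def full_mesh(servers):
    """Every server pulls from every other server."""
    return [(src, dst) for src in servers for dst in servers if src != dst]

def hub_and_spoke(hub, spokes):
    """Spokes pull from the hub and the hub pulls from each spoke."""
    return [(hub, s) for s in spokes] + [(s, hub) for s in spokes]

servers = ["S1", "S2", "S3", "S4"]
print(len(ring(servers)))                            # 8
print(len(full_mesh(servers)))                       # 12
print(len(hub_and_spoke("S1", ["S2", "S3", "S4"])))  # 6
```

The counts show the trade-off the text describes: full mesh buys fast convergence with the most connections, hub and spoke is cheapest on WAN links but concentrates risk at the hub.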

    Analysis of Dfs Target Server Failure

    Having multiple copies of the same data on different target servers increases fault tolerance. If any server goes down, the Dfs clients are sent an alternate server with the same data.

    The actual sequence of events that occurs when a target server goes down varies depending on the nature of the failure and what the client was doing at the time the server failed. Here's what happens:

    • If the client has already connected to a server when the server fails, the system recognizes the problem immediately and selects a new target from the referral list in the cached PKT.

    • If the client is navigating the Dfs namespace when the target failure occurs, the Dfs client waits for a timeout on the assumption that the delay is due to network latency. After the timeout occurs, the client selects another target from the PKT referral list.

    • If a user has a data file open and attempts to save it when the target server fails, the system sets to work retargeting the client to an alternate server. During this period, the application shows an hourglass. After the client has been retargeted, the file is saved to the new location and the client behaves normally once again.
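    In each of these cases the client is doing the same thing: walking the cached referral list until a target answers. A toy Python model of that failover walk (the exception type and server names are illustrative, not an actual client API):

```python
class TargetDown(Exception):
    """Stand-in for a failed connection to a target server."""
    pass

def open_via_dfs(referral_list, try_open):
    """Walk the cached referral list in order, falling back to the
    next target when the active one fails."""
    last_error = None
    for target in referral_list:
        try:
            return try_open(target)
        except TargetDown as err:
            last_error = err       # server down: try the next target
    raise last_error or TargetDown("no targets in referral")

# \\S2 is down; the client silently fails over to \\S4.
def try_open(target):
    if target == r"\\S2\highways":
        raise TargetDown(target)
    return f"handle:{target}"

print(open_via_dfs([r"\\S2\highways", r"\\S4\highways"], try_open))
```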

    You can see the target server a Dfs client has selected for a particular Dfs link by opening the Properties window for the link in Explorer and selecting the Dfs tab. This tab is only exposed for Dfs links. Figure 16.16 shows an example.

    Figure 16.16. Dfs link properties in Explorer showing the Dfs tab.


    If you want to change the target server, close all files, highlight the desired target in the Properties | Dfs window, then click Clear History followed by Set Active. The next file you open under the link will come from the new target.

    Limitations of Multiple Dfs Targets

    Before you get too excited at the possibilities of building huge, fully redundant network file systems based on Dfs, you need to know about a few limitations.

    First, FRS only works between servers running Windows Server 2003 and/or Windows 2000. You cannot include a third-party server or a downlevel Windows server in an FRS topology.

    Second, FRS keeps data in sync between replicas but it does not replicate the security descriptors. You must manage access permissions at each replica. This is generally done by establishing a set of permissions at the root folder. These are inherited by the child folders and files. Be sure to document your changes carefully because you must make the same changes at every replica.

    Third, don't permit client-side caching at a Dfs folder. We'll get to details of client-side caching in Chapter 19, "Managing the User Operating Environment," but here's why it should be avoided with Dfs.

    Using client-side caching, a user can "pin" files so they are cached locally and available when the user is offline. Let's say the user pins a few offline files from what appears to be a folder but is actually a Dfs link. The files originated at server S1 with a replica on S2. Now, assume server S1 goes down. A Synchronization Manager service running at the client sees that the connection has been lost and retargets the client to the local file cache. At the same time, the Dfs client sees that the connection has been lost and retargets the user to server S2.

    When S1 finally comes back online again, the Synchronization Manager will replicate any local changes, so there is no data loss, but it is a confusing time for the user and the Help Desk technician who must soothe the user's fears.

    The final and most important limitation when using replicated Dfs targets involves concurrent file use. FRS does a great job of keeping data in sync, but it is not designed to keep file status in sync. This means that file locks are not replicated between targets. Also, there is no central database of file locks or other file status information. This means that concurrent file use should be avoided. Users might have the same file open on different target servers. As they update their files, they overwrite each other's changes.

    The lack of distributed file locks is a serious deficiency in Dfs only when you want to share files used by multiple users. For single-access files, or files that don't change (such as executables), replicated targets is a great tool for distributing files around your organization. You can even use Dfs to distribute RIS files to make sure each office has a consistent set of installations.

    Designing Multiple Dfs Targets

    As you decide what kind of data is right for use with multiple Dfs targets, try to ensure that users do not have concurrent access to the files. If that is not possible, keep the number of potential concurrent users to a small, select group that you can train on the eccentricities of the file-locking limitations.

    Here are some excellent candidates for multiple Dfs targets:

    • Web pages (as long as they are controlled by a small group of webmasters)

    • Executable files

    • Company policies, standards, and procedures (with owners who understand the concurrent use limitations)

    • User home directories, roaming profiles, and redirected folders (with the exception of laptop users and other users with offline files in their personal folders)

    Here are items that should never be placed in Dfs links with multiple targets:

    • Databases (unless it is a personal database only accessible by a single user)

    • Collaborative files such as workflow documents and linked files in Lotus Notes or SharePoint Portal Server

    • Data files accessed by multiple members of the same department or workgroup

    Functional Overview of Dfs Referrals

    When a network client touches a Dfs link, the Dfs service running on the host returns a referral to the client. The referral contains the Partition Knowledge Table entry for the link. The entry defines where the true target or targets reside. The client selects a target, if more than one is presented, and goes there to make the connection.

    There are two versions of Dfs. NT4 and Windows 9x/ME use Dfs revision 2. Windows 2000 and later use Dfs revision 3. The versions are backward compatible, so a Windows 2000 client can access a Dfs root hosted by an NT4 server and vice versa.

    The Dfs version affects how clients initially access the Dfs volume. This can affect load sharing, as well. Here's how.

    MUP Polling

    When an SMB network client reaches out to touch a folder based on its UNC path, the Multiple UNC Provider (MUP) polls the network redirectors to see which one can communicate with the target server. This is the first place where the two versions of Dfs react differently. Here's how:

    • Windows Server 2003, XP, and 2000. In Dfs revision 3 clients, MUP always polls the Dfs provider first to see whether the shared folder is linked to Dfs. If it is, the Dfs driver handles the referral containing the PKT information from the Dfs root server. If the share is not linked to Dfs, the Dfs provider times out and MUP continues on to the next provider in the binding order.

    • Classic Windows. In Dfs revision 2 clients, MUP first polls the standard SMB provider with a query for the attributes of the target share. This query fails in the case of Dfs because Dfs links do not have share attributes. MUP then checks the UNC path on the assumption that the user gave the wrong server name. This also fails because a Dfs junction does not expose a standard path to a shared folder. MUP finally polls the Dfs provider in an attempt to get a Dfs referral. The host server returns the PKT information.

    The main result of this difference is a longer delay in getting connected to a Dfs folder by older clients.
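    The difference in polling order can be modeled as follows. The provider names and the miss counts are illustrative; this is a conceptual sketch of why classic clients see extra timeouts, not the real redirector binding logic.

    ```python
    # Hypothetical model of MUP provider polling for a Dfs link. Only the
    # Dfs provider can handle a Dfs link, so every provider polled before
    # it costs the client a failed attempt (a timeout on the wire).

    REV3_ORDER = ["dfs", "smb"]           # modern clients poll Dfs first
    REV2_ORDER = ["smb", "path", "dfs"]   # classic clients try SMB, then
                                          # the UNC path, then Dfs

    def resolve(order, capable):
        """Poll providers in order; count misses before the first hit."""
        misses = 0
        for provider in order:
            if provider in capable:
                return provider, misses
            misses += 1
        return None, misses

    print(resolve(REV3_ORDER, {"dfs"}))   # ('dfs', 0) -- immediate hit
    print(resolve(REV2_ORDER, {"dfs"}))   # ('dfs', 2) -- two timeouts first
    ```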

    Target Selection

    The second operational difference between Dfs versions affects load sharing by changing the way the client selects a target server:

    • Windows Server 2003, XP, and 2000. Modern Windows servers, when queried by modern Windows clients, return a list of referrals with the local site servers sorted to the top. This is called a managed list. The Dfs client picks the server at the top of the referral list. If it cannot get a reply from that server, it proceeds to the next one on the list and so on.

    • Classic Windows. NT4 and Win9X servers, and modern Windows servers queried by classic clients, return referrals in the order they exist in the Registry (or Active Directory). The client randomly chooses one of the referrals. This means that a classic client might get a referral to a replica in another site. You can avoid this by loading the DSCLIENT patch on the Win9x clients and the Active Directory hotfix for the NT4 clients.

    A client also reports back to the Dfs server if it cannot find the host referenced in a referral. This Report Dfs Inconsistency SMB helps to keep the system free of broken links. Its use is optional, however, so you may encounter situations where the system doesn't realize that a link is broken.

    After the client receives the PKT information in the referral from the host server, it caches this information for a period of time specified in the PKT information. This Time-to-Live (TTL) parameter is configurable in the Dfs console. Open the Properties window for a link and change the Amount Of Time Clients Cache This Referral (in seconds) option. Ordinarily, the default 30-minute (1800-second) cache interval is short enough to provide flexibility in moving around target servers while long enough to limit load on the Dfs server.
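    The caching behavior just described can be sketched as a TTL-keyed lookup, assuming the default 1800-second interval mentioned above. The class and method names here are illustrative, not part of any Windows API.

    ```python
    # Sketch of client-side referral caching with a Time-to-Live. The
    # client re-queries the Dfs root only after the cached entry expires.

    import time

    class ReferralCache:
        def __init__(self, ttl=1800):          # default 30-minute TTL
            self.ttl = ttl
            self._entries = {}                 # link -> (referrals, expiry)

        def get(self, link, fetch, now=None):
            """Return cached referrals, re-querying the root after expiry."""
            now = time.time() if now is None else now
            entry = self._entries.get(link)
            if entry is None or now >= entry[1]:
                referrals = fetch(link)        # ask the Dfs root server
                self._entries[link] = (referrals, now + self.ttl)
                return referrals
            return entry[0]

    cache = ReferralCache()
    fetches = []
    fetch = lambda link: fetches.append(link) or [r"\\S1\data", r"\\S2\data"]
    cache.get(r"\\corp\dfsroot\eng", fetch, now=0)      # queries the root
    cache.get(r"\\corp\dfsroot\eng", fetch, now=100)    # served from cache
    cache.get(r"\\corp\dfsroot\eng", fetch, now=2000)   # expired, re-queries
    print(len(fetches))  # 2
    ```

    Lengthening the TTL reduces load on the Dfs server at the cost of clients holding stale referrals longer after you move a target.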

    Dfs Namespace Design

    When you lay out your proposed Dfs namespace, start with the name for the enterprise root. This is not a name users will see very often because you will map to it in a logon script. A common selection is Dfsroot.

    Now decide how you will handle major information nodes in your enterprise. Ideally, the volume structure should match the logical organization of your enterprise so that users can navigate the Dfs as easily as they navigate an organizational chart.

    A functional approach is to have a top-level link for major departments such as Accounting (Acct), Engineering (Eng), and Sales along with a top-level link with the company name for holding policies, standards, web pages, and other information of general interest. Remember to keep the names short so the paths don't exceed 260 characters.
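    When picking link names, a quick length check helps you stay under the 260-character limit. This is a minimal sketch; the sample names are hypothetical and 260 here reflects the classic Windows MAX_PATH limit cited above.

    ```python
    # Check that a proposed Dfs path stays under the 260-character limit.

    MAX_PATH = 260

    def path_ok(*components):
        """Join path components with backslashes and test against MAX_PATH."""
        path = "\\".join(components)
        return len(path) <= MAX_PATH, len(path)

    print(path_ok(r"\\company.com", "Dfsroot", "Acct", "Reports", "2003"))
    ```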

    You will also probably want links for IT operations: for example, a SoftDistro link for distributing applications, an Apps link for holding server-based applications, a Web link for Internet/intranet files, and a Users link for storing user home directories, roaming profiles, and redirected folders.

    You can choose to have separate roots for these top-level items with links to them from the main Dfs root. There is no performance difference, although you may see some hesitation when clients negotiate the link from one Dfs to another.

    Dfs roots should always be replicated to two or three additional servers for fault tolerance. You won't have many, if any, files at the root of the Dfs, so use a full mesh topology to shorten the convergence time.

    Links containing files controlled by single users (or a small group of informed administrators) are candidates for multiple targets to get redundancy and site affiliation.

    Dfs Deployment

    You will get the most flexibility and reliability by using domain Dfs. If you elect to use a standalone root, the configuration options are similar except that you cannot have the following:

    • Multiple root servers.

    • Multiple target servers for a link.

    • Site-based replica access.

    • Domain-based central naming. (Clients must point directly at the standalone server.)

    Here's a quick rundown of the steps involved in setting up a domain Dfs before looking at the procedures to do the work:

    1. Determine which servers will host Dfs roots. These do not need to be domain controllers, but they must be members of the domain. The domain can be running in either Mixed or Native mode.

    2. Create a shared folder on the Dfs server to act as the Dfs root. Avoid placing files in this volume. Keep the files inside the linked shares. Some administrators like to put a brief text file or web page at the root of the Dfs with instructions on how to navigate.

    3. Create the Dfs root. This can be any name because users won't see it very often.

    4. Assign additional root targets. These servers become your fallback root servers.

    5. Create Dfs links to shared folders. The Dfs volume starts out empty. You add links to shared folders on other servers to structure the volume according to your Dfs design plan.

    6. Assign additional link targets. If you have identified shares as candidates for replicated links, create the shared folders on the additional servers then define the additional targets.

    Create a Dfs Root Directory

    You can use the Dfs console, Dfsgui.msc, to create and manage all roots and links. Launch the console from START | PROGRAMS | ADMINISTRATIVE TOOLS | DISTRIBUTED FILE SYSTEM. You can also use a command-line tool, DFSCMD.

    When you've completed your namespace design and are ready to start deploying Dfs, follow the steps in Procedure 16.6.

    Procedure 16.6 Creating a Dfs Root

    1. Open the Dfs console.

    2. Right-click the Distributed File System icon and select NEW ROOT from the flyout menu. The New Root Wizard opens.

    3. Click Next. The Root Type window opens. Select the Domain Root radio button.

    4. Click Next. The Host Domain window opens. Select the domain to host the Dfs PKT information. The root servers should be members of this domain.

    5. Click Next. The Host Server window opens. Enter the fully qualified DNS name of the server that will host the Dfs root or click Browse to locate the server. This searches Active Directory. It does not use the browser. This ensures that you pick servers from the correct domain. The server need not be a domain controller.

    6. Click Next. The Root Name window opens. Enter the name you selected for the Dfs root. The Preview field will show you the UNC name and the Share To Be Used field will show you the flat name of the share that will be created at the root server.

    7. Click Next. If the share does not already exist on the root server, the Root Share window opens. Enter the full path to the folder. For example: D:\Dfsroot. If the folder does not already exist, it will be created.

    8. Click Finish. The wizard shares the selected folder and creates the root. It makes the necessary entries in the local Registry and creates the FTDfs object in Active Directory.

    Create Dfs Links

    At this point, the Dfs namespace is like a freshly formatted disk. It has a root directory but no data. It's time to build links to share points in accordance with your design document. Open the Dfs console and follow the steps in Procedure 16.7.

    Procedure 16.7 Creating Dfs Links

    1. Right-click the Dfs root icon and select NEW LINK from the flyout menu. The New Link window opens (see Figure 16.17).

      Figure 16.17. New Link window showing Dfs Link Name and share point used for referral.


    2. Under Link Name, enter the name that you want the users to see when they browse Dfs.

    3. Under Path To Target, enter the UNC path to the share point at the server or use the Browse button. This will search the My Network Places browse list. Remember that the target does not have to be a Windows server as long as you are running NT4 clients or later.

    4. Click OK to save the change and add the link to Dfs.

    Users can now browse the contents of the folder via Dfs. You can create additional links to other share points and those folders will automatically appear in the namespace.

    If you decide you want to stop listing a shared folder in Dfs, you can delete the link. The data is not touched.

    Assign Additional Link Targets

    If you have a share that is a candidate for a replicated link, create a share on the secondary server then proceed as directed in Procedure 16.8. You'll be configuring FRS as part of this procedure.

    Procedure 16.8 Designating Additional Dfs Link Targets

    1. In the Dfs console, right-click the link you want to replicate and select NEW TARGET from the flyout menu. The New Target window opens.

    2. Enter the UNC path to the shared folder on the secondary server or use Browse to locate the share. Make sure the Add This Target To The Replication Set option is checked.

    3. Click OK. If the system finds the share point, you will get a message, The target cannot be replicated until replication is configured. Do you want to configure it now?

    4. Click Yes. The Configure Replication Wizard opens.

    5. Click Next. This window lists the UNC paths for the original link and the new target. Highlight the target you want to be the master. (Files will be copied from this server to the other.)

    6. Click Next. Select a topology. The default topology is a ring.

    7. Click Finish. The two targets are now listed in the right pane of the window. The link has an icon indicating that it has multiple targets.

    Use this same procedure to add more targets to the link. The new targets will be added to the FRS replication topology you originally configured. The wizard will not reappear when you add more links.

    Changing Replication Topology

    If you decide after you create a set of multiple targets for a link that you want to change the replication topology, proceed as follows:

    1. In the Dfs console, open the Properties window for the link you want to modify.

    2. Select the Replication tab.

    3. Click Customize. The Customize Topology window opens (see Figure 16.18).

      Figure 16.18. Customize Topology window showing the connections in a fully meshed topology.


    4. Select a new topology. If you select a Hub and Spoke topology, select a server that will act as the master hub server.

    5. Click OK to save the change. This updates the Active Directory object representing the Dfs root. As this object replicates to the rest of the domain controllers, the FRS on each server modifies its replication behavior.

    You can also use the Replication tab to exclude certain file types from replication. By default, .bak and .tmp files are excluded. You can add others. You can also elect to exclude subfolders from replication.

    Removing a Dfs Root

    You can delete a Dfs root if it is no longer required. This requires a little surgery in the Registry of every root server and in Active Directory. Make sure users know that shortcuts to these folders will no longer work, then follow Procedure 16.9.

    Procedure 16.9 Removing a Dfs Root

    1. In the Dfs console, right-click the root icon and select DELETE from the flyout menu.

    2. When prompted to confirm the decision, click Yes to complete the transaction.

    3. In the Registry of each root server, delete the following subtree: HKLM | Software | Microsoft | DfsHost.

    4. Open the AD Users and Computers console.

    5. Enable the Advanced View option if it is not already set using VIEW | ADVANCED FEATURES.

    6. Navigate to System | Dfs-Configuration.

    7. Delete the FTDfs object representing the deleted root.

    8. Restart Dfs by opening a command prompt and entering Net Stop Dfs then Net Start Dfs.

    Managing Dfs from the Command Line

    There are two utilities for managing Dfs from the command line. The DFSUTIL utility from the Support Tools is used to add and remove Dfs roots, to manage the contents of the PKT cache at the client, and to display the structure of a Dfs. We've already seen the PKT information displayed by dfsutil /pktinfo. Here is a listing for the /spcinfo switch:

    C:\Program Files\Support Tools>dfsutil /spcinfo

    If you're familiar with DFSUTIL from Windows 2000, you'll notice that the number of features has been trimmed quite a bit. The older DFSUTIL tool will work with Windows Server 2003.

    The second utility, DFSCMD, cannot create or remove roots but it can do just about anything else, including creating and removing links, creating and removing replica targets, and viewing the Dfs structure.

    A great feature in DFSCMD is the /batch switch, which creates a batch file that can recreate a Dfs structure should it be lost. This should not be necessary in a domain Dfs but it can be a lifesaver for a standalone Dfs root.
