
    Functional Description of Windows Server 2003 Data Storage

    The Windows Server 2003 storage system supports three different disk partitioning schemes, which Microsoft associates with three different disk types as follows:

    • Basic MBR disks. This partitioning scheme uses the classic Intel Master Boot Record (MBR). The MBR contains a data structure called a partition table that defines up to four partitions that can be used to store data.

    • Basic GPT disks. This partitioning scheme is available only on IA64 systems. It uses a new form of partition table that can define up to 128 partitions, each identified by a Globally Unique Identifier (GUID).

    • Dynamic disks. This partitioning scheme is used on dynamic disks in Windows Server 2003, Windows XP, and Windows 2000. Partition information is stored in a database controlled by a service called the Logical Disk Manager (LDM). Both IA32 and IA64 systems can have dynamic disks. On IA32 systems, a copy of the LDM database is stored in the final cylinder of each dynamic disk. On IA64 systems, the LDM database is stored in a Microsoft Reserved Partition near the beginning of the disk.

    Logical Disk Manager Volume Configurations

    The database managed by the Logical Disk Manager replaces Registry-based classic NT disk sets supported by Ftdisk. The Ftdisk driver has been relegated to middle management where it is responsible for handling basic MBR and GPT disks.

    Under normal operation, with each disk holding its own discrete data, there is no need for anything other than a basic disk with partition information stored in an IA32 MBR or IA64 GPT. The same is true for hardware RAID arrays, where the logical disk constructed by the RAID controller appears to the operating system as a single, large basic disk.

    The Logical Disk Manager comes into play when you want more sophisticated disk configurations in software. This is when you would convert the basic disks to dynamic disks and then use the LDM to create one or more of the following volume configurations:

    • Simple volume. This is the equivalent of an MBR or GPT partition. When you create a simple volume, you set aside a certain portion of a disk for use by a file system. There is room in the LDM database for thousands of simple volumes, but it's not likely you'll want more than a handful.

    • Spanned volume. This volume type links together free space on the same disk or from other disks to form a single logical drive. Spanned volumes are the equivalent of classic NT volume sets.

    • Striped volume. This is a RAID 0 configuration. The data stream is divided into chunks that are written to separate disks. Striped volumes have performance advantages, especially when used with a high-speed data bus, but they increase the likelihood of data loss because a single drive failure disables the entire volume.

    • Mirrored volume. This is a RAID 1 configuration. The same data stream is directed onto two disks simultaneously. The file systems on the mirrored volumes remain available if either disk fails. If the disks are on separate controllers, the volume is said to be duplexed. Mirrored volumes exhibit fast seek times because either disk can respond to a read request, but they are slower than single disks for writing because data must be written to two disks simultaneously.

    • RAID 5 volume. In this configuration, the data stream is divided into chunks that are written to multiple disks along with parity information. See the sidebar "RAID 5 Operation" for more information. RAID 5 represents a compromise between performance, fault tolerance, and flexibility. It is slower than striping or spanned volumes but provides fault tolerance. It is slower than mirroring but makes more effective use of storage capacity.

    LDM does not support more modern configurations such as RAID 0+1 (striping with mirroring) or RAID 10 (mirroring with striping).

    Data Chunking and Performance

    The underlying storage drivers in Windows Server 2003 (and NT and Windows 2000) move data to and from the disk subsystem in 64KB chunks. You can improve the performance of hardware RAID arrays by configuring the stripe size on the controller to match the 64KB data transfer value from the operating system.
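The chunking scheme above can be sketched in a few lines of Python. This is an illustrative model of how a striped (RAID 0) volume distributes 64KB chunks round-robin across member disks, not how the actual LDM driver is implemented:

```python
# Illustrative model of RAID 0 striping: the data stream is split into
# fixed-size chunks written to the member disks in round-robin order.
# A real volume driver works at the block layer; this sketch only shows
# how the chunks are distributed.

STRIPE_UNIT = 64 * 1024  # 64KB, matching the OS transfer size

def stripe(data: bytes, num_disks: int):
    """Return a list of per-disk chunk lists."""
    disks = [[] for _ in range(num_disks)]
    for i in range(0, len(data), STRIPE_UNIT):
        chunk = data[i:i + STRIPE_UNIT]
        disks[(i // STRIPE_UNIT) % num_disks].append(chunk)
    return disks

data = bytes(5 * STRIPE_UNIT)      # five chunks' worth of data
disks = stripe(data, 2)            # two-disk striped volume
print([len(d) for d in disks])     # chunks land alternately: [3, 2]
```

Matching the hardware stripe size to the 64KB transfer size means each transfer from the operating system maps cleanly onto one stripe unit, which is why the recommendation in the preceding paragraph pays off.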

    RAID 5 Operation

    If you've never experimented with fault tolerant drive configurations before, try setting up a three-disk RAID 5 array in your lab and then pulling the power plug on one of the drives. You'll get a small notification bubble message from a drive icon in the System Tray and that's about it. The logical drive is still available, albeit with slightly reduced performance.

    RAID 5 accomplishes this magic by calculating and storing parity information that can be used to reconstitute the contents of a lost disk should one fail. This parity calculation uses an XOR, or exclusive OR, function. An XOR calculation works like a party game:

    • If two values match, you get a logical 0.

    • If two values don't match, you get a logical 1.

    Table 14.1 shows an XOR truth table.

    Table 14.1. XOR Truth Table

    A    B    A XOR B
    ---  ---  -------
    0    0    0
    0    1    1
    1    0    1
    1    1    0
    To see how XOR works to recover lost data, cover up any column in the truth table. You can quickly figure out the value of each hidden item based on the contents of the other two columns.

    In the same way, if you remove a disk from a RAID 5 array, the system quickly calculates the value of the missing contents by doing an XOR on the data on the other disks (data XOR parity or data XOR data to get parity).

    This is the reason you must have at least three disks to make a RAID 5 array. The system needs at least two chunks of data to calculate a parity chunk. Unlike some other RAID flavors, the parity chunks in RAID 5 are spread across the drives. This avoids a single point of failure.
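The reconstruction trick described above can be demonstrated directly with bytewise XOR in Python. This is a toy model of the parity math for a three-disk array, not actual driver code:

```python
# Toy demonstration of RAID 5 parity: parity = XOR of the data chunks,
# and any lost chunk is recovered by XOR-ing everything that remains.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Bytewise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

chunk1 = b"CHUNK-ONE-DATA.."           # data chunk on disk 1
chunk2 = b"CHUNK-TWO-DATA.."           # data chunk on disk 2
parity = xor_bytes(chunk1, chunk2)     # parity chunk on disk 3

# Simulate losing the disk that held chunk2, then rebuild it:
rebuilt = xor_bytes(chunk1, parity)    # data XOR parity = missing data
print(rebuilt == chunk2)               # True
```

Because XOR is its own inverse, covering up any one "column" (chunk) leaves enough information in the other two to rebuild it, just as in the truth-table exercise.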

    Keep in mind with RAID 5 that you lose the equivalent of a drive's worth of capacity due to the parity information. If you have four drives of 20GB each, you would lose 25 percent of the total capacity, leaving 60GB of available storage.

    Having more drives in the array makes RAID 5 more space-effective, but it can slow down overall performance if the SCSI bus becomes saturated. If you have an ultrawide SCSI 3 bus populated with fifteen drives, you would lose only 6.7 percent of the total capacity, but you might get very poor I/O results.
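The capacity arithmetic above generalizes: with n equal drives, one drive's worth of space goes to parity, so the overhead fraction is 1/n. A quick check against the figures in the text:

```python
# RAID 5 usable capacity: one drive's worth of space holds parity,
# regardless of how many drives are in the array.

def raid5_usable_gb(drives: int, size_gb: float) -> float:
    """Usable capacity of a RAID 5 array of identical drives."""
    return (drives - 1) * size_gb

def raid5_overhead_pct(drives: int) -> float:
    """Percentage of total capacity consumed by parity."""
    return 100.0 / drives

print(raid5_usable_gb(4, 20))            # 60 GB, as in the text
print(round(raid5_overhead_pct(4)))      # 25 percent
print(round(raid5_overhead_pct(15), 1))  # 6.7 percent
```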

    A regular striped volume does not calculate parity information, so its performance is dramatically better, but you lose fault tolerance.

    LDM Database Structure

    Let's avoid grunt-level detail here and just get a feel for how the LDM database is laid out on the disk. This information helps you to understand what you'll see if you use disk utilities. It will also help you avoid making changes that could render the LDM inoperable (and your data unavailable).

    Figure 14.1 shows a block diagram of the LDM database structures stored on a disk. Here are the components:

    • Private Header. This has entries describing where to find the LDM database and generally defining what's inside. There are multiple copies of this header for fault tolerance.

    • Table of Contents. This is a quick index of the database contents. Redundant copies are stored at the end of the disk for fault tolerance.

    • Volume Manager. This is the database itself.

    • Virtual Blocks. These are the database records, one for each partition, disk, and volume. At 256 bytes per record, there is enough space in the database for thousands of records. Microsoft recommends putting no more than 32 elements in the database. Personally, I think if you have more than one fault tolerant storage element in a server, you need to use hardware RAID.

    • Transaction Log. This is a set of two sectors that hold uncommitted updates to the database to protect against a possible power loss or some other critical failure.

    Figure 14.1. Diagram of Logical Disk Manager disk structures.


    When you add a new dynamic disk to a system, either by promoting a basic disk or creating a new volume on an existing dynamic disk, the system adds a new Virtual Block to the Volume Manager. (This is the equivalent of adding a new record to the LDM database.) Each element (record) in the database is assigned a Globally Unique Identifier, or GUID. The GUID acts as a key for the record.
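As a rough mental model of the GUID-keyed records described above, you can picture the database as a dictionary keyed by GUID. The field names here are invented for illustration; the real virtual-block format is a private binary structure:

```python
# Hypothetical model of LDM database records keyed by GUID.
# The record fields below are invented for illustration only.
import uuid

ldm_database = {}

def add_record(kind: str, name: str) -> str:
    """Add a virtual-block record; the GUID acts as its key."""
    key = str(uuid.uuid4())
    ldm_database[key] = {"kind": kind, "name": name}
    return key

vol_guid = add_record("volume", "Volume1")
disk_guid = add_record("disk", "Disk1")
print(ldm_database[vol_guid]["name"])   # Volume1
```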

    You can view a pile of details about the LDM database contents and structure by using the DMDIAG utility in the Support Tools. Here is a sample listing (the /v (verbose) switch gives ten times this amount of information):

    ---------- Dynamic Disk Information -----------
     DiskGroup: S1Dg0
      Group-ID: e129db61-e6d5-4ff0-9d2e-660f570cc315
       Sub Disk  Rel Sec   Tot Sec   Tot Size  Plex        Vol Type  Col/Ord  DevName    State
       ========  =======   =======   ========  ====        ========  =======  =========  
       Disk1-01  0         10667097  0         Volume1-01  Simple    1/1                 
       Disk1-02  10667097  204800    0         Volume2-01  Simple    1/1                 
       Disk1-03  10871897  204800    0         Volume3-01  Simple    1/1                 
       Disk1-04  11076697  1042177   0         Volume4-01  Mirror    1/1                 
       Disk1-05  12118874  473088    0         Stripe1-01  Stripe    1/2                 
       LDM-DATA  0         0
       Disk2-01  63        8385867  12594960   Volume5-01  Simple    1/1      Harddisk0  
       Disk2-03  10442250  2136645  12594960   Volume7-01  Simple    1/1      Harddisk0  
       Disk2-04  8385930   1028160  12594960   Volume4-02  Mirror    1/2      Harddisk0  
       Disk2-05  12578895  14017    12594960   Volume4-02  Mirror    1/2      Harddisk0  
       Disk2-02  9414090   473088   12594960   Stripe1-01  Stripe    2/2      Harddisk0  
       LDM-DATA  12592912  2048
     ---------- LDM Volume Information -----------
       Volume   Volume Mnt  Subdisk   Plex        Physical   Size      Total     Col  Plex    Rel       Vol     Plex
       Name     Type   Nme  Name      Name        Disk       Sectors   Size      Ord  Offset  Sectors   State   State
       ======   ====== ===  ========  ==========  ========== =======   =======   ===  ======  =======   ======  ======
       Volume1  Simple C    Disk1-01  Volume1-01             10667097  10667097  1/1  0       0         ACTIVE  ACTIVE
       Volume2  Simple R    Disk1-02  Volume2-01             204800    204800    1/1  0       10667097  ACTIVE  ACTIVE
       Volume3  Simple S    Disk1-03  Volume3-01             204800    204800    1/1  0       10871897  ACTIVE  ACTIVE
       Volume5  Simple      Disk2-01  Volume5-01  Harddisk0  8385867   8385867   1/1  0       63        ACTIVE  ACTIVE
       Volume7  Simple      Disk2-03  Volume7-01  Harddisk0  2136645   2136645   1/1  0       10442250  ACTIVE  ACTIVE
       Volume4  Mirror D    Disk1-04  Volume4-01             1042177   1042177   1/1  0       11076697  ACTIVE  ACTIVE
       Volume4  Mirror D    Disk2-04  Volume4-02  Harddisk0  1042177   1028160   1/2  0       8385930   ACTIVE  ACTIVE
       Volume4  Mirror D    Disk2-05  Volume4-02  Harddisk0  1042177   14017     1/2  1028160 12578895  ACTIVE  ACTIVE
       Stripe1  Stripe E    Disk1-05  Stripe1-01             946176    473088    1/2  0       12118874  ACTIVE  ACTIVE
       Stripe1  Stripe E    Disk2-02  Stripe1-01  Harddisk0  946176    473088    2/2  0       9414090   ACTIVE  ACTIVE

    LDM Group Names

    Each dynamic disk is part of a disk group. Members of a disk group share the same LDM database. Windows Server 2003 can only have one disk group. (The commercial version of LDM from Veritas supports multiple disk groups.)

    The disk group is given a name consisting of the computer name followed by the letters Dg0. For example, the dynamic disks on a server named SRV1 would have a group name of Srv1Dg0.

    If you revert all dynamic disks in a server back to basic disks (this requires removing all volumes, converting the drives, and then restoring the data from tape), the next disk converted to a dynamic disk would start a new group named Srv1Dg1.

    Group names play an important role when swapping dynamic disks between servers. If you put dynamic disks into a server, you can import the contents of the LDM database on those disks. When you do this, the disks are made part of the local disk group. For example, if you take disks out of server SRV1 and import them into the LDM database in SRV2, the disks would be given the new group name of Srv2Dg0.

    The Registry keeps track of the disk groups in a machine. You cannot boot from a dynamic disk that is in a different disk group. For example, let's say the disk you imported into server SRV2 was a boot disk. During the import, the disk was added to Srv2Dg0. If you took this disk out and put it back into its original server, you would get a 0x0000007B, Inaccessible Boot Device, blue screen stop as soon as the system compared the disk group name in the Registry with the name in the LDM database. There is no workaround for this, so use extreme caution when moving dynamic boot disks.

    If you take dynamic disks from one server and put them in a server that has no dynamic disks of its own, the system behaves like a cubless she-wolf and adopts the new disks as if they were her own. The disks retain their original disk group name, which includes the name of the original server, not the server into which they were imported. The system assigns this name to any subsequent dynamic disks. If you find a system with a disk group name that doesn't match the computer name, this is the most likely cause.

    Restrictions on Dynamic Volumes

    A few restrictions apply when creating dynamic volumes:

    • The Disk Management console only offers NTFS as the format option for a dynamic volume. You can, however, create the volume and then use the FORMAT command to format it as FAT or FAT32.

    • Spanned volumes that include multiple disks cannot be mirrored.

    • Striped and RAID 5 volumes cannot be mirrored.

    • Only simple volumes can be spanned.

    • The system and boot volumes can be mirrored but cannot be striped or spanned.

    Barring these few restrictions, you can create as many different volumes on the dynamic disks in a system as you need. Keep in mind that the storage subsystem must respond to file system requests from all those volumes, so don't degrade performance by configuring lots and lots of volumes.

    Also, avoid mixing IDE/ATA drives and SCSI drives in the same volume set. You put additional pressure on the storage subsystem to track data packets from two very different sources. The same is true of mixing radically different SCSI drives on the same bus or different SCSI host interface adapters in the same array.

    For the most part, you can achieve acceptable performance and fault tolerance by mirroring a pair of drives for the operating system and then creating a RAID 5 volume set using at least three drives on another interface. Use SCSI drives to take advantage of the increased thread handling capabilities and more robust bus management subsystem.

    XP and LDM

    LDM in Windows XP does not permit creating fault tolerant volumes such as RAID 5 and mirrored drives. You can create simple volumes or striped volumes and you can span volumes. There is no architectural reason for this limitation; it merely differentiates the desktop product from the server product. This limitation has been present in all versions of classic NT and Windows 2000.

    If you create a fault tolerant volume on a server, you can import the disks onto a desktop running XP. I do not recommend this practice because you never know when Microsoft might do something in the code to preclude this configuration.

    It's worth noting here that the Home Edition of XP does not support dynamic disks of any form. You cannot install the Home Edition onto a dynamic disk and you cannot import a dynamic disk from another system into a system running Home Edition.

    When to Use Dynamic Disks

    Dynamic disks have one benefit: They permit you to smear data across multiple disks. There is no performance advantage to using a simple volume on a dynamic disk compared to a basic partition on the same disk. Performance is determined by the speed of the drives, the I/O path, and the file system. Dynamic disks are just as susceptible to viruses as basic disks because the executable code in the Master Boot Record and partition boot sector is unchanged.

    This means you do not need a dynamic disk on a system with only one drive. Converting disks on laptops is restricted by a Registry setting because the laptop may connect to a docking station with an additional drive. It would cause problems for the LDM if the databases on the two drives were to get out of sync.

    Many servers use hardware RAID. A RAID controller presents a virtual disk to the operating system. There is no benefit to converting this virtual disk from basic to dynamic. This was occasionally necessary under Windows 2000 to permit expanding volumes by adding new storage to the array and then spanning to the unallocated space. Windows Server 2003 permits expanding basic disk partitions, so there is no need to convert.

    If you decide to use the software-based RAID in Windows Server 2003, you'll like these features:

    • Disk reconfigurations (other than the initial conversion of the system/boot disk) do not require rebooting.

    • Dynamic volumes can be remotely managed, both from the Disk Management console and the command line using Diskpart.

    • The LDM database is replicated to each dynamic disk, improving reliability.

    • The database is on the drives themselves so you can move a drive assembly into another machine and quickly access the data.

    • You can boot from a fault tolerant boot floppy to the secondary drive of a mirrored volume without breaking the mirror. This was not possible in classic NT using Ftdisk because the Registry on the mirrored disk was locked.

    • You can move drives around within a server and retain their logical disk location within their volume sets. This added bit of flexibility is a significant improvement over classic NT Ftdisk sets.

    Dynamic Disks and Laptops

    You may notice that some laptops permit converting to dynamic disks. This is due to a mistake in the interpretation of the machine's BIOS.

    There was an unpublished Registry hack in Windows 2000 that permitted running dynamic disks on a laptop. This hack does not work on Windows Server 2003 or XP (it causes the LDM service to fail) but here it is in case you want to know it:

    Key:    HKLM | System | CurrentControlSet | Services | dmload
    Value:  Start
    Data:   0 (REG_DWORD)

    Dynamic Disks and Hardware RAID

    The chief advantage to software RAID is its price. You can't get better than free. But even free has its price. Ask anyone who has attended a timeshare presentation just to get the free trip to Hawaii.

    Dynamic disks do not provide the same kind of comprehensive feature sets found in hardware RAID controllers. This includes the following:

    • No support for hot-swappable disks

    • No hot-standby disks

    • No dynamic growth when adding new disks

    • No automatic partition management

    • Not as capable of protecting data during some types of drive failure

    In addition, hardware RAID is faster than software RAID, all other things being equal. If price is more important than performance, vendors such as Promise, IBM, Adaptec, and others now offer ATA RAID controllers at competitive prices.

    Still, you can't beat free. If your budget is tight and your CIO or business owner or client can't or won't spend the money on hardware RAID, by all means make use of dynamic disks.

    Basic Disk Conversion

    LDM-based dynamic volumes have lots of advantages over classic MBR partitioning, but they have their eccentricities, most of which have to do with booting and backward compatibility. To understand these eccentricities, we need to get familiar with some of the basic structures on an MBR disk.

    Figure 14.2 shows a diagram of a Master Boot Record. The MBR contains a few hundred bytes of executable code designed to scan a data structure called a Partition Table, which is also in the MBR. One of the entries in the Partition Table should be marked as "active," meaning that it can be used to boot the machine.

    Figure 14.2. Diagram of key Master Boot Record elements and partitioning information.


    The MBR code then goes out to the location specified by the partition table entry and loads the sector at that location into memory. This sector, the partition boot sector, contains bootstrap code that is capable of finding and loading either an operating system or a secondary bootstrap loader. In the case of Windows Server 2003, the secondary bootstrap loader is Ntldr.

    This configuration changes somewhat when you convert the disk from a basic disk to a dynamic disk. Figure 14.3 shows the MBR and partition information following conversion to a dynamic disk. Several things happen during the conversion:

    • An LDM partition is added to the end of the drive.

    • The partition table entries from the MBR are added to the LDM database as simple volumes. Volumes are created for every primary partition and every logical drive within an extended partition.

    • If the system already has dynamic disks, the new disk and its volumes are merged into the existing LDM database on those disks and the result is copied to the LDM database on the new disk.

    • The logical drive letters assigned to the basic partitions by the Registry are retained for the newly created dynamic volumes.

    • The partition table is modified to retain only the partitions required for INT13 access to the disk.

    Figure 14.3. Diagram of Master Boot Record following conversion to a dynamic disk.


    The user interface shows only a few changes. Explorer remains the same. The Disk Management console shows the disk status as Dynamic and On-Line. It identifies the volumes with a different color scheme. Unallocated space on the disk can now participate in dynamic volume structures such as spanning, mirroring, striping, and RAID 5.

    Conversions Requiring Restart

    If you convert a basic disk that holds any of the following components, you must restart the system to complete the conversion:

    • System files

    • Boot files

    • Paging files

    • Crash dump files

    The restart permits the system to create the necessary LDM database entries and Registry entries prior to mounting the file systems. Here is a quick rundown of the operations:

    • An Encapsulation Info key is placed in the Registry under HKLM | System | CurrentControlSet | Services | DMIO with a binary value of FDISK Data showing the structure of the partitions on the disk.

    • An EncapsulationPending key is put under HKLM | System | CurrentControlSet | Services | DMLOAD with no values. It acts as a flag to notify the system to make the change to the Master Boot Record and construct the LDM database using the information in the Encapsulation Info key.

    • The system restarts, makes the change, and then you are prompted to restart again with the message that the system has found new hardware. This is because the dynamic disk participates differently in the Windows Server 2003 object namespace than a basic disk.

    Prerequisites and Restrictions for Converting Basic Disks

    Here is a quick checklist to use when planning a disk conversion. If one of the restrictions prevents you from doing the conversion, your only alternative is to remove the existing partitions (after backing up the data, of course), convert the disk, configure the dynamic volumes, and then restore the data to the new volumes:

    • You cannot convert a disk with existing partitions if there is no room for the LDM database at the end of the disk. The database takes a minimum of 1MB but it must align to a cylinder boundary, so the actual size depends on the geometry of the disk.

      The system will resize any existing partitions to make room for the LDM partition. If this resizing fails, the disk cannot be converted. About the only time this would happen is if you have a foreign partition (third-party partition manager, Linux, and so on) as the final partition on the drive.

      If you have SCSI drives, the system will put the LDM database in the area of the disk set aside for sector sparing. This can dramatically reduce the cushion you have for handling sector failures. NTFS supports sector sparing in software, so this should not present a problem.

    • You cannot convert detachable or removable disks to dynamic disks. Each dynamic disk has a copy of the LDM database. It would cause the LDM subsystem to become unstable if a copy of the database were on a removable platter.

    • You can only read dynamic disks using Windows Server 2003, Windows XP, or Windows 2000.

    • You cannot convert a dynamic disk back to a basic disk without first removing all volumes. This involves erasing all data, so it's vital to have a recoverable backup.

    • The system files Ntldr, Ntdetect.com, Boot.ini, Bootsect.dos, and Ntbootdd.sys can reside on a dynamic disk but they must be on a simple volume or a mirrored volume.
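
    If the checklist items are satisfied, the conversion can be scripted with the Diskpart console rather than performed in Disk Management. The disk and volume numbers below are examples only, and remember that reverting requires deleting every volume, which destroys the data on the disk:

      DISKPART>  Select Disk 1
      DISKPART>  Convert Dynamic

    To revert to a basic disk after backing up and deleting the volumes:

      DISKPART>  Select Volume 2
      DISKPART>  Delete Volume
      DISKPART>  Select Disk 1
      DISKPART>  Convert Basic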

    Booting and Dynamic Disks

    When you boot an Intel computer, the system BIOS performs a series of INT13 calls to locate the bootstrap code for the operating system. (PCs built to PC97 specifications or later use extended INT13 calls that understand modern drive geometries.)

    The INT13 service routines cannot read the LDM database. Without a standard partition table entry, the BIOS cannot locate an operating system partition. For this reason, when you convert a basic disk to a dynamic disk, the system retains the partition table entry for the system partition, that is, the partition that contains the bootstrap code. There are a couple of subtleties to this operation:

    • It's possible that the system partition is not the first partition on the disk. If that is the case, sufficient partition table entries will be retained to permit INT13 to locate the system partition.

    • It's possible to boot the operating system from a logical drive in an extended partition. For this reason, the partition table entries for all extended partitions are retained.

    Keep in mind that you cannot boot to a drive that contains only dynamic volumes. If you mirror your system drive, be sure to partition the second drive first, then delete the partition before creating the mirror. This leaves a partition table entry in the MBR so the drive remains bootable.

    An INT13 call does not see the data boundaries created by the LDM database. For this reason, be very careful when using any disk utilities that run outside of the Windows operating system. This includes partition managers such as BootIt and Partition Magic, as well as boot-time defragmentation features in Diskeeper and PerfectDisk. It also includes the Setup program and the Recovery Console, which relies on the drivers in Setup. See the next section for more details.

    Setup and Dynamic Disks

    You can install Windows Server 2003 onto a disk that has been previously configured as a dynamic disk. Before going over the inevitable restrictions and prerequisites, let's review what happens when you run Setup on a basic disk. (If you are new to Windows system administration, take a quick look at the sidebar "Boot and System Partitions." The nomenclature can be a little confusing.) Here's the review:

    • If you elect to install both the system files and boot files into a single partition, Setup creates a single primary partition and gives it drive letter C. This is usually the first partition on the drive, although it does not have to be. Setup also marks this partition as "active," meaning that the BIOS boot routine will load the boot sector from that partition.

      Creating a Bootable Drive

      If you forget to pre-partition a mirrored drive and you lose the primary drive, you'll discover that the second drive in the mirrored set is not bootable because it has no Master Boot Record and no partition table. You can put a bootable partition in the MBR of the drive using the Diskpart utility, but first you need to boot the operating system. You can do this with a fault tolerant boot floppy. See section "Building a Fault Tolerant Boot Floppy" for instructions on creating the floppy.

      When you've booted the operating system, load the Diskpart console and issue the following commands (the example assumes the disk is the first disk in the array and the volume is the first on the disk):

      DISKPART>  Select Disk 0
      DISKPART>  Select Volume 1
      DISKPART>  Retain

    • If you elect to install the boot files and system files in separate partitions, Setup marks the partition containing the system files as "active" and gives it drive letter C. This is typically the first partition on the drive. The remaining partitions you specify in Setup become logical drives in a single extended partition. Setup will not create more than one primary partition.

    The purpose of these rules is to ensure that the INT13 BIOS service routines can find the bootstrap code in the system partition and the operating system files in the boot partition.
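
    You can verify the "active" flag that Setup sets by examining the partition in the Diskpart console after installation. The disk and partition numbers below are examples; use Detail Partition to inspect the flag and Active to set it:

      DISKPART>  Select Disk 0
      DISKPART>  Select Partition 1
      DISKPART>  Detail Partition
      DISKPART>  Active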

    If you install Windows Server 2003 onto a dynamic disk, the BIOS must still be able to find the bootstrap code and the operating system files. With this in mind, here are the prerequisites and restrictions:

    • If the system or boot volume is mirrored, you must break the mirror before installing or upgrading to Windows Server 2003. You can remake the mirror later after installation.

    • You must select a partition displayed by Setup. These represent volumes that are anchored to classic MBR partition table entries. Other dynamic volumes within these volumes are not displayed, so don't delete them unless you're sure they do not contain data.

    • Setup will not show any unallocated space beyond the listed partitions. When you select a partition, Setup will probably find an existing Windows operating system. You can elect to overwrite that operating system.

    To avoid data loss, you should avoid installing Windows Server 2003 onto a dynamic disk unless you are diagnosing a problem by installing a parallel version of the operating system.

    Boot and System Partitions

    In case you are not familiar with Microsoft's awkward and unintuitive definition of boot partitions and system partitions, here it is:

    • The boot partition contains the Windows Server 2003 system files. By default, these files are located in the \Windows directory.

    • The system partition contains the files that Windows Server 2003 uses to load the operating system: NTDLR, Ntdetect.com, Boot.ini, Bootsect.dos, and Ntbootdd.sys. These files reside at the root of the drive that is used to boot the system.
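
    For example, a typical Boot.ini at the root of the system partition looks something like this. The ARC path values here are illustrative; they depend on your controller, disk, and partition layout:

      [boot loader]
      timeout=30
      default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS
      [operating systems]
      multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003" /fastdetect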

    Confusing? You bet. Will it change? Not likely. If data storage and superstring theory ever merge so that a computer can store data in stasis between two infinitesimal instants of time, Microsoft would insist on calling the instant with the bootstrap data the system instant.

    The Recovery Console uses the underlying drivers in Setup, so don't make the mistake of running the MAP command in the Recovery Console and assuming that the drive letters you see are the only logical drives on the disks.

    Partition and Volume Extensions

    Growth is everywhere. 50 million years ago, horses were the size of a small dog. In 1930, the astronomer Edwin Hubble proved beyond a shadow of a doubt that the universe is expanding. In 1965, Gordon Moore postulated that the rate of on-chip transistor density would double at a more or less steady rate. In 1999, Michael Dell made more money in one month than any person in history.

    Still, nobody understands growth like a system administrator. And I'm not talking about the kind of growth that comes from eating too many chocolate donuts while troubleshooting a service outage. I'm talking about data growth. No matter how generously you size your storage systems, in a blink of an eye you're at a 90 percent loading factor. Let's see how we can use the LDM to respond to that kind of growth.

    Basic Disk Partition Extensions

    When you add more storage, you generally stuff more drives into an array of some sort, either a locally attached RAID controller or a Storage Area Network (SAN) box. The additional storage appears in the operating system as unallocated space.

    Under Windows 2000, if you wanted to expand an existing volume to encompass that new space without creating a new logical drive, you were forced to convert the virtual disk presented by the RAID controller to a dynamic disk so you could span volumes. Under Windows Server 2003, you can extend a basic partition without going through the hassle of a disk conversion. If you dual-boot between Windows Server 2003 and Windows 2000 or NT, the extended partition is accessible by the earlier operating systems.

    Here are the restrictions for extending basic disk partitions:

    • The partition must be formatted as NTFS. If it is currently formatted as FAT or FAT32, you can convert it using the CONVERT utility. The new CONVERT utility in Windows Server 2003 is fast and permits you to control the cluster size.

    • The unallocated space must be on the same drive as the existing partition. You cannot span basic partitions across disks.

    • The unallocated space must be contiguous to the basic partition you want to expand.

    After you extend the partition, the space appears in Explorer as free space in the existing logical drive. See the section, "Performing Disk Operations on IA32 Systems," for the procedure to extend a basic partition.
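
    The same extension can be scripted from the Diskpart console. The transcript below is a sketch: the drive letter and volume number are examples, and the contiguous unallocated space must already exist on the same disk. First convert a FAT or FAT32 partition if necessary, then extend:

      C:\> convert D: /fs:ntfs

      DISKPART>  Select Volume 3
      DISKPART>  Extend

    With no arguments, Extend claims all of the contiguous unallocated space following the selected volume.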

    Dynamic Disk Volume Extensions (Spanning)

    You can add space to a dynamic volume by spanning to unallocated space elsewhere on the drive. You can also span across drives. This configuration has several restrictions:

    • You can span a simple volume or an existing spanned volume. You cannot span striped, RAID 5, or mirrored volumes.

    • You cannot span a system volume (the volume used to boot the operating system). This is because the BIOS locates the volume using INT13 calls, which rely on standard partition information in the MBR.

    • You cannot span a boot volume (the volume that contains the operating system files) for the same reason as the system volume.

    • You cannot span a volume that is anchored to a classic partition table entry. This was an issue under Windows 2000 because it retained the old partition table. Windows Server 2003 changes the structure of the partition table to eliminate most of the primary partitions.

    The chief advantage of volume spanning using dynamic disks over partition extension using basic disks is the capability to span volumes across multiple disks. Ordinarily, you would avoid this configuration because it lacks fault tolerance. However, if you use a SAN for storage or some other fault-tolerance subsystem capable of presenting multiple virtual drives, you can span volumes across those virtual drives while retaining fault tolerance.

    See the following section for the procedure to span a dynamic volume.
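
    As with basic partitions, spanning can be scripted in the Diskpart console. On a dynamic disk, you can direct the extension onto a second disk to create or grow a spanned volume. The disk number and size (in MB) below are examples only:

      DISKPART>  Select Volume 4
      DISKPART>  Extend Disk=2 Size=4096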
