Ask the Directory Services Team

The Mouse Will Play


Hey all, Ned here. Mike and I start teaching Windows Server 2012 and Windows 8 DS internals this month in the US and UK and won’t be back until July. Until then, Jonathan is – I can’t believe I’m saying this – in charge of AskDS. He’ll field your questions and publish… stuff. We’ll make sure he takes his medication before replying.

If you’re in Reading, England June 10-22, first round is on me.

I didn’t say what the first round was though.

Ned “crikey” Pyle


Important Information about Remote Desktop Licensing and Security Advisory 2718704


Hi folks, Jonathan here. Dave and I wanted to share some important information with you.

By now you’ve all been made aware of the Microsoft Security Advisory that was published this past Sunday. If you are a Terminal Services or Remote Desktop Services administrator, then we have some information of which you should be aware. There are just some extra administrative steps you’ll need to follow the next time you have to obtain license key packs, transfer license key packs, or perform any other task that requires your Windows Server license information to be processed by the Microsoft Product Activation Clearinghouse. Since there’s a high probability that you’ll have to do that at some point in the future, we’re doing our part to help spread the word. Our colleagues over at the Remote Desktop Services (Terminal Services) Team blog have posted all the pertinent information. Take a look.

Follow-up to Microsoft Security Advisory 2718704: Why and How to Reactivate License Servers in Terminal Services and Remote Desktop Services

If you have any questions, feel free to post them over in the Remote Desktop Services forum.

Jonathan Stephens

RSA Key Blocking is Here!


Hello everyone. Jonathan here again with another Public Service Announcement post.

Today, Microsoft has published a new Security Advisory:

Microsoft Security Advisory (2661254): Update For Minimum Certificate Key Length

The Security Advisory and the accompanying KB article have complete information about the software update, but the key takeaway is that this update is now available on the Download Center and the Microsoft Update Catalog. In addition, Microsoft will release this software update through Microsoft Update (aka Windows Update) in October 2012. So all of you enterprise customers have two months to start testing this update to see what impact it has in your environments.

If you want information on finding weak keys in your environment, then review the KB article. It describes several methods you can use. Microsoft Support has also created a PowerShell script that has been posted to the TechNet Script Center.
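If you just want a quick first look before running the official script, here is a minimal PowerShell sketch (an illustration only, not the Microsoft Support script mentioned above) that flags certificates in the local machine Personal store whose public keys are shorter than 1024 bits:

Get-ChildItem Cert:\LocalMachine\My | ForEach-Object {
    # PublicKey.Key.KeySize reports the public key length in bits
    if ($_.PublicKey.Key.KeySize -lt 1024) {
        '{0} ({1} bits)' -f $_.Subject, $_.PublicKey.Key.KeySize
    }
}

Run it on a representative set of machines; any certificates it lists are candidates for reissuing with longer keys before you deploy the update.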

Finally, a warning for those of you who use makecert.exe to create test certificates. By default, makecert.exe creates certificates that chain up to the Root Agency root CA certificate located in the Intermediate Certification Authorities store. The Root Agency CA certificate has a 512-bit public key, so once you deploy this update no certificate created with makecert.exe will be considered valid.

You should now consider makecert.exe deprecated. As a replacement, starting with Windows 7 / Windows Server 2008 R2, you can use certreq.exe to create a self-signed certificate. For example, to create a self-signed code signing certificate you can create the following .INF file:

[NewRequest]
Subject = "CN=Self Signed Cert"
KeyLength = 2048
ProviderName = "Microsoft Enhanced Cryptographic Provider v1.0"
KeySpec = "AT_SIGNATURE"
KeyUsage = "CERT_DIGITAL_SIGNATURE_KEY_USAGE"
RequestType = Cert
SMIME = False
ValidityPeriod = Years
ValidityPeriodUnits = 2

[EnhancedKeyUsageExtension]
OID = 1.3.6.1.5.5.7.3.3

The important line above is the RequestType value. That tells certreq.exe to create a self-signed certificate. Along with that value, the ValidityPeriod and ValidityPeriodUnits values allow you to specify the lifetime of the self-signed certificate.

Once you create the .INF file, run the following command:

Certreq -new selfsigned.inf selfsigned.crt

This will take your .INF file and generate a new self-signed certificate that you can use for testing.
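As a quick sanity check (entirely optional), you can confirm from PowerShell that the certificate you just generated really does carry a 2048-bit key:

# Load the .crt file produced by certreq and inspect its public key length
$cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2 '.\selfsigned.crt'
$cert.PublicKey.Key.KeySize   # should output 2048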

Ok, so this was supposed to be a short post pointing to where you need to go, but it turns out that I had some other related stuff. The important message here is go read the Security Advisory and the KB article.

Go read the Security Advisory and the KB article.

Ex pace.

Jonathan “I am the Key Master” Stephens

....And knowing is half the battle!

Revenge of Y2K and Other News


Hello sports fans!

So this has been a bit of a hectic time for us, as I'm sure you can imagine. Here's just some of the things that have been going on around here.

Last week, thanks to a failure on the time servers at USNO.NAVY.MIL, many customers experienced a time rollback to CY 2000 on their Active Directory domain controllers. Our team worked closely with the folks over at Premier Field Engineering to explain the problem, document resolutions for the various issues that might arise, and describe how to inoculate your DCs against a similar problem in the future. If you were affected by this problem then you need to read this post. If you weren't affected, and want to know why, then you need to read this post. Basically, we think you need to read this post. So...here's the link to the AskPFEPlat blog.

In other news, Ned Pyle has successfully infiltrated the Product Group and has started blogging on The Storage Team blog. His first post is up, and I'm sure there will be many more to follow. If you've missed Ned's rare blend of technical savvy and sausage-like prose, and you have an interest in Microsoft's DFSR and other storage technologies, then go check him out.

Finally...you've probably noticed the lack of activity here on the AskDS blog. Truthfully, that's been the result of a confluence of events -- Ned's departure, the Holiday season here in the US, and the intense interest in Windows 8 and Windows Server 2012 (and subsequent support calls). Never fear, however! I'm pleased to say that your questions to the blog have been coming in quite steadily, so this week I'll be posting an omnibus edition of the Mail Sack. We also have one or two more posts that will go up between now and the end of the year, so there's that to look forward to. Starting with the new calendar year, we'll get back to a semi-regular posting schedule as we get settled and build our queue of posts back up.

In the meantime, if you have questions about anything you see on the blog, don't hesitate to contact us.

Jonathan "time to make the donuts" Stephens

Intermittent Mail Sack: Must Remember to Write 2013 Edition


Hi all, Jonathan here again with the latest edition of the Intermittent Mail Sack. We've had some great questions over the last few weeks so I've got a lot of material to cover. This sack, we answer questions on:

  • Upgrading DFSR hub servers to Windows Server 2012
  • SharePoint 2010, AD FS, and sign-out behavior
  • Dynamic Access Control attributes and DFSR replication
  • Machine account password changes (including Macs)
  • LsaSrv event 45058 and cached domain credentials
  • The future of DES encryption in Kerberos
  • Certificate renewal with CEP and CES

Before we get started, however, I wanted to share information about a new service available to Premier customers through Microsoft Services Premier Support. Many Premier customers will be familiar with the Risk Assessment Program (RAP). Premier Support is now rolling out an online offering called the RAP as a Service (or RaaS for short). Our colleagues over on the Premier Field Engineering (PFE) blog have just posted a description of the new offering, and I encourage you to check it out. I've been working on the Active Directory RaaS offering since the early beta, and we've gotten really good feedback. Unfortunately, the offering is not yet available to non-Premier customers; look at RaaS as yet one more benefit to a Premier Support contract.

 

Now on to the Mail Sack!

Question

I'm considering upgrading my DFSR hub servers to Server 2012. Is there anything I should know before I hit the easy button and do an upgrade?

Answer

The most important thing to note is that Microsoft strongly discourages mixing Windows Server 2012 DFSR with DFSR on legacy operating systems. You mentioned upgrading only your hub servers, and make no mention of any branch servers. If you're going to upgrade your DFSR servers then you should upgrade all of them.

Check out Ned's post over on the FileCab blog: DFS Replication Improvements in Windows Server. Specifically, review the section that discusses Dynamic Access Control Support.

Also, there is a minor known issue that we are still tracking: when you upgrade from Windows Server 2008 R2 to Windows Server 2012, the DFS Management snap-in stops working. The workaround is to just uninstall and then reinstall the DFS Management tools, which you can do with PowerShell:

Uninstall-WindowsFeature -name RSAT-DFS-Mgmt-Con
Install-WindowsFeature -name RSAT-DFS-Mgmt-Con

 

Question

From our SharePoint site, when users click on log-off then they get sent to this page: https://your_sts_server/adfs/ls/?wa=wsignout1.0.

We configured the FedAuth cookie to be session-based by doing this:

$sts = Get-SPSecurityTokenServiceConfig 
$sts.UseSessionCookies = $true 
$sts.Update() 

 

The problem is that, unless the user closes all their browsers, the browser remembers their credentials when they go back to the log-in page. This is not acceptable because some PCs are shared by several people. Closing all browsers is also not acceptable, as users run multiple web applications.

Answer

(Courtesy of Adam Conkle)

Great question! I hope the following details help you in your deployment:

Moving from a persistent cookie to a session cookie with SharePoint 2010 was the right move in this scenario in order to guarantee that closing the browser window would terminate the session with SharePoint 2010.

When you sign out via SharePoint 2010 and are redirected to the STS URL containing the query string: wa=wsignout1.0, this is what we call a WS-Federation sign-out request. This call is sufficient for signing out of the STS as well as all relying parties signed into during the session.

However, what you are experiencing is expected behavior for how Integrated Windows Authentication (IWA) works with web browsers. If your web browser client experienced either a silent sign-in (Kerberos authentication for the currently signed-in user) or a prompted NTLM sign-in (credentials entered at a Windows Authentication "401" prompt), then the browser will remember the Windows credentials for that host for the duration of the browser session.

If you were to collect an HTTP headers trace (Fiddler, HTTPWatch, etc.) of the current scenario, you would see that the wa=wsignout1.0 request is actually causing AD FS and SharePoint 2010 (and any other RPs involved) to clean up their session cookies (MSISAuth and FedAuth) as expected. The session is technically ending the way it should during sign-out. However, if the client keeps the current browser session open, browsing back to the SharePoint site will cause a new WS-Federation sign-in request to be sent to AD FS (wa=wsignin1.0). When the sign-in request is sent to AD FS, AD FS will attempt to collect credentials with an HTTP 401, but, this time, the browser has a set of Windows credentials ready to provide to that host.

The browser provides those Windows credentials without a prompt shown to the user, and the user is signed back into AD FS, and, thus, is signed back into SharePoint 2010. To the naked eye, it appears that sign-out is not working properly, while, in reality, the user is signing out and then signing back in again.

To conclude, this is by-design behavior for web browser clients. There are two workarounds available:

Workaround 1

Switch to forms-based authentication (FBA) for the AD FS Federation Service. The following article details this quick and easy process: AD FS 2.0: How to Change the Local Authentication Type

Workaround 2

Instruct your user base to always close their web browser when they have finished their session.

Question

Are the attributes for files and folders used by Dynamic Access Control replicated with the object? That is, using DFSR, if I replicate a file to another server that uses the same policy, will the file have the same effective permissions on it?

Answer

(Courtesy of Mike Stephens)

Let me clarify some aspects of your question as I answer each part.

When enabling Dynamic Access Control on files and folders there are multiple aspects to consider that are stored on the files and folders.

Resource Properties

Resource Properties are defined in AD and used as a template to stamp additional metadata on a file or folder that can be used during an authorization decision. That information is stored in an alternate data stream on the file or folder. This would replicate with the file, the same as the security descriptor.

Security Descriptor

The security descriptor replicates with the file or folder. Therefore, any conditional expression would replicate in the security descriptor.

All of this occurs outside of Dynamic Access Control -- it is a result of replicating the file throughout the topology, for example, if using DFSR. Central Access Policy has nothing to do with these results.

Central Access Policy

Central Access Policy is a way to distribute permissions without writing them directly to the DACL of a security descriptor. So, when a Central Access Policy is deployed to a server, the administrator must then link the policy to a folder on the file system. This linking is accomplished by inserting a special ACE in the auditing portion of the security descriptor that informs Windows that the file/folder is protected by a Central Access Policy. The permissions in the Central Access Policy are then combined with Share and NTFS permissions to create an effective permission.

If a file/folder is replicated to a server that does not have the Central Access Policy deployed to it, then the Central Access Policy is not valid on that server. The permissions would not apply.

Question

I read the post located here regarding the machine account password change in Active Directory.

Based on what I read, if I understand this correctly, the machine password change is generated by the client machine and not AD. I have been told (according to this post, inaccurately) that AD requires this password reset or the machine will be dropped from the domain.

I am a Macintosh systems administrator, and as you probably know, this issue does indeed occur on Mac systems.

I have set the password reset interval to various durations, from the default of fourteen days down to one day.

I have found that if I disjoin and rejoin the machine to the domain, it will generate a new password and work just fine for 30 days. At that point it will be dropped from the domain and have to be rejoined. This doesn't happen 100% of the time, but it happens often enough to be a problem for us, as we are a higher education institution that, in addition to our many PCs, also utilizes a substantial number of Macs. Additionally, we have a script that runs every 60 days to delete machine accounts from AD to keep it clean, so if a machine has been turned off for more than 60 days, its account no longer exists.

I know your forte is AD/Microsoft support; however, I was hoping that you might be able to offer some input as to why this might fail on the Macs, and whether there is any solution we could implement.

Other Mac admins have found workarounds, like eliminating the need for the password reset or exempting the Macs from the script, but our security team does not want to do this.

Answer

(Courtesy of Mike Stephens)

Windows has a security policy setting named Domain member: Disable machine account password changes, which determines whether the domain member periodically changes its computer account password. Typically, a Mac, Linux, or UNIX operating system uses some version of Samba to accomplish domain interoperability. I'm not familiar with how this works on the Mac; however, on Linux you would use the command:

net ads changetrustpw 

 

By default, Windows machines initiate a computer password change every 30 days. You could schedule this command to run every 30 days once it completes successfully. Beyond that, basically we can only tell you how to prevent the domain controllers from accepting computer password changes, which we do not encourage.
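As an aside, on Windows 7 and Windows Server 2008 R2 or later you can trigger the equivalent machine account password change on demand from PowerShell. A minimal sketch (the domain controller name is just a placeholder):

# Reset this computer's machine account password against a specific DC
Reset-ComputerMachinePassword -Server DC01.corp.contoso.com

Scheduling something similar on the Samba side, as described above, keeps the machine's password fresh so the account never appears stale.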

Question

I recently installed a new server running Windows 2008 R2 (as a DC) and a handful of client computers running Windows 7 Pro. On a client that is shared by two users (userA and userB), I see the following event in the Event Viewer after userA logs on.

Event ID: 45058 
Source: LsaSrv 
Level: Information 
Description: 
A logon cache entry for user userB@domain.local was the oldest entry and was removed. The timestamp of this entry was 12/14/2012 08:49:02. 

 

All is working fine. Both userA and userB are able to log on to the domain using this computer. Do you think I have to worry about this message, or can I just safely ignore it?

FYI, our users never work offline, only online.

Answer

By default, a Windows operating system will cache 10 domain user credentials locally. When the maximum number of credentials is cached and a new domain user logs onto the system, the oldest credential is purged from its slot in order to store the newest credential. This LsaSrv informational event simply records when this activity takes place. Once the cached credential is removed, it does not imply the account cannot be authenticated by a domain controller and cached again.

The number of "slots" available to store credentials is controlled by:

Registry path: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon
Setting Name: CachedLogonsCount
Data Type: REG_SZ
Value: Default value = 10 decimal, max value = 50 decimal, minimum value = 1
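To check the value currently in effect on a client, here is a quick PowerShell sketch (note the value is stored as a string, despite being numeric):

Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon' |
    Select-Object CachedLogonsCount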

Cached credentials can also be managed with group policy by configuring:

Group Policy Setting path: Computer Configuration\Policies\Windows Settings\Security Settings\Local Policies\Security Options.
Group Policy Setting: Interactive logon: Number of previous logons to cache (in case domain controller is not available)

The workstation must have connectivity with the domain, and the user must authenticate with a domain controller, for their credentials to be cached again once they have been purged from the system.

I suspect that your CachedLogonsCount value has been set to 1 on these clients, meaning that the workstation can only cache one user credential at a time.

Question

In Windows 7 and Windows Server 2008 R2, Kerberos DES encryption is disabled by default.

At what point will support for DES Kerberos encryption be removed? Does this happen in Windows 8 or Windows Server 2012, or will it happen in a future version of Windows?

Answer

DES is still available as an option in Windows 8 and Windows Server 2012, though it is disabled by default. It is too early to discuss the availability of DES in future versions of Windows.

There was an Advisory Memorandum published in 2005 by the Committee on National Security Systems (CNSS) stating that DES and all DES-based systems (3DES, DES-X) would be retired for all US Government uses by 2015. That memorandum, however, is not necessarily a binding document. It is expected that 3DES/DES-X will continue to be used in the private sector for the foreseeable future.

I'm afraid that we can't completely eliminate DES right now. All we can do is push it to the back burner in favor of newer and better algorithms like AES.

Question

I have two issuing certification authorities in our corporate network. All our approved certificate templates are published on both issuing CAs. We would like to enable certificate renewals from the Internet, with our Internet-facing CEP/CES configured for certificate authentication in Certificate Renewal Mode Only. What we understand from the whitepaper is that this is not going to work if the CA that issued the certificate must be the same CA used for certificate renewal.

Answer

First, I need to correct an assumption made based on your reading of the whitepaper. There is no requirement that, when a certificate is renewed, the renewal request be sent to the same CA that issued the original certificate. This means that your clients can go to either enrollment server to renew the certificate. Here is the process for renewal:

  1. When the user attempts to renew their certificate via the MMC, Windows sends a request to the Certificate Enrollment Policy (CEP) server URL configured on the workstation. This request includes the template name of the certificate to be renewed.
  2. The CEP server queries Active Directory for a list of CAs capable of issuing certificates based on that template. This list will include the Certificate Enrollment Web Service (CES) URL associated with that CA. Each CA in your environment should have one or more instances of CES associated with it.
  3. The list of CES URLs is returned to the client. This list is unordered.
  4. The client randomly selects a URL from the list returned by the CEP server. This random selection ensures that renewal requests are spread across all returned CAs. In your case, if both CAs are configured to support the same template, then renewing a certificate 100 times, either with or without the same key, should result in a nearly 50/50 distribution between the two CAs.

The behavior is slightly different if one of your CAs goes down for some reason. In that case, should clients encounter an error when trying to renew a certificate against one of the CES URIs, the client will fail over and use the next CES URI in the list. By having multiple CAs and CES servers, you gain high availability for certificate renewal.
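Incidentally, you don't have to use the MMC to exercise this process. Renewal can also be driven from the command line, and certreq will walk through the same CEP/CES selection described above. A sketch, where <CertId> is a placeholder for the serial number or hash of the certificate being renewed:

Certreq -enroll -machine -q -cert <CertId> renew reusekeys

Drop reusekeys if you want the renewal to generate a new key pair.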

Other Stuff

I'm very sad that I didn't see this until after the holidays. It definitely would have been on my Christmas list. A little pricey, but totally geek-tastic.

This was also on my list this year. Go Science!

Please do keep those questions coming. We have another post in the hopper going up later in the week, and soon I hope to have some Windows Server 2012 goodness to share with you. From all of us on the Directory Services team, have a happy and prosperous New Year!

Jonathan "13th baktun" Stephens

 

 

SSL/TLS Record Fragmentation Support

This is Jonathan Stephens from the Directory Services team, and I wanted to share with you a recent interoperability issue I encountered. An admin had set up an Apache web server with the OpenSSL mod for SSL/TLS support. Users were able to connect to…

Windows Server 2008 R2 CAPolicy.inf Syntax


Greetings! This is Jonathan again. I was reviewing Chris’ excellent blog post series on designing and implementing a PKI when I realized that it would be helpful to better document the CAPolicy.inf file. The information in this post relies heavily on the information published in the Windows Server 2003 Help File, but this information is updated to include information pertinent to Windows Server 2008 R2.

Another helpful document that discusses many of these settings is available on Technet.

Background

First, what is a CAPolicy.inf file? The CAPolicy.inf file contains various settings that are used when installing Active Directory Certificate Services (ADCS) or when renewing the CA certificate. The CAPolicy.inf file is not required to install ADCS with the default settings, but in many cases the default settings are insufficient. The CAPolicy.inf can be used to configure CAs in these more complicated deployments.

Once you have created your CAPolicy.inf file, you must copy it into the %systemroot% folder (e.g., C:\Windows) of your server before you install ADCS or renew the CA certificate.

I’m not going to discuss what settings you need for your particular configuration, nor will I offer guidance on how you should set up your PKI to meet whatever your needs may be. Please follow Chris’ series for that sort of information. I’m simply going to document the available settings in the CAPolicy.inf which, if you follow Chris’ guidance, you’ll find will come in handy.

Let’s get started, shall we?

As I mentioned earlier, the CAPolicy.inf file uses the .INF file structure to specify sections, settings, and values for those settings. It will be impossible here to define a default template suitable for all purposes, so I’m just going to describe all the options and allow you to decide which settings meet your needs. Not all the settings below are required in the file, but those that are required will be called out.

The following key words are used to describe the .INF file structure.

  • A section is an area in the .INF file that covers a logical group of keys. A section always appears in brackets in the .INF file.
  • A key is the parameter that is to the left of the equal sign.
  • A value is the parameter that is to the right of the equal sign.

Version

The first two lines of the CAPolicy.inf file are:

[Version]
Signature="$Windows NT$"

  • [Version] is the section.
  • Signature is the key.
  • “$Windows NT$” is the value.

Version is the only required section, and must be at the beginning of your CAPolicy.inf file.

PolicyStatementExtension

Next is the PolicyStatementExtension section. This section lists the name of the policies for this CA. Multiple policies are separated by commas. The names LegalPolicy and ManagementPolicy are used here as examples, but the names can be whatever the CA administrator chooses when creating the CAPolicy.inf file.

NOTE: Administrator defined section names must observe the following syntax rules:

  • A section name cannot have leading or trailing spaces, a linefeed character, a return character, or any invisible control character, and it should not contain tabs.
  • A section name cannot contain either of the bracket ([]) characters, a single percent (%) character, a semicolon (;), or any internal double quotation (") characters.
  • A section name cannot have a backslash (\) as its last character.

The names have meaning in the context of a specific deployment, or in relation to custom applications that actually check for the presence of these policies.

[PolicyStatementExtension]
Policies=LegalPolicy,ManagementPolicy

For each policy defined in the PolicyStatementExtension section there must be a section that defines the settings for that particular policy. For the example above, the CAPolicy.inf must contain a [LegalPolicy] section and a [ManagementPolicy] section.

For each policy, you need to provide a user-defined object identifier (OID) and either the text you want displayed as the policy statement or a URL pointer to the policy statement. The URL can be in the form of an HTTP, FTP, or LDAP URL. Continuing on with the example started above, if you are going to have text in the policy statement, then the next three lines of the CAPolicy.inf will be:

[LegalPolicy]
OID=1.1.1.1.1.1.1
Notice="Legal policy statement text"

If you are going to use a URL to host the CA policy statement, then the next three lines would instead be:

[ManagementPolicy]
OID=1.1.1.1.1.1.2
URL=http://pki.wingtiptoys.com/policies/managementpolicy.asp

Please note that the OID above is arbitrary and is used as an example. In a true deployment, you would obtain an OID from your own OID gatekeeper.

In addition:

  • Multiple URL keys are supported
  • Multiple Notice keys are supported
  • Notice and URL keys in the same policy section are supported.
  • URLs with spaces or text with spaces must be surrounded by quotes. This is true for the URL key, regardless of the section in which it appears.
  • The Notice text has a maximum length of 511 characters on Windows Server 2003 [R2], and a maximum length of 4095 characters on Windows Server 2008 [R2].

An example of multiple notices and URLs in a policy section would be:

[LegalPolicy]
OID=1.1.1.1.1.1.1
URL=http://pki.wingtiptoys.com/policies/legalpolicy.asp
URL=ftp://ftp.wingtiptoys.com/pki/policies/legalpolicy.asp
Notice="Legal policy statement text"

CRLDistributionPoint

You can specify CRL Distribution Points (CDPs) for a root CA certificate in the CAPolicy.inf. This section does not configure the CDP for the CA itself. After the CA has been installed you can configure the CDP URLs that the CA will include in each certificate that it issues. The URLs specified in this section of the CAPolicy.inf file are included in the root CA certificate itself.

[CRLDistributionPoint]
URL=http://pki.wingtiptoys.com/cdp/WingtipToysRootCA.crl

Some additional information about this section:

  • Multiple URLs are supported.
  • HTTP, FTP, and LDAP URLs are supported. HTTPS URLs are not supported.
  • This section is only used if you are setting up a root CA or renewing the root CA certificate. Subordinate CA CDP extensions are determined by the CA which issues the subordinate CA’s certificate.
  • URLs with spaces must be surrounded by quotes.
  • If no URLs are specified – that is, if the [CRLDistributionPoint] section exists in the file but is empty – the CRL Distribution Point extension will be omitted from the root CA certificate. This is usually preferable when setting up a root CA. Windows does not perform revocation checking on a root CA certificate so the CDP extension is superfluous in a root CA certificate.

Authority Information Access

You can specify the authority information access points in the CAPolicy.inf for the root CA certificate.

[AuthorityInformationAccess]
URL=http://pki.wingtiptoys.com/Public/myCA.crt

Some additional notes on the authority information access section:

  • Multiple URLs are supported.
  • HTTP, FTP, LDAP and FILE URLs are supported. HTTPS URLs are not supported.
  • This section is only used if you are setting up a root CA, or renewing the root CA certificate. Subordinate CA AIA extensions are determined by the CA which issued the subordinate CA’s certificate.
  • URLs with spaces must be surrounded by quotes.
  • If no URLs are specified – that is, if the [AuthorityInformationAccess] section exists in the file but is empty – the Authority Information Access extension will be omitted from the root CA certificate. Again, this would be the preferred setting in the case of a root CA certificate, as there is no authority higher than a root CA that would need to be referenced by a link to its certificate.

Enhanced Key Usage

Another section of the CAPolicy.inf file is [EnhancedKeyUsageExtension], which is used to specify the Enhanced Key Usage extension OIDs placed in the CA certificate.

  • Multiple OIDs are supported.
  • This section can be used during CA setup or CA certificate renewal.
  • This section can be used for both the root CA and for subordinate CAs.
  • This extension can be marked as Critical.

An example of this section is:

[EnhancedKeyUsageExtension]
OID=1.2.3.4.5
OID=1.2.3.4.6
Critical=No

If this section is omitted from the CAPolicy.inf file, the Enhanced Key Usage extension will be omitted from the root CA certificate. If this extension does not exist in a root CA certificate then that root CA certificate can be trusted for all purposes.

By populating this section with specific OIDs, you are limiting the purposes for which the root CA certificate can be trusted. For example, consider the following section:

[EnhancedKeyUsageExtension]
OID=1.3.6.1.5.5.7.3.4        ; Secure Email
OID=1.3.6.1.4.1.311.20.2.2    ; Smart Card Logon
Critical=No

During the setup of the CA, the root CA certificate will be created with the two OIDs above in the Enhanced Key Usage extension. This root certificate, because of the OIDs specified, can only be trusted for Secure Email (signing and encrypting) and Smart Card Logon. Any certificate issued for some other purpose, such as Client or Server Authentication, would be considered invalid. This restriction would apply not only to this root CA, but also to any CA subordinate to this root.

Basic Constraints

You can use the CAPolicy.inf file to define the PathLength constraint in the Basic Constraints extension of the root CA certificate. Setting the PathLength basic constraint allows you to limit the path length of the CA hierarchy by specifying how many tiers of subordinate CAs can exist beneath the root. A PathLength of 1 means there can be at most one tier of CAs beneath the root. These subordinate CAs will have a PathLength basic constraint of 0, which means that they cannot issue any subordinate CA certificates.

This extension can be marked as Critical.

[BasicConstraintsExtension]
PathLength=1
Critical=Yes

It is not recommended to use this section in the CAPolicy.inf file for a subordinate CA. To add a PathLength constraint to a subordinate CA certificate if the parent CA has no PathLength constraint in its own certificate, you can set the CAPathLength registry value on the parent CA. For example, to issue a subordinate CA certificate with a PathLength constraint of 1, use the following command to configure the parent CA.

Certutil -setreg Policy\CAPathLength 2

Setting this value causes the CA to behave as though its own certificate had a PathLength constraint of whatever number you specify. Any subordinate CA certificate issued by the parent CA will have a PathLength constraint set appropriately in its Basic Constraints extension.

You must restart Active Directory Certificate Services for this change to take effect.

Cross Certificate Distribution Points

The cross certificate distribution points (CCDP) extension identifies where cross-certificates related to the CA certificate can be obtained and how often that location is updated. The CCDP extension is useful if the CA has been cross-certified with another PKI hierarchy. Windows XP and later operating systems use this extension to discover cross-certificates that might be used during the path discovery and chain building process.

The SyncDeltaTime key indicates how often, in seconds, the locations referred to by the URL key(s) are updated. While this entire section is optional, if it exists, and if the SyncDeltaTime key is present, then at least one URL key must also be present.

This extension can be marked as Critical.

[CrossCertificateDistributionPointsExtension]
SyncDeltaTime=600    ; in seconds
URL=http://pki.wingtiptoys.com/ccdp/PartnersCA.crt
Critical=No

Request Attributes

The [RequestAttributes] section, when implemented on a subordinate CA, allows you to specify a custom subordinate certification authority template. There is already a default Subordinate Certification Authority template that is published in Active Directory the first time an Enterprise CA is installed in the forest. This default template, however, is a v1 template (Windows 2000-style) and cannot be edited. The CertificateTemplate key allows you to specify a different template for your subordinate CA certificate request, one that you created by duplicating the default template.

[RequestAttributes]
CertificateTemplate=WingtipToysSubCA

Server Settings

Another optional section of the CAPolicy.inf is [certsrv_server], which is used to specify renewal key length, the renewal validity period, and the certificate revocation list (CRL) validity period for a CA that is being renewed or installed. None of the keys in this section are required. Many of these settings have default values that are sufficient for most needs and can simply be omitted from the CAPolicy.inf file. Alternatively, many of these settings can be changed after the CA has been installed.

An example would be:

[certsrv_server]
RenewalKeyLength=2048
RenewalValidityPeriod=Years
RenewalValidityPeriodUnits=5
CRLPeriod=Days
CRLPeriodUnits=2
CRLDeltaPeriod=Hours
CRLDeltaPeriodUnits=4
ClockSkewMinutes=20
LoadDefaultTemplates=True
AlternateSignatureAlgorithm=0
ForceUTF8=0
EnableKeyCounting=0

RenewalKeyLength sets the key size for renewal only. This is only used when a new key pair is generated during CA certificate renewal. The key size for the initial CA certificate is set when the CA is installed.

When renewing a CA certificate with a new key pair, the key length can be either increased or decreased. We in Support see this most often when a customer has set a root CA key size of 4096 bits or larger, and then discovers that they have Java apps or network devices that can only support key sizes up to 2048 bits. In that situation, we can use this setting in the CAPolicy.inf file to reduce the key size of the CA. Of course, that means that we have to reissue all the certificates issued by that CA. The higher up in the hierarchy the CA resides, the more inconvenient this procedure is.

RenewalValidityPeriod and RenewalValidityPeriodUnits establish the lifetime of the new root CA certificate when renewing the old root CA certificate. It only applies to a root CA. The certificate lifetime of a subordinate CA is determined by its superior. RenewalValidityPeriod can have the following values: Hours, Days, Weeks, Months, and Years.

CRLPeriod and CRLPeriodUnits establish the validity period for the base CRL, while CRLDeltaPeriod and CRLDeltaPeriodUnits establish the validity period of the delta CRL. CRLPeriod and CRLDeltaPeriod can have the following values: Hours, Days, Weeks, Months, and Years. Each of these settings can be configured after the CA has been installed:

Certutil -setreg CA\CRLPeriod Weeks
Certutil -setreg CA\CRLPeriodUnits 1
Certutil -setreg CA\CRLDeltaPeriod Days
Certutil -setreg CA\CRLDeltaPeriodUnits 1

Restart Active Directory Certificate Services for any changes to take effect.

ClockSkewMinutes allows you to accommodate possible clock synchronization issues. The CA will set the effective time of the published base CRL and delta CRL to the current time less the ClockSkewMinutes. For example, if the clock skew is set to 5 minutes, and the current time is 4:00pm, then the effective time of a newly published CRL would be 3:55pm.

This value can also be set after the CA has been installed.

Certutil -setreg CA\ClockSkewMinutes 10

Restart Active Directory Certificate Services for any changes to take effect.

The default value for ClockSkewMinutes is 10 minutes; if this interval is sufficient then this key can be omitted from the CAPolicy.inf file.

LoadDefaultTemplates only applies during the install of an Enterprise CA. This setting, either True or False (or 1 or 0), dictates whether or not the CA is configured with any of the default templates.

In a default installation of the CA, a subset of the default certificate templates is added to the Certificate Templates folder in the Certification Authority snap-in. This means that as soon as the ADCS service starts after the role has been installed a user or computer with sufficient permissions can immediately enroll for a certificate. This behavior is not always desirable.

To illustrate the point, the Domain Controller and Domain Controller Authentication templates are among the default templates added to the CA as it is installed. The default permissions on these two templates allow all domain controllers in the forest to enroll for certificates based on those two templates. Finally, the default behavior of a domain controller is to immediately enroll for a Domain Controller or Domain Controller Authentication certificate as soon as an Enterprise CA is detected in the forest (Windows 2000 DCs will attempt to enroll for a Domain Controller certificate; Windows Server 2003 and higher will attempt to enroll for a Domain Controller Authentication certificate).

You may not want to issue any certificates immediately after a CA has been installed, so you can use the LoadDefaultTemplates setting to prevent the default templates from being added to the Enterprise CA. If there are no templates configured on the CA then it can issue no certificates.

On Windows Server 2003 and Windows Server 2003 R2, the LoadDefaultTemplates setting only applies to a root Enterprise CA. It is ignored on a subordinate Enterprise CA.

On Windows Server 2008 and Windows Server 2008 R2, the LoadDefaultTemplates setting applies to both root and subordinate Enterprise CAs.

AlternateSignatureAlgorithm configures the CA to support the PKCS#1 V2.1 signature format for both the CA certificate and certificate requests. When set to 1 on a root CA the CA certificate will include the PKCS#1 V2.1 signature format. When set on a subordinate CA, the subordinate CA will create a certificate request that includes the PKCS#1 V2.1 signature format.

ForceUTF8 changes the default encoding of relative distinguished names (RDNs) in Subject and Issuer distinguished names to UTF-8. Only those RDNs that support UTF-8, such as those that are defined as Directory String types by an RFC, are affected. For example, the RDN for Domain Component (DC) supports encoding as either IA5 or UTF-8, while the Country RDN (C) only supports encoding as a Printable String. The ForceUTF8 directive will therefore affect a DC RDN but will not affect a C RDN.

Finally, EnableKeyCounting configures the CA to increment a counter every time the CA’s signing key is used. Do not enable this setting unless you have a Hardware Security Module (HSM) and associated cryptographic service provider (CSP) that supports key counting. Neither the Microsoft Strong CSP nor the Microsoft Software Key Storage Provider (KSP) supports key counting.

For more caveats to be aware of when using key counting, please review the following KB article:

951721 The certification authority startup event in the Security log always reports a usage count of zero for the signing key on a computer that is running Windows Server 2008 or Windows Server 2003

Conclusion

There we go. We’ve finally finished the list of all the settings you can configure via the CAPolicy.inf file.

I had at first considered putting all the sections I talked about above into one file so you could see how a “finished” CAPolicy.inf file would look. Then I realized that would be a monumentally bad idea seeing as, with the exception of the [Version] section, everything covered above is totally optional – perhaps with some settings even being contradictory. I’d hate to be responsible for a bad sample CAPolicy.inf file bouncing around the Internet.

The settings that you will want to configure in your CAPolicy.inf file will completely depend on your needs, and will vary between root CAs and subordinate CAs. I certainly hope that you find this information useful.

- Jonathan ‘Small bills only’ Stephens


Friday Mail Sack – While the Ned’s Away Edition


Hello Internet! Last week, Ned said there wouldn’t be a Mail Sack this week because he was going to be out of town. Well, the DS team was sitting around during our “Ned is out of our hair for a few days” party and we decided that since this is a Team Blog after all, we’d go ahead and post a Friday Mail Sack. So even though the volume was a little light this week, perhaps due to Ned’s announcement, we put one together all by ourselves.

So without further ado, here is this week’s Ned-less Mail Sack.

Certificate Template Supersedence

Q: I’m using the Certificate Wizard in OCS to generate a certificate request and submit it to my Enterprise CA. My CA isn’t configured to issue certificates based on the Web Server template, but I have duplicated the Web Server template and modified the settings. My new template is configured to supersede the Web Server template.

The request fails. Why doesn’t the CA issue the certificate based on my new template if it supersedes the default Web Server template?

A: While that would be a really cool feature, that’s not how supersedence works. Supersedence is used when you want to replace certificates that have already been issued with a new certificate that has modified settings. In addition, it only works with certificates that are being managed by Windows Autoenrollment.

For example, the Administrator has enabled Autoenrollment in the Computer Configuration of the Default Domain Policy:

[Screenshot: Autoenrollment enabled in Group Policy]

Further, the Administrator has granted the Domain Computers group permission to Autoenroll for the Corporate Computer template. Appropriately, every Windows workstation and member server in the domain enrolls for a certificate based on this template.

Later, the Administrator decides that she needs to update the template in some fashion – add a new certificate purpose to the Enhanced Key Usage, change a key option, whatever. Our intrepid Admin duplicates her Corporate Computer template and creates a new Better Corporate Computer template. In the properties of this new template, she adds the now obsolete Corporate Computer template to the Superseded Templates list.

[Screenshot: the Superseded Templates tab of the new template]

The Admin clicks Ok to commit the changes and then sits back and waits for all of the workstations and member servers in the domain to update their certificate. So how does that work, exactly?

On each workstation and member server, the Autoenrollment service wakes up about every 8 hours and checks to see if it has any work to do. As this occurs on each Windows computer, Autoenrollment determines that it is enabled by policy and checks Active Directory for a list of templates. It discovers that there is a new template for which the computer has Autoenrollment permissions, and that this new template is configured to supersede the template on which a certificate it already has is based.

The Autoenrollment service then archives the current certificate and enrolls for a new certificate based on the superseding template.

In summary, supersedence doesn’t change the behavior of the CA at all, so you can’t use it to control how the CA will respond when it receives a request for a certain template. No, supersedence is merely a hint to tell Autoenrollment on the client that it needs to replace an existing certificate.

Active Directory Web Services

Q: I’m seeing the following warning event recorded in the Active Directory Web Services event log about once a minute.

Log Name:      Active Directory Web Services
Source:        ADWS
Date:          4/8/2010 3:13:53 PM
Event ID:      1209
Task Category: ADWS Instance Events
Level:         Warning
Keywords:      Classic
User:          N/A
Computer:      corp-adlds-01.corp.contoso.com
Description:
Active Directory Web Services encountered an error while reading the settings for the specified Active Directory Lightweight Directory Services instance.  Active Directory Web Services will retry this operation periodically.  In the mean time, this instance will be ignored.
Instance name: ADAM_ContosoAddressbook

I can’t find any Microsoft resources to explain why this event occurs, or what it means.

A: Well…we couldn’t find any documentation either, but we were curious ourselves so we dug into the problem. It turns out that event is only recorded if ADWS can’t read the ports that AD LDS is configured to use for LDAP and Secure LDAP (SSL). In our test environment, we deleted those values and restarted the ADWS service, and sure enough, those pesky warning events started getting logged.

The following registry values are read by ADWS:

Key: HKLM\SYSTEM\CurrentControlSet\Services\<ADAM_INSTANCE_NAME>\Parameters
Value: Port LDAP
Type: REG_DWORD
Data: 1 - 65535 (default: 389)

Key: HKLM\SYSTEM\CurrentControlSet\Services\<ADAM_INSTANCE_NAME>\Parameters
Value: Port SSL
Type: REG_DWORD
Data: 1 - 65535 (default: 636)

Verify that the registry values described above exist and have the appropriate values. Also verify that the NT AUTHORITY\SYSTEM account has permission to read the values. ADWS runs under the Local System account.

Once you've corrected the problem, restart the ADWS service. If you have to recreate the registry values because they've been deleted, restart the AD LDS instance before restarting the ADWS service.
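Here's a minimal PowerShell sketch to verify both values at once (substitute your own instance name for ADAM_ContosoAddressbook):

$params = 'HKLM:\SYSTEM\CurrentControlSet\Services\ADAM_ContosoAddressbook\Parameters'
Get-ItemProperty -Path $params | Select-Object 'Port LDAP','Port SSL'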

Thanks for sending us this question. We’ve created the necessary internal documentation, and if we see more issues like this we’ll promote it to the Knowledge Base.

Final Note

Well…that’s it for this week. Please keep posting your comments, observations, topic ideas and questions. And fear not, Ned will be back next week.

Jonathan “The Pretender” Stephens

Blog Platform Migration Complete


Hello, Internetz. Jonathan here again. Ned didn’t tell you the whole story. Not only did I have to wait for the truth serum to wear off; I also had to chew my way out of the straps. Nevertheless, I’ve emerged victorious and have again successfully stormed the AskDS gates and vanquished Ned. Don’t fear for the little Neebler, though. Yes, he’s been jammed into a steel drum along the side of one of our nation’s great highways, but he’s being fed well through the bung hole, mostly, and he has a nice view of the Interstate. I hope he enjoys playing Punch Buggy with himself.

Of course, knowing Ned, I give him about a week before he escapes, so let’s make the most of that time, shall we?

AskDS has been successfully migrated to our new blog platform. Unfortunately, the backup that was restored after the migration was older than we thought, so we appear to have lost some of our more recent posts. I’m working to re-post those articles now. Please let us know if I missed one.

--Jonathan “Pretender, Redux” Stephens

AskDS is 0.03 Centuries Old Today


Three years ago today the AskDS site published its first post and had its first commenter. In the meantime we’ve created 455 articles and we’re now ranked 6th in all of TechNet’s blogs, behind AskPerf, Office2010, MarkRussinovich, SBS, and HeyScriptingGuy. That’s a pretty amazing group to be lumped in with for traffic, I don’t mind saying. Especially Mark, he has incredible hair.

Without your visits we wouldn’t be here to celebrate another weirdly composed Office Clipart birthday.


Thanks everyone,

- Ned “and the rest of the AskDS contributors” Pyle

Moving Your Organization from a Single Microsoft CA to a Microsoft Recommended PKI


Hi, folks! Jonathan here again, and today I want to talk about what appears to be an increasingly common topic: migrating from a single Windows Certification Authority (CA) to a multi-tier hierarchy. I’m going to assume that you already have a basic understanding of Public Key Infrastructure (PKI) concepts, i.e., you know what a root CA is versus an issuing CA, and you understand that Microsoft CAs come in two flavors – Standalone and Enterprise. If you don’t know those things then I recommend that you take a look at this before proceeding.

It seems that many organizations installed a single Windows CA to support whatever major project required it at the time. Perhaps they were rolling out System Center Configuration Manager (SCCM), or wireless, or some other certificate-consuming technology, and one small line item in the project’s plan was “Install a CA.” Over time, though, this single CA began to see a lot of use as it was leveraged more and more for purposes other than originally conceived. Suddenly, there is a need for a proper Public Key Infrastructure (PKI) and administrators are facing some thorny questions:

  1. Can I install multiple PKIs in my forest without them interfering with each other?
  2. How do I set up my new PKI properly so that it is scalable and manageable?
  3. How do I get rid of my old CA without causing an interruption in my business?

I’m here to tell you that you aren’t alone. There are many organizations in the same situation, and there are good answers to each of these questions. More importantly, I’m going to share those answers with you. Let’s get started, shall we?

Important Note: This blog post does not address the private key archival scenario. Stay tuned for a future blog post on migrating archived private keys from one CA to another.

Multiple PKIs In The Forest? Isn’t That Like Two Cats Fighting Over the Same Mouse?

Uh….no.

(You know, I actually considered asking Ned to find some Office clip art that showed two cats fighting over a mouse, and then thought, “What if he found it?!” I decided I didn’t really want to know and bagged the idea.)

To be clear, there is absolutely no issue with installing multiple Windows root CAs in the same forest. You can deploy your new PKI and keep it from issuing certificates to your users or computers until you are good and ready for it to do so. And while you’re doing all this, the old CA will continue to chug along oblivious to the fact that it will soon be removed with extreme prejudice.

Each Windows CA you install requires some objects created for it in Active Directory. If the CA is installed on a domain member these objects are created automatically. If, on the other hand, you install the CA on a workgroup computer that is disconnected from the network, you’ll have to create these objects yourself.

Regardless, all of these objects exist under the following container in Active Directory:

CN=Public Key Services, CN=Services, CN=Configuration, DC=<forestRootPartition>

As you can see, these objects are located in the Configuration partition of Active Directory which explains why you have to be an Enterprise Admin in order to install a CA in the forest. The Public Key Services Container holds the following objects:

CN=AIA Container

AIA stands for Authority Information Access, and this container is the place where each CA will publish its own certificate for applications and services to find if needed. The AIA container holds certificationAuthority objects, one for each CA. The name of the object matches the canonical name of the CA itself.

CN=CDP Container

CDP stands for CRL Distribution Point (and CRL stands for Certificate Revocation List). This container is where each CA publishes its list of revoked certificates to Active Directory. In this container, you’ll find another container object whose common name matches the host name of the server on which Certificate Services is installed – one for each Windows CA in your forest. Within each server container is a cRLDistributionPoint object named for the CA itself. The actual CRL for the CA is published to this object.

CN=Certificate Templates Container

The Certificate Templates container holds a list of pKICertificateTemplate objects, each one representing one of the templates you see in the Certificate Templates MMC snap-in. Certificate templates are shared objects, meaning they can be used by any Enterprise CA in the forest. There is no CA-specific information stored on these objects.

CN=Certification Authorities Container

The Certification Authorities container holds a list of certificationAuthority objects representing each root CA trusted by the Enterprise. Any root CA certificate published here is distributed to each and every member of the forest as a trusted root. A Windows root CA installed on a domain server will publish its certificate here. If you install a root CA on a workgroup server you’ll have to publish the certificate here manually.

CN=Enrollment Services

The Enrollment Services container holds a list of pKIEnrollmentService objects, each one representing an Enterprise CA installed in the forest. The pKIEnrollmentService object is used by Windows clients to locate a CA capable of issuing certificates based on a particular template. When you add a certificate template to a CA via the Certification Authority snap-in, that CA’s pKIEnrollmentService object is updated to reflect the change.

Other Containers

There are a few other objects and containers in the Public Key Services container, but they are beyond the scope of this post. If you’re really interested in the nitty-gritty details, post a comment and I’ll address them in a future post.
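If you’d like to poke around these containers in your own forest, here’s a minimal sketch using the Active Directory PowerShell module (available on Windows Server 2008 R2 or with RSAT):

Import-Module ActiveDirectory
$configNC = (Get-ADRootDSE).configurationNamingContext
Get-ADObject -Filter * -SearchBase "CN=Public Key Services,CN=Services,$configNC" |
    Format-Table Name, ObjectClass -AutoSize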

To summarize, let’s look at a visual of each of these objects and containers and see how they fit together. I’ve diagrammed out an environment with three CAs. One is the Old And Busted CA, which has been tottering along for years ever since Bob the network admin put it up to issue certificates for wireless authentication.

Now that Bob has moved on to new and exciting opportunities in the field of food preparation and grease trap maintenance after that unfortunate incident with the misconfigured VLANs, his successor, Mike, has decided to deploy a new, enterprise-worthy PKI.

To that end, Mike has deployed the New Hotness Root CA, along with the More New Hotness Issuing CA. The New Hotness Root CA is an offline Standalone root, meaning it is running the Windows CA in Standalone mode on a workgroup server disconnected from the network. The New Hotness Issuing CA, however, is an online issuing CA. It’s running in Enterprise mode on a domain server.

Let’s see what the AD objects for these CAs look like:


Figure 1: Sample PKI AD objects

We’ve come an awful long way to emphasize one simple point. As you can see, each PKI-related object in Active Directory is uniquely named, either for the CA itself or the server on which the CA is installed. Because of this, you can install a (uniquely named) CA on every server in your environment and not run into the sort of conflict that some customers fear when I talk to them about this topic. You could also press your tongue against a metal pole in the dead of winter. Of course, it would hurt, and you’d look silly, but you could do it. Same concept applies here.

So what’s the non-silly approach?

The Non-Silly Approach

If you need to migrate your organization from the Old And Busted CA to the New Hotness PKI, then the very first thing you should do is deploy the new PKI. This requires proper planning, of course; select your platform, locate your servers, that sort of thing. I encourage you to use a Windows Server 2008 R2 platform. WS08R2 CAs are supported with a minimum schema version of 30 which means you do not need to upgrade your Windows Server 2003 domain controllers. More details are here.

Once your planning is complete, deploy your new PKI. Actual step-by-step guidance is beyond the scope of this blog post, but it is pretty well covered elsewhere. You should first take a look at the Best Practices for Implementing a Microsoft Windows Server 2003 Public Key Infrastructure. Yes, I realize this was written for Windows Server 2003, but the concepts are identical for Windows Server 2008 and higher, and the scripts included in the Best Practices Guide are just as useful for the later platforms. It is also true that the Guide describes setting up a three-tiered hierarchy, but again, you can easily adapt the prescriptive guidance to a two-tiered hierarchy. If you want help with that then you should take a look at this post.

The major benefit to using Windows Server 2008 or higher is a neat little addition to the CAPolicy.INF file. When you install a new Enterprise CA, it is preconfigured with a set of default certificate templates, and it is immediately ready to start issuing certificates based on them. You don’t really want the CA to issue any certificates until you’re good and ready for it to do so. If the Enterprise CA weren’t configured with any templates by default, it wouldn’t issue any certificates after the CA starts up; when you were ready to switch over to the new PKI, you’d just configure the issuing CA with the appropriate templates. It turns out that as of Windows Server 2008 you can install an Enterprise issuing CA so that the default certificate templates are not automatically configured on the CA. You accomplish this by adding a line to the CAPolicy.inf file:

LoadDefaultTemplates=False
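For context, here is a minimal CAPolicy.inf showing where that line lives -- a bare-bones sketch rather than a complete production example (the file goes in %systemroot% before you install the CA role):

[Version]
Signature="$Windows NT$"

[certsrv_server]
LoadDefaultTemplates=False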

Now, if at this point you’re wondering, “What is a CAPolicy.INF file, and how is it involved in setting up a CA,” then guess what? That is your clue that you need to read the Best Practices Guide linked above. It’s all in there, including samples.

“Oh…but the samples are for Windows Server 2003,” you say, accusingly. Relax; here’s a blog post I wrote earlier fully documenting the Windows Server 2008 R2 CAPolicy.INF syntax. Again, the concepts and broad strokes are all the same; just some minor details have changed. Use my earlier post to supplement the Best Practices Guide and you’ll be golden.

I Have My New PKI, So Now What?

So you have your new PKI installed and you’re ready to migrate your organization over to it. How does one do that without impacting one’s organization too severely?

The first thing you’ll want to do is prevent the old CA from issuing any new certificates. You could just uninstall it, of course, but that would cause considerable problems. What do you think would happen if that CA’s published CRL expired and it wasn’t around to publish a new one? Depending on the application using those certificates, they’d all fail to validate and become useless. Wireless clients would fail to connect, smart card users would fail to authenticate, and all sorts of other bad things would occur. The goal is to prevent any career-limiting outages, so you shouldn’t just uninstall that CA.

No, you should instead remove all the templates from the Certificate Templates folder using the Certification Authority MMC snap-in on the old CA. If an Enterprise CA isn’t configured with any templates, it can’t issue any new certificates. On the other hand, it is still quite capable of refreshing its CRL, and this is exactly the behavior you want. At the same time, you’ll want to add those same templates you removed from the Old And Busted CA into the Certificate Templates folder on the New Hotness Issuing CA.
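If you’d rather script these template changes than click through the snap-in, certutil.exe can do it with the -SetCATemplates verb. A rough sketch using example template names -- run the first command on the old CA and the second on the new one:

C:\>Certutil -SetCATemplates -User -Machine
C:\>Certutil -SetCATemplates +User +Machine

The minus sign removes a template from the CA’s list, and the plus sign adds one.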

If you modify the contents of the Certificate Templates folder for a particular CA, that CA’s pKIEnrollmentService object must be updated in Active Directory. That means that you will have some latency as the changes replicate amongst your domain controllers. It is possible that some user in an outlying site will attempt to enroll for a certificate against the Old And Busted CA and that request will fail, because the Old And Busted CA itself knows immediately that it should no longer issue any certificates. Given time, though, that error condition will fade as all domain controllers get the new changes. If you’re extremely sensitive to that kind of failure, however, then just add your templates to the New Hotness Issuing CA first, wait a day (or whatever your end-to-end replication latency is) and then remove those templates from the Old And Busted CA. In the long run, it won’t matter if the Old And Busted CA issues a few last-minute certificates.

At this point all certificate requests within your organization will be processed by the New Hotness Issuing CA, but what about all those certificates issued by the Old And Busted CA that are still in use? Do you have to manually go to each user and computer and request new certificates? Well…it depends on how the certificates were originally requested.

Manually Requested

If a certificate has been manually requested then, yes, in all likelihood you’ll need to manually update those certificates. I’m referring here to those certificates requested using the Certificates MMC snap-in, or through the Web Enrollment pages. Unfortunately, there’s no automatic management for certificates requested manually. In reality, though, refreshing these certificates probably means changing some application or service so it knows to use the new certificate. I refer here specifically to Server Authentication certificates in IIS, OCS, SCCM, etc. Not only do you need to change the certificate, but you also need to reconfigure the application so it will use the new certificate. Given this situation, it makes sense to make your necessary changes gradually. Presumably, there is already a procedure in place for updating the certificates used by these applications I mentioned, among others I didn’t, as the current certificates expire. As time passes and each of these older, expiring certificates is replaced by a new certificate issued by the new CA, you will gradually wean your organization off of the Old And Busted CA and onto the New Hotness Issuing CA. Once that is complete you can safely decommission the old CA.

And it isn’t as though you don’t have a deadline. As soon as the Old And Busted CA certificate itself has expired you’ll know that any certificate ever issued by that CA has also expired. The Microsoft CA enforces such validity period nesting of certificates. Hopefully, though, that means that all those certificates have already been replaced, and you can finally decommission the old CA.

Automatically Enrolled

Certificate Autoenrollment was introduced in Windows XP, and it allows the administrator to assign certificates based on a particular template to any number of forest users or computers. Triggered by the application of Group Policy, this component can enroll for certificates and renew them when they get old. Using Autoenrollment, one can easily deploy thousands of certificates very, very quickly. Surely, then, there must be an automated way to replace all those certificates issued by the previous CA?

As a matter of fact, there is.

As described above, the new PKI is up and ready to start issuing digital certificates. The old CA is still up and running, but all the templates have been removed from the Certificate Templates folder so it is no longer issuing any certificates. But you still have literally thousands of automatically enrolled certificates outstanding that need to be replaced. What do you do?

In the Certificates Templates MMC snap-in, you’ll see a list of all the templates available in your enterprise. To force all holders of a particular certificate to automatically enroll for a replacement, all you need to do is right-click on the template and select Reenroll All Certificate Holders from the context menu.

clip_image002

What this actually does is increment the major version number of the certificate template in question. This change is detected by the Autoenrollment component on each Windows workstation and server prompting them to enroll for the updated template, replacing any certificate they may already have. Automatically enrolled user certificates are updated in the exact same fashion.

Now, how long it takes for each certificate holder to actually finish enrolling will depend on how many there are and how they connect to the network. For workstations that are connected directly to the network, user and computer certificates will be updated at the next Autoenrollment pulse.

Note: For computers, the autoenrollment pulse fires at computer startup and every eight hours thereafter. For users, the autoenrollment pulse fires at user logon and every eight hours thereafter. You can manually trigger an autoenrollment pulse by running certutil -pulse from the command line. Certutil.exe is installed with the Windows Server 2003 Administrative Tools Pack on Windows XP, but it is installed by default on the other currently supported versions of Windows.

For computers that only connect by VPN it may take longer for certificates to be updated. Unfortunately, there is no blinking light that says all the certificate holders have been reenrolled, so monitoring progress can be difficult. There are ways it could be done -- monitoring the certificates issued by the CA, using a script to check workstations and servers and verify that the certificates are issued from the new CA, etc. -- but they require some brain and brow work from the Administrator.

There is one requirement for this reenrollment strategy to work. In the group policy setting where you enable Autoenrollment, you must have the following option selected: Update certificates that use certificate templates.

clip_image003

If this policy option is not enabled then your autoenrolled certificates will not be automatically refreshed.

Remember, there are two autoenrollment policies -- one for the User Configuration and one for the Computer Configuration. This option must be selected in both locations in order to allow the Administrator to force both computers and users to reenroll for an updated template.

But I Have to Get Rid of the Old CA!

As I said earlier, once you’ve configured the Old And Busted CA so that it will no longer issue certificates, you shouldn’t need to touch it again until all the certificates issued by that CA have expired. As long as the CA continues to publish a revocation list, all the certificates issued by that CA will remain valid until they can be replaced. But what if you want to decommission the Old And Busted CA immediately? How could you make sure that your outstanding certificates would remain viable until you can replace them with new certificates? Well, there is a way.

All X.509 digital certificates have a validity period, a defined time interval with fixed start and end dates between which the certificate is considered valid unless it has been revoked. Once the certificate is expired there is no need to check a certificate revocation list (CRL) -- the certificate is invalid regardless of its revocation status. Revocation lists also have a validity period during which time each is considered an authoritative list of revoked certificates. Once the CRL has expired it can no longer be used to check revocation status; a client must retrieve a new CRL.

You can use this to your advantage by extending the validity period of the Old And Busted CA’s CRL in the CA configuration to match (or exceed) the remaining lifetime of the CA certificate. For example, if the Old And Busted CA’s certificate will be valid for the next 4 years, 3 months, and 10 days, then you can set the publication interval for the CA’s CRL to 5 years and immediately publish it. The newly published CRL will remain valid for the next five years, and as long as you leave that CRL published in the defined CRL distribution points -- Active Directory and/or HTTP -- clients will continue to use it for checking revocation status. You no longer need the actual CA itself so you can uninstall it.
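In command form, that CRL extension looks something like this -- the five year value comes from the example above, so adjust it to your own CA certificate’s remaining lifetime:

C:\>Certutil -setreg CA\CRLPeriodUnits 5
C:\>Certutil -setreg CA\CRLPeriod "Years"
C:\>net stop certsvc
C:\>net start certsvc
C:\>Certutil -crl

Certificate Services has to be restarted to pick up the new registry values, and the final command publishes the fresh, long-lived CRL. Then you can proceed with the uninstall.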

One drawback to this, however, is that you won’t be able to easily add any certificates to the revocation list. If you need to revoke a certificate after you’ve decommissioned the CA, then you’ll need to re-sign the existing CRL with the certutil.exe -sign verb, adding the certificate’s serial number to the revocation list.

Certutil.exe -sign “Old And Busted CA.crl” “Old And Busted CA Updated.crl” +<serialNumber>

Of course, this requires that you keep the private keys associated with the CA, so you’d better back up the CA’s keys before you uninstall the role.

Conclusion

Wow…we’ve covered a lot of information here, so I’ll try to boil all of it down to the most important points. First, yes, you can have multiple root CAs and even multiple PKIs in a single Active Directory forest. Because of the way the objects representing those CAs are named and stored, you couldn’t possibly experience a conflict unless you tried to give more than one CA the same CA name.

Second, once the new PKI is built you’ll want to configure your old CA so that it no longer issues certificates. That job will now belong to the issuing CA in your new PKI.

Third, the ease with which you can replace all the certificates issued by the old CA with certificates issued by your new CA will depend mainly on how the certificates were first deployed. If all of your old certificates were requested manually then you will need to replace them in the same way. The easiest way to do that is to replace them all gradually as they expire. On the other hand, if your old certificates were deployed via autoenrollment then you can trigger all of your autoenrollment clients to replace the old certificates with new ones from the new PKI. You can do this through the Certificate Templates MMC snap-in.

And finally, what do you do with the old CA? Well, if you don’t need the equipment you can just keep it around until it either expires or all the old certificates have been replaced. If, however, you want to get rid of it immediately you can extend the lifetime of the old CA’s CRL to match the remaining validity period of the CA certificate. Just publish a new CRL and it’ll be good until all outstanding certificates have expired. Just keep in mind that this route will limit your ability to revoke those old certificates.

If you think I missed something, or you want me to clarify a certain point, please feel free to post in the comments below.

Jonathan “Man in Black” Stephens

PS: Don’t ever challenge my Office clip art skills again, Jonathan.

image

- Ned

The Case of the Enormous CA Database


Hello, faithful readers! Jonathan here again. Today I want to talk a little about Certification Authority monitoring and maintenance. This topic was brought to my attention by a recent case that I had where a customer’s CA database had grown to rather elephantine proportions over the course of many months quite unbeknownst to the administrators. In fact, the problem didn’t come to anyone’s attention until the CA database had consumed nearly all of the 55 GB partition on which it resided. How many of you may be in this same situation and be completely unaware of it? Hmmm? Well, in this post, I’ll first go over the details of the issue and the steps we took to resolve the immediate crisis. In the second part, I’ll cover some processes and tools you can put in place to both maintain your CA database and also alert you to possible problems that may increase its size.

The Issue

Once upon a time, Roger contacted Microsoft Support and reported that he had a problem. His Windows Server 2003 Enterprise CA database, which had been given its own partition, had grown to over 50 GB in size, and was still growing. The partition itself was only 55 GB in size, so Roger asked if there is any way to compact the CA database before the CA failed due to a lack of disk space.

Actually, compacting the CA database is a simple process, and while this isn’t a terribly common request we’re pretty familiar with the steps. What made this case so unusual was the sheer size of the database file. Previously, the largest CA database I’d ever seen was only about 21 GB, and this one was over twice that size! But no matter. The principles are the same regardless, and so we went to it.

Compacting the CA Database

Compacting a CA database is essentially a two-step process. The first step is to delete any unnecessary rows from the CA database. This will leave behind what we call white space in the database file that can be reused by the CA for any new records that it adds. If we just removed the unneeded records the size of the database file would not be reduced, but we could be confident that the database file would grow no larger in size.

If the database file were smaller, this might be an acceptable solution. In this case, the size of the database file relative to the size of the partition on which it resided mandated that we also compact the database file itself.

If you are familiar with compacting the Active Directory database on a domain controller, then you will realize that this process is identical. A new database file is created and all the active records are copied from the old database file to the new database file, thus removing any of the white space. When finished, the old database file is deleted and the new file is renamed in place with the name of the old file. Certificate Services must be stopped while the compaction is actually being performed.

At the end of this process, we should have a significantly smaller database file, and with appropriate monitoring and maintenance in the future we can ensure that it never reaches such difficult to manage proportions again.

What to Delete?

What rows can we safely delete from the CA database? First, you need to have a basic understanding of what exactly is stored in the CA database. When a new certificate request is submitted to the CA, a new row is created in the database. As that request is processed by the CA, the various fields in that row are updated, and the row’s status at any particular point in time describes where in the process the request is. What are the possible states for each row?

  • Pending - A pending request is basically on hold until an Administrator manually approves the request. When approved, the request is re-submitted to the CA to be processed. On a Standalone CA, all certificate requests are pended by default. On an Enterprise CA, certificate requests are pended if the option to require CA Manager approval is selected in the certificate template.
  • Failed - A failed request is one that has been denied by the CA because the request isn’t suitable per the CA’s policy, or there was an error encountered while generating the certificate. One example of such an error is if the certificate template is configured to require key archival, but no Key Recovery Agents are configured on the CA. Such a request will fail.
  • Issued - The request has been processed successfully and the certificate has been issued.
  • Revoked - The certificate request has been processed and the certificate issued, but the administrator has revoked the certificate.

In addition, issued and revoked certificates can either be time valid or expired.

These states, and whether or not a certificate is expired, need to be taken into account when considering which rows to delete. For example, you do not want to delete the row for a time valid, issued certificate, and in fact, you won’t be able to. You won’t be able to delete the row for a time valid, revoked certificate either because this information is necessary in order for the CA to periodically build its certificate revocation list (CRL).

Once a certificate has expired, however, then Certificate Services will allow you to delete its row. Expired certificates are no longer valid on their face, so there is no need to retain any revocation status. On the other hand, if you’ve enabled key archival then you may have private keys stored in the database row as well, and if you delete the row you’d never be able to recover those private keys.

That leaves failed and pending requests. These rows are just requests; there are no issued certificates associated with them. In addition, while technically a failed request can be resubmitted to the CA by the Administrator, unless the cause of the original failure is addressed there is little purpose in doing so. In practice, you can safely delete failed requests. Any pending requests should probably be examined by an Administrator before you delete them. A pending request means that someone out there has an outstanding certificate request for which they are patiently waiting on an answer. The Administrator should go through and either issue or deny any pending requests to clear that queue, rather than just deleting the records.
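If you want to eyeball those pending requests before deciding anything, certutil.exe can query the database by disposition; 9 is the disposition value for pending requests, and the column list here is just a readable subset:

C:\>Certutil -view -restrict "Disposition=9" -out "Request.RequestID,Request.RequesterName,Request.SubmittedWhen"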

In this customer’s case, we decided to delete all the failed requests. But first, we had to determine exactly why the database had grown to such huge proportions.

Fix the Root Problems, First

Before you start deleting the failed requests from the database, you should ensure that you have addressed any configuration issues that led to these failures to begin with. Remember, Roger reported that the database was continuing to grow in size. It would make little sense to start deleting failed requests -- a process that requires that the CA be up and running -- if there are new requests being submitted to the CA and subsequently failing. The rows you delete could just be replaced by more failed rows and you’ll have gained nothing.

In this particular case, we found that there were indeed many request failures still being reported by the CA. These had to be addressed before we could actually do anything about the size of the CA database. When we checked the application log, we saw that Certificate Services was recording event ID 53 warnings and event ID 22 errors for multiple users. Let’s look at these events.

Event ID 53

Event ID 53 is a warning event indicating that the submitted request was denied, and containing information about why it was denied. This is a generic event whose detailed message takes the form of:

Certificate Services denied request %1 because %2. The request was for %3. Additional information: %4

Where:

%1: Request ID
%2: Reason request was denied
%3: Account from which the request was submitted
%4: Additional information

In this particular case, the actual event looked like this:

Event Type:   Warning

Event Source:CertSvc

Event Category:      None

Event ID:     53

Date:         <date>

Time:         <time>

User:         N/A

Computer:     <CA server>

Description:

Certificate Services denied request 22632 because The EMail name is unavailable and cannot be added to the Subject or Subject Alternate name. 0x80094812 (-2146875374).  The request was for CORP02\jackburton.  Additional information: Denied by Policy Module

This event means that the certificate template is configured to include the user’s email address in the Subject field, the Subject Alternative Name extension, or both, and that this particular user does not have an email address configured. When we looked at the users for which this event was being recorded, they were all either service accounts or test users. These are accounts for which there would probably be no email address configured under normal circumstances. Contributing to the problem was the fact that user autoenrollment had been enabled at the domain level by policy, and the Domain Users group had permissions to autoenroll for this particular template.

In general, one probably shouldn’t configure autoenrollment for service accounts or test accounts without specific reasons. In this case, simple User certificates intended for “real” users certainly don’t apply to these types of accounts. The suggestion in this case would be to create a separate OU wherein user autoenrollment is disabled by policy, and then place all service and test accounts in that OU. Another option is to create a group for all service and test accounts, and then deny that group Autoenroll permissions on the template. Either way, these particular users won’t attempt to autoenroll for the certificates intended for your users which will eliminate these events.

For information on troubleshooting other possible causes of these warning events, check out this link.

Event ID 22

Event ID 22 is an error event indicating that the CA was unable to process the request due to an internal failure. Fortunately, this event also tells you what the failure was. This is a generic event whose detailed message takes the form of:

Certificate Services could not process request %1 due to an error: %2. The request was for %3. Additional information: %4

Where:

%1: Request ID
%2: The internal error
%3: Account from which the request was submitted
%4: Additional information

In this particular case, the actual event looked like this:

Event Type:   Error

Event Source:CertSvc

Event Category:      None

Event ID:     22

Date:         <date>

Time:         <time>

User:         N/A

Computer:     <CA server>

Description:

Certificate Services could not process request 22631 due to an error: Cannot archive private key.  The certification authority is not configured for key archival. 0x8009400a (-2146877430).  The request was for CORP02\david.lo.pan.  Additional information: Error Archiving Private Key

This event means that the certificate template is configured for key archival but the CA is not. A CA will not accept the user’s encrypted private key in the request if no valid Key Recovery Agents (KRAs) are configured. The fix for this is pretty simple for our current purposes; disable key archival in the template. If you actually need to archive keys for this particular template then you should set that up before you start removing failed requests from your database. Here are some links to more information on that topic:

Key Archival and Recovery in Windows Server 2003
Key Archival and Recovery in Windows Server 2008 and Windows Server 2008 R2

Template, Template, Where’s the Template?

What’s the fastest way to determine which template is actually associated with each of these events? You can find that by looking at the failed request entry in the Certification Authority MMC snap-in (certsrv.msc). If you have more than a couple hundred failed requests, however, finding the one you actually want can be difficult. This is where filtering the view comes in handy.

1. In the Certification Authority MMC snap-in, right-click on Failed Requests, select View, then select Filter….

clip_image001

2. In the Filter dialog box, click Add….

clip_image002

3. In the New Restriction dialog box, set the Request ID to the value that you see in the event, and click Ok.

clip_image003

4. In the Filter dialog box, click Ok.

clip_image004

5. Now you should see just the failed request designated in the event. Right-click on it, select All Tasks, and then select View Attributes/Extensions….

clip_image005

6. In the properties for this request, click on the Extensions tab. In the list of extensions, locate Certificate Template Information. The template name will be shown in the extension details.

clip_image006

This is the name of the template whose settings you should review and correct, if necessary.

Once the root problems causing the failed requests have been resolved, monitor the Application event log to ensure that Certificate Services is not logging any more failed requests. Some failed requests in a large environment are expected. That’s just the CA doing its job. What you’re trying to eliminate are the large bulk of the failures caused by certificate template and CA misconfiguration. Once this is complete, you’re ready to start deleting rows from the database.

Deleting the Failed Requests

The next step in this process is to actually delete the rows using our trusty command line utility certutil.exe. The -deleterow verb, introduced in Windows Server 2003, can be used to delete rows from the CA database. You just provide it with the type of records you want deleted and a past date (if you use a date equal to the current date or later, the command will fail). Certutil.exe will then delete the rows of that type where the date the request was submitted to the CA (or the date of expiration, for issued certificates) is earlier than the date you provide. The supported types of records are:

Name      Description                        Type of date
-------   --------------------------------   ---------------
Request   Failed and pending requests        Submission date
Cert      Expired and revoked certificates   Expiration date
Ext       Extension table                    N/A
Attrib    Attribute table                    N/A
CRL       CRL table                          Expiration date

For example, if you want to delete all failed and pending requests submitted by January 22, 2001, the command is:

C:\>Certutil -deleterow 1/22/2001 Request

The only problem with this approach is that certutil.exe will only delete about 2,000 - 3,000 records at a time before failing due to exhaustion of the version store. Luckily, we can wrap this command in a simple batch file that runs the command over and over until all the designated records have been removed.

@echo off

:Top

rem Delete failed and pending requests submitted on or before the given date.
Certutil -deleterow 8/31/2010 Request

rem Loop if certutil failed because the ESE version store was exhausted;
rem any other exit code (including 0 for success) ends the script.
If %ERRORLEVEL% EQU -939523027 goto Top

This batch file runs certutil.exe with the -deleterow verb. If the command fails with the specific error code indicating that the version store has been exhausted, the batch file simply loops and the command is executed again. Eventually, the certutil.exe command will exit with an ERRORLEVEL value of 0, indicating success. The script will then exit.

Every time the command executes, it will display how many records were deleted. You may therefore want to redirect the output of the command to a text file from which you can total up these values and determine how many records in total were deleted.

In Roger’s case, the total number of deleted records came to about 7.8 million rows. Yes…that is 7.8 million failed requests. The script above ran for the better part of a week, but the CA was up and running the entire time so there was no outage. Indeed, the CA must be up and running for the certutil.exe command to work as certutil.exe communicates with the ICertAdmin COM interface of Certificate Services.

That is not to say that one should not take precautions ahead of time. We increased the base CRL publication interval to seven days and published a new base CRL immediately before starting to delete the rows. We also disabled delta CRLs temporarily while the script was running. We did this so that even if something unexpected happened, clients would still be able to check the revocation status of certificates issued by the CA for an extended period, giving us the luxury of time to take any necessary remediation steps. As expected, however, none were required.

And Finally, Compaction

The final step in this process is compacting the CA database file to remove all the white space resulting from deleting the failed requests from the database. This process is identical to defragmenting and compacting Active Directory’s ntds.dit file, as Certificate Services uses the same underlying database technology as Active Directory -- the Extensible Storage Engine (ESE).

Just as with AD, you must have free space on the partition equal to or greater than the database file size. As you’ll recall, we certainly didn’t have that in this case what with a database of 50 GB on a 55 GB partition. What do you do in this case? Move the database and log files to a partition with enough free space, of course.

Fortunately, Roger’s backing store was on a Storage Area Network (SAN), so it was trivial to slice off a new 150 GB partition and move the database and log files to the new, larger partition. We didn’t even have to modify the CA configuration as Roger’s storage admins were able to just swap drive letters since the only thing on the original partition was the CertLog folder containing the CA database and log files. Good planning, that.

With enough free space now available, all is ready to compact the database. Well…almost. You should first take the precaution of backing up the CA database prior to starting just in case something goes wrong. The added benefit to backing up the CA database is that you’ll truncate the database log files. In Roger’s case, after deleting 7.8 million records there were several hundred megabytes of log files. To back up just the CA database, run the following command:

C:\>Certutil -backupDB backupDirectory

The backup directory will be created for you if it does not already exist, but if it does exist, it must be empty. Once you have the backup, copy it somewhere safe. And now we’re finally ready to proceed.

To compact the CA database, stop and then disable Certificate Services. The CA cannot be online during this process. Next, run the following command:

C:\>Esentutl /d Path\CaDatabase.edb

Esentutl.exe will take care of the rest. In the background, esentutl.exe will create a temporary database file and copy all the active records from the current database file to the new one. When the process is complete, the original database file will be deleted and the temporary file renamed to match the original. The only difference is that the database file should be much smaller.

How much smaller? Try 2.8 GB. That’s right. By deleting 7.8 million records and compacting the database, we recovered over 47 GB of disk space. Your own mileage may vary, though, as it depends on the number of failed requests in your own database. To finish, we just copied the now much smaller database and log files to the original drive and then re-enabled and restarted Certificate Services.
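Incidentally, if you’re feeling cautious, you can also have ESE verify the integrity of the compacted database file before you re-enable and restart Certificate Services:

C:\>Esentutl /g Path\CaDatabase.edb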

While very time consuming, simply due to the sheer number of failed requests in the database, overall the operation went off without a hitch. And everyone lived happily ever after.

Preventative Maintenance and Monitoring

Now that the CA database is back down to its fighting weight, how do you make sure you keep it that way? There are actually several things you can do, including regular maintenance and, if you have the capability, closer monitoring of the CA itself.

Maintenance

You’ll remember that it was not necessary to take the CA offline while deleting the failed requests. We did take precautions by modifying the CRL publication interval but fortunately that turned out to be unnecessary. Since no outage is required to remove failed requests from the CA database, it should be pretty simple to get approval to add it to your regular maintenance cycle. (You do have one, right?) Every quarter or so, run the script to delete the failed requests. You can do it more or less often as is appropriate for your own environment.

You don’t have to compact the CA database each time. Remember, the white space will simply be reused by the CA for processing new requests. Over time, you may find that you reach a sort of equilibrium, especially if you also have the freedom to delete expired certificates as well (i.e., no Key Archival), where the CA database just doesn’t get any bigger. Rows are deleted and new rows are created in roughly equal numbers, and the space within the database file is reused over and over -- a state of happy homeostasis.

If you want, you can even use scheduled tasks to automatically perform this maintenance every three months. The batch file above can be rewritten in VBScript or even PowerShell. Simply add some code to email yourself a report when the deletion process is finished; there are plenty of code samples available on the web for sending email using both VBScript and PowerShell. Bing it!
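As a starting point, here’s a rough PowerShell sketch of that idea. The 90-day cutoff, addresses, and SMTP server are all placeholders to replace with your own values:

# Delete failed and pending requests older than 90 days, looping past
# the version store exhaustion error, then mail the output as a report.
$cutoff = (Get-Date).AddDays(-90).ToShortDateString()
$report = @()
do {
    $report += certutil.exe -deleterow $cutoff Request
} while ($LASTEXITCODE -eq -939523027)
Send-MailMessage -From 'ca-maint@contoso.com' -To 'pki-admins@contoso.com' `
    -Subject 'Quarterly CA database cleanup' -Body ($report | Out-String) `
    -SmtpServer 'mail.contoso.com'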

Monitoring

In addition to this maintenance, you can also use almost any monitoring or management software to watch for certain key events on the CA. Those key events? I already covered two of them above -- event IDs 53 and 22. For a complete list of events recorded by Certificate Services, look here.

If you have Microsoft Operations Manager (MOM) 2005 or System Center Operations Manager (SCOM) 2007 deployed, and you have Windows Server 2008 or Windows Server 2008 R2 CAs, then you can download the appropriate management pack to assist you with your monitoring.

MOM 2005: Windows Server 2008 Active Directory Certificate Services Management Pack for Microsoft OpsMgr 2005
SCOM 2007 SP1: Active Directory Certificate Services Monitoring Management Pack

The management packs encompass event monitoring and prescriptive guidance and troubleshooting steps to make managing your PKI much simpler. These management packs are only supported for CAs running on Windows Server 2008 or higher, so this is yet one more reason to upgrade those CAs.

Conclusion

Like any other infrastructure service in your enterprise environment, the Windows CA does require some maintenance and monitoring to maintain its viability over time. If you don’t pay attention to it, you may find yourself in a situation similar to Roger’s, not noticing the problem until it is almost too late to do anything to prevent an outage. With proper monitoring, you can become aware of any serious problems almost as soon as they begin, and with regular maintenance you prevent such problems from ever occurring. I hope you find the information in this post useful.

Jonathan “Pork Chop Express” Stephens

Friday Mail Sack: LeBron is not Jordan Edition


Hi folks, Ned here again. Today we discuss trusts rules around domain names, attribute uniqueness, the fattest domains we’ve ever seen, USMT data-only migrations, kicking FRS while it’s down, and a few amusing side topics.

Scottie, don’t be that way. Go Mavs.

Question

I have two forests with different DNS names, but with duplicate NetBIOS names on the root domain. Can I create a forest (Kerberos) trust between them? What about NTLM trusts between their child domains?

Answer

You cannot create external trusts between domains with the same name or SID, nor can you create Kerberos trusts between two forests with the same name or SID. This includes both the NetBIOS and FQDN version of the name – even if using a forest trust where you might think that the NB name wouldn’t matter – it does. Here I am trying to create a trust between fabrikam.com and fabrikam.net forests – I get the super useful error:

“This operation cannot be performed on the current domain”

image

But if you are creating external (NTLM, legacy) trusts between two non-root domains in two forests, as long as the FQDN and NB name of those two non-root domains are unique, it will work fine. They have no transitive relationship.

So in this example:

  • You cannot create a domain trust nor a forest trust between fabrikam.com and fabrikam.net
  • You can create a domain (only) trust between left.fabrikam.com and right.fabrikam.net
  • You cannot create a domain trust between fabrikam.com and right.fabrikam.net
  • You cannot create a domain trust between fabrikam.net and left.fabrikam.com

fabrikam1

Why don’t the last two work? Because the trust process thinks that the trust already exists due to the NetBIOS name match with the child’s parent. Arrrgh!

image

BUT!

You could still have serious networking problems in this scenario regardless of the trust. If there are two same-named domains physically accessible through the network from the same computer, there may be a lot of misrouted communication when people just use NetBIOS domain names. They need to make sure that no one ever has to broadcast NetBIOS to find anything – their WINS environments must be perfect in both forests and they should convert all their DFS to using DfsDnsConfig. Alternatively they could block all communication between the two root domains’ DCs, perhaps at a firewall level.
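For reference, DfsDnsConfig is a registry value set on each DFS root server (see KB 244380 for the full procedure); here’s a sketch of flipping it from the command line, assuming the standard key location described in that article:

C:\>reg add HKLM\SYSTEM\CurrentControlSet\Services\Dfs /v DfsDnsConfig /t REG_DWORD /d 1

The DFS service needs a restart afterward to pick up the change.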

Note: I am presuming your NetBIOS domain name matches the left-most part of the FQDN name. Usually it does, but that’s not a requirement (and not possible if you are using more than 15 characters in that name).

Question

Is it possible to enforce uniqueness for the sAMAccountName attribute within the forest?

Answer

[Courtesy of Jonathan Stephens]

The Active Directory schema does not have a mechanism for enforcing uniqueness of an attribute. Those cases where Active Directory does require an attribute to be unique in either the domain (sAMAccountName) or forest (objectGUID) are enforced by other code – for example, AD Users and Computers won’t let you do it:

image

The only way you could actually achieve this is to have a custom user provisioning application that would perform a GC lookup for an account with a particular sAMAccountName, and would only permit creation of the new object should no existing object be found.
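A rough sketch of that provisioning check using the Active Directory PowerShell module -- the names here are examples, and querying port 3268 of the forest DNS name lands the query on a Global Catalog:

Import-Module ActiveDirectory
$sam    = 'saradavis'            # candidate logon name
$forest = (Get-ADForest).Name
$dup = Get-ADObject -Server "${forest}:3268" -SearchBase "" -LDAPFilter "(sAMAccountName=$sam)"
if ($dup) { Write-Warning "'$sam' is already in use: $($dup.DistinguishedName)" }
else      { New-ADUser -Name 'Sara Davis' -SamAccountName $sam }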

[Editor’s note: If you want to see what happens when duplicate user samaccountname entries are created, try this on for size in your test lab:

1. Enable AD Recycle Bin.
2. Create an OU called Sales.
3. Create a user called 'Sara Davis' with a logon name and pre-windows 2000 logon name of 'saradavis'.
4. Delete the user.
5. In the Users container, create a user called 'Sara Davis' with a logon name and pre-windows 2000 logon name of 'saradavis' (simulating someone trying to get that user back up and running by creating it new, like a help desk would do for a VIP in a hurry).
6. Restore the deleted 'Sara Davis' user back to her previous OU (this will work because the DN's do not match and the recreated user is not really the restored one), using:

get-adobject -filter 'samaccountname -eq "saradavis"' -includedeletedobjects | restore-adobject -targetpath "ou=sales,dc=consolidatedmessenger,dc=com"

(note the 'illegal modify operation' error).

7. Despite the above error, the user account will in fact be restored successfully and will now exist in both the Sales OU and the Users container, with the same sAMAccountName and userPrincipalName.
8. Logon as SaraDavis using the NetBIOS-style name.
9. Logoff.
10. Note in DSA.MSC how 'Sara Davis' in the Sales OU now has a 'pre-windows 2000' logon name of $DUPLICATE-<something>.
11. Note how both copies of the user have the same UPN.
12. Logon with the UPN name of saradavis@consolidatedmessenger.com and note that this attribute does not get mangled.

Fun, eh? – Ned]

Question

Which customer in the world has the most number of objects in a production AD domain?

Answer

Without naming specific companies - I have to protect their privacy - the single largest “real” domain I have ever heard of had ~8 million user objects and nearly nothing else. It was used as auth for a web system. That was back in Windows 2000 so I imagine it’s gotten much bigger since then.

I have seen two other customers (inappropriately) use AD as a quasi-SQL database, storing several hundred million objects in it as ‘transactions’ or ‘records’ of non-identity data, while using a custom schema. This scaled fine for size but not for performance, as they were constantly writing to the database (sometimes at a rate of hundreds of thousands of new objects a day) and the NTDS.DIT is - naturally - optimized for reading, not writing.  The performance overall was generally terrible as you might expect. You can also imagine that promoting a new DC took some time (one of them called about how initial replication of a GC had been running for 3 weeks; we recommended IFM, a better WAN link, and to stop doing that $%^%^&@).

For details on both recommended and finite limits, see:

Active Directory Maximum Limits - Scalability

http://technet.microsoft.com/en-us/library/active-directory-maximum-limits-scalability(WS.10).aspx

The real limit on objects created per DC is 2,147,483,393 (or 2^31 minus 255). The real limit on users/groups/computers (security principals) in a domain is 1,073,741,823 (or 2^30 minus 1). If you find yourself getting close on the latter you need to open a support case immediately!

Question

Is a “Data Only” migration possible with USMT? I.e. no application settings or configuration is migrated, only files and folders.

Answer

Sure thing.

1. Generate a config file with:

scanstate.exe /genconfig:config.xml

2. Open that config.xml in Notepad, then search and replace “yes” with “no” (including the quotation marks) for all entries. Save that file. Do not delete the lines, or think that not including the config.xml has the same effect – that will lead to those rules processing normally. (An example of a flipped entry appears at the end of this answer.)

3. Run your scanstate, including config.xml and NOT including migapp.xml. For example:

scanstate.exe c:\store /config:config.xml /i:migdocs.xml /v:5

Normally, your scanstate log should be jammed with entries around the registry data migration:

Processing Registry HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings\AllowedDragImageExts [.ico]
Processing Registry HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings\AllowedDragImageExts [.jfif]
Processing Registry HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings\AllowedDragImageExts [.jpe]
Processing Registry HKCU\Control Panel\Accessibility\HighContrast
Processing Registry HKCU\Control Panel\Accessibility\HighContrast [Flags]
Processing Registry HKCU\Control Panel\Accessibility\HighContrast [High Contrast Scheme]

If you look through your log after using the steps above, none of those will appear.

You might also think that you could just rename the DLManifests and ReplacementManifests folders to get the same effect, and you’d almost be right. The problem is that Vista and Windows 7 also use the built-in %systemroot%\winsxs\manifests folders, and you certainly cannot remove those. Just go with the config.xml technique.
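To make that concrete, a flipped entry in the generated config.xml looks roughly like this -- the displayname and ID below are illustrative and will vary by OS:

<component displayname="Microsoft-Windows-IE-InternetExplorer" migrate="no" ID="http://www.microsoft.com/migration/1.0/migxmlext/cmi/..."/>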

Question

After we migrate SYSVOL from FRS to DFSR on Windows Server 2008 R2, we still see that the FRS service is set to automatic. Is it OK to disable it?

Answer

Absolutely. Once an R2 server stops replicating SYSVOL with FRS, it cannot use that service for any other data. If you try to start the FRS service or replicate with it, it will log events like these:

Log Name: File Replication Service
Source: NtFrs
Date: 1/6/2009 11:12:45 AM
Event ID: 13574
Task Category: None
Level: Error
Keywords: Classic
User: N/A
Computer: 7015-SRV-03.treyresearch.net
Description:
The File Replication Service has detected that this server is not a domain controller. Use of the File Replication Service for replication of non-SYSVOL content sets has been deprecated and therefore, the service has been stopped. The DFS Replication service is recommended for replication of folders, the SYSVOL share on domain controllers and DFS link targets.

Log Name: File Replication Service
Source: NtFrs
Date: 1/6/2009 2:16:14 PM
Event ID: 13576
Task Category: None
Level: Error
Keywords: Classic
User: N/A
Computer: 7015-srv-01.treyresearch.net
Description:
Replication of the content set "PUBLIC|FRS-REPLICATED-1" has been blocked because use of the File Replication Service for replication of non-SYSVOL content sets has been deprecated. The DFS Replication service is recommended for replication of folders, the SYSVOL share on domain controllers and DFS link targets.

We document this in the SYSVOL Replication Migration Guide but it’s easy to miss and a little confusing – this article applies to both R2 and Win2008, and Win2008 can still use FRS:

7. Stop and disable the FRS service on each domain controller in the domain unless you were using FRS for purposes other than SYSVOL replication. To do so, open a command prompt window and type the following commands, where <servername> is the Universal Naming Convention (UNC) path to the remote server:

Sc <servername> stop ntfrs

Sc <servername> config ntfrs start= disabled

Other stuff

Another week, another barrage of cloud.


(courtesy of the http://xkcd.com/ blog)

Finally… friggin’ Word. Play this at 720p, full screen.

 

----

Have a great weekend folks.

- Ned “waiting on yet more email from humor-impaired MS colleagues about his lack of professionalism” Pyle

What is the Impact of Upgrading the Domain or Forest Functional Level?


Hello all, Jonathan here again. Today, I want to address a question that we see regularly. As customers upgrade Active Directory, and they inevitably reach the point where they are ready to change the Domain or Forest Functional Level, they sometimes become fraught with worry. Why is this necessary? What does this mean? What’s going to happen? How can this change be undone?

What Does That Button Do?

Before these questions can be properly addressed, it must first be understood exactly what purpose the Domain and Forest Functional Levels serve. Each new version of Active Directory on Windows Server incorporates new features that can only be taken advantage of when all domain controllers (DCs) in either the domain or forest have been upgraded to the same version. For example, Windows Server 2008 R2 introduces the AD Recycle Bin, a feature that allows the Administrator to restore deleted objects from Active Directory. In order to support this new feature, changes were made in the way that delete operations are performed in Active Directory, changes that are only understood and adhered to by DCs running on Windows Server 2008 R2. In mixed domains, containing both Windows Server 2008 R2 DCs as well as DCs on earlier versions of Windows, the AD Recycle Bin experience would be inconsistent, as deleted objects may or may not be recoverable depending on the DC on which the delete operation occurred. To prevent this, a mechanism is needed by which certain new features remain disabled until all DCs in the domain, or forest, have been upgraded to the minimum OS level needed to support them.

After upgrading all DCs in the domain, or forest, the Administrator is able to raise the Functional Level, and this Level acts as a flag informing the DCs, and other components as well, that certain features can now be enabled. You'll find a complete list of Active Directory features that have a dependency on the Domain or Forest Functional Level here:

Appendix of Functional Level Features
http://technet.microsoft.com/en-us/library/understanding-active-directory-functional-levels(WS.10).aspx

There are two important restrictions of the Domain or Forest Functional Level to understand, and once they are, these restrictions are obvious. Once the Functional Level has been upgraded, new DCs running on downlevel versions of Windows Server cannot be added to the domain or forest. The problems that might arise when installing downlevel DCs become pronounced with new features that change the way objects are replicated (e.g., Linked Value Replication). To prevent these issues from arising, a new DC must be at the same level, or greater, than the functional level of the domain or forest.

The second restriction, for which there is a limited exception on Windows Server 2008 R2, is that once upgraded, the Domain or Forest Functional Level cannot later be downgraded. The only purpose that having such ability would serve would be so that downlevel DCs could be added to the domain. As has already been shown, this is generally a bad idea.

Starting in Windows Server 2008 R2, however, you do have a limited ability to lower the Domain or Forest Functional Levels. The Windows Server 2008 R2 Domain or Forest Functional level can be lowered to Windows Server 2008, and no lower, if and only if none of the Active Directory features that require a Windows Server 2008 R2 Functional Level has been activated. You can find details on this behavior - and how to revert the Domain or Forest Functional Level - here.
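On Windows Server 2008 R2, that limited rollback is a one-liner per level with the Active Directory PowerShell module -- a sketch that only succeeds if no R2-only features (like the AD Recycle Bin) have been enabled:

Import-Module ActiveDirectory
Set-ADForestMode -Identity (Get-ADForest) -ForestMode Windows2008Forest
Set-ADDomainMode -Identity (Get-ADDomain) -DomainMode Windows2008Domain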

What Happens Next?

Another common question: what impact does changing the Domain or Forest Functional Level have on enterprise applications like Exchange or Lync, or on third-party applications? First, new features that rely on the Functional Level are generally limited to Active Directory itself. For example, objects may replicate in a new and different way, aiding in the efficiency of replication or increasing the capabilities of the DCs. There are exceptions that reach beyond Active Directory, such as allowing DFSR to replace NTFRS for SYSVOL replication, but those depend on the version of the operating system rather than on Active Directory alone. Regardless, changing the Domain or Forest Functional Level should have no impact on an application that depends on Active Directory.

Let's fall back on a metaphor. Imagine that Active Directory is just a big room. You don't actually know what is in the room, but you do know that if you pass something into the room through a slot in the locked door you will get something returned to you that you could use. When you change the Domain or Forest Functional Level, what you can pass in through that slot does not change, and what is returned to you will continue to be what you expect to see. Perhaps some new slots added to the door through which you pass in different things, and get back different things, but that is the extent of any change. How Active Directory actually processes the stuff you pass in to produce the stuff you get back, what happens behind that locked door, really isn't relevant to you.

If you carry this metaphor forward into the real world, if an application like Exchange uses Active Directory to store its objects, or to perform various operations, none of that functionality should be affected if the Domain or Forest Functional Mode changes. In fact, if your applications are also written to take advantage of new features introduced in Active Directory, you may find that the capabilities of your applications increase when the Level changes.

The answer to the question about the impact of changing the Domain or Forest Functional Level is that there should be no impact. If you still have concerns about any third party applications, then you should contact the vendor to find out if they tested the product at the proposed Level, and if so, with what result. The general expectation, however, should be that nothing will change. Besides, you do test your applications against proposed changes to your production AD, do you not? Discuss any issues with the vendor before engaging Microsoft Support.

Where’s the Undo Button?

Even after all this, however, there is a great concern about the change being irreversible, so that you must have a rollback plan just in case something unforeseen and catastrophic occurs to Active Directory. This is another common question, and there is a supported mechanism to restore the Domain or Forest Functional Level. You take a System State backup of one DC in each domain in the forest. To recover, flatten all the DCs in the forest, restore one for each domain from the backup, and then DCPROMO the rest back into their respective domains. This is a Forest Restore, and the steps are outlined in detail in the following guide:

Planning for Active Directory Forest Recovery
http://technet.microsoft.com/en-us/library/planning-active-directory-forest-recovery(WS.10).aspx

By the way, do you know how often we’ve had to help a customer perform a complete forest restore because something catastrophic happened when they raised the Domain or Forest Functional Level? Never.

Best Practices

What can be done prior to making this change to ensure that you have as few issues as possible? Actually, there are some best practices here that you can follow:

1. Verify that all DCs in the domain are, at a minimum, at the OS version to which you will raise the functional level. Yes… I know this sounds obvious, but you’d be surprised. What about that DC that you decommissioned but for which you failed to perform metadata cleanup? Yes, this does happen.
Another good one that is not so obvious is the Lost and Found container in the Configuration container. Is there an NTDS Settings object in there for some downlevel DC? If so, that will block raising the Domain Functional Level, so you’d better clean that up.

2. Verify that Active Directory is replicating properly to all DCs. The Domain and Forest Functional Levels are essentially just attributes in Active Directory. The Domain Functional Level for all domains must be properly replicated before you’ll be able to raise the Forest Functional level. This practice also addresses the question of how long one should wait to raise the Forest Functional Level after you’ve raised the Domain Functional Level for all the domains in the forest. Well…what is your end-to-end replication latency? How long does it take a change to replicate to all the DCs in the forest? Well, there’s your answer.
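Repadmin.exe makes quick work of that second check:

C:\>repadmin /replsummary
C:\>repadmin /showrepl * /csv >showrepl.csv

The first command summarizes replication health for every DC; the second dumps each DC’s inbound replication status to a file you can sort in a spreadsheet.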

Best practices are covered in the following article:

322692 How to raise Active Directory domain and forest functional levels
http://support.microsoft.com/default.aspx?scid=kb;EN-US;322692

There, you’ll find some tools you can use to properly inventory your DCs, and validate your end-to-end replication.

Update: Woo, we found an app that breaks! It has a hotfix though (thanks Paolo!). Make sure you install this everywhere if you are using .NET 3.5 applications that implement the DomainMode enumeration function.

FIX: "The requested mode is invalid" error message when you run a managed application that uses the .NET Framework 3.5 SP1 or an earlier version to access a Windows Server 2008 R2 domain or forest  
http://support.microsoft.com/kb/2260240

Conclusion

To summarize, the Domain and Forest Functional Levels are flags that tell Active Directory and other Windows components that all DCs in the domain or forest are at a certain minimum level. When that occurs, new features that require a minimum OS on all DCs are enabled and can be leveraged by the Administrator. Older functionality is still supported, so any applications or services that used those functions will continue to work as before -- queries will be answered, domain and forest trusts will remain valid, and all should remain right with the world. This projection is supported by over eleven years of customer issues, not one of which identifies a Domain or Forest Functional Level change as the root cause. There are only cases of a Domain or Forest Functional Level increase failing because the prerequisites had not been met; overwhelmingly, those cases end with the customer's Active Directory being successfully upgraded.

If you want to read more about Domain or Forest Functional Levels, review the following documentation:

What Are Active Directory Functional Levels?
http://technet.microsoft.com/en-us/library/cc787290(WS.10).aspx

Functional Levels Background Information
http://technet.microsoft.com/en-us/library/cc738038(WS.10).aspx

Jonathan “Con-Function Junction” Stephens


AskDS is 12,614,400,000,000,000 shakes old


It’s been four years and 591 posts since AskDS reached critical mass. You’d hope our party would look like this: 

image

But it’s more likely to be:

image

Without you, we’d be another of those sites that glow red hot, go supernova, then collapse into a white dwarf. We really appreciate your comments, questions, and occasional attaboys. Hopefully we’re good for another year of insightful commentary.

Thanks readers.

The AskDS Contributors

Friday Mail Sack: Super Slo-Mo Edition


Hello folks, Ned here again with another Mail Sack. Before I get rolling though, a quick public service announcement:

Plenty of you have downloaded the Windows 8 Developer Preview and are knee-deep in the new goo. We really want your feedback, so if you have comments, please use one of the following avenues:

I recommend sticking to IT Pro features; the consumer side’s covered and the biggest value is your Administrator experience. The NDA is not off - I still cannot comment on the future of Windows 8 or tell you if we already have plans to do X with Y. This is a one-way channel from you to us (to the developers).

Cool? On to the sack. This week we discuss:

Shake it.

Question

We were chatting here about password synchronization tools that capture password changes on a DC and send the clear text password to some third party app. I consider that a security risk...but then someone asked me how the password is transmitted between a domain member workstation and a domain controller when the user performs a normal password change operation (CTRL+ALT+DEL and Change Password). I suppose the client uses some RPC connection, but it would be great if you could point me to a reference.

Answer

Windows can change passwords many ways - it depends on the OS and the component in question.

1. For the specific case of using CTRL+ALT+DEL because your password has expired or you just felt like changing your password:

If you are using a modern OS like Windows 7 with AD, the computer uses the Kerberos protocol end to end. This starts with a normal AS_REQ logon, but to a special service principal name of kadmin/changepw, as described in http://www.ietf.org/rfc/rfc3244.txt.

The computer first contacts a KDC over port 88, then communicates over port 464 to send along the special AP_REQ and AP_REP. You are still using Kerberos cryptography and sending an encrypted payload containing a KRB_PRIV message with the password. Therefore, to get to the password, you have to defeat Kerberos cryptography itself, which means defeating the crypto and defeating the key derived from the cryptographic hash of the user's original password. Which has never happened in the history of Kerberos.

image

The parsing of this kpasswd traffic is currently broken in NetMon's latest public parsers, but even when you parse it in Wireshark, all you can see is the encryption type and a payload of encrypted goo. For example, here is that Windows 7 client talking to a Windows Server 2008 R2 DC, which means AES-256:

image
Aka: Insane-O-Cryption ™

On the other hand, if using a crusty OS like Windows XP, you end up using a legacy password mechanism that worked with NT 4.0 – in this case SamrUnicodeChangePasswordUser2 (http://msdn.microsoft.com/en-us/library/cc245708(v=PROT.10).aspx).

XP also supports the Kerberos change mechanism, but by default uses NTLM with CTRL+ALT+DEL password changes. Witness:

image

This uses “RPC over SMB with Named Pipes” with RPC packet privacy. You are using NTLM v2 by default (unless you set LMCompatibility unwisely) and you are still double-protected (the payload and packets), which makes it relatively safe. Definitely not as safe as Win7 though – just another reason to move forward.

image

You can disable NTLM in the domain if you have Win2008 R2 DCs and XP is smart enough to switch to using Kerberos here:

image

... but you are likely to break many other apps. Better to get rid of Windows XP.

2. A lot of administrative code uses SamrSetInformationUser2, which does not require knowing the user’s current password (http://msdn.microsoft.com/en-us/library/cc245793(v=PROT.10).aspx). For example, when you use NET USER to change a domain user’s password:

image

This invokes SamrSetInformationUser2 to set Internal4InformationNew data:

image

So, doubly protected (a cryptographically generated, key-signed hash covered by an encrypted payload). This is also “RPC over SMB using Named Pipes”:

image

The crypto for the encrypted payload is derived from a key signed using the underlying authentication protocol, seen from a previous session setup frame (negotiated as Kerberos in this case):

image

3. The legacy mechanisms to change a user password are NetUserChangePassword (http://msdn.microsoft.com/en-us/library/windows/desktop/aa370650(v=vs.85).aspx) and IADsUser::ChangePassword (http://msdn.microsoft.com/en-us/library/windows/desktop/aa746341(v=vs.85).aspx)

4. A local user password change usually involves SamrUnicodeChangePasswordUser2, SamrChangePasswordUser, or SamrOemChangePasswordUser2 (http://msdn.microsoft.com/en-us/library/cc245705(v=PROT.10).aspx).

There are other ways but those are mostly corner-case.

Note: In my examples, I am using the most up to date Netmon 3.4 parsers from http://nmparsers.codeplex.com/.

Question

If I try to remove the AD Domain Services role using ServerManager.msc, it blocks me with this message:

image

But if I remove the role using Dism.exe, it lets me continue:

image

This completely hoses the DC and it no longer boots normally. Is this a bug?

And - hypothetically speaking, of course - how would I fix this DC?

Answer

Don’t do that. :)

Not a bug, this is expected behavior. Dism.exe is a pure servicing tool; it knows nothing more of DCs than the Format command does. ServerManager and servermanagercmd.exe are the tools that know what they are doing.
Update: As Artem points out in the comments, we want you to use the Server Manager PowerShell cmdlets and not servermanagercmd, which is on its way out.

To fix your server, pick one:

  • Boot it into DS Repair Mode with F8 and restore your system state non-authoritatively from backup (you can also perform a bare metal restore if you have that capability - no functional difference in this case). If you do not have a backup and this is your only DC, update your résumé.
  • Boot it into DS Repair Mode with F8 and use dcpromo /forceremoval to finish what you started. Then perform metadata cleanup. Then go stand in the corner and think about what you did, young man!

Question

We are getting Event ID 4740s (account lockout) for the AD Guest account throughout the day, which is raising alerts in our audit system. The Guest account is disabled, expired, and even renamed. Yet various clients keep locking out the account and creating the 4740 event. I believe I've traced it back to the occasional attempt by a local account to authenticate to the domain. Any thoughts?

Answer

You'll see that when someone has set a complex password on the Guest account, using NET USER for example, rather than leaving it null as by default. The clients never know what the Guest password is; they always assume it's null like the default - so if you set a password on it, they will fail. Fail enough and you lock out the account (unless you turn that policy off and replace it with intrusion detection and two-factor auth). Set it back to null and you should be OK. As you suspected, there are a number of times when Guest is used as part of a "well, let's try that" algorithm:

Network access validation algorithms and examples for Windows Server 2003, Windows XP, and Windows 2000

To set it back, use the Reset Password menu in Dsa.msc on the Guest account, making sure not to set a password, and click OK. You may have to adjust your domain password policy temporarily to allow this.
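If you prefer to script the reset, here's a minimal sketch using the AD PowerShell module (this assumes your password policy temporarily allows blank passwords; adjust the identity if you renamed the account):

Import-Module ActiveDirectory
# An empty SecureString represents the null password the clients expect
Set-ADAccountPassword -Identity Guest -Reset -NewPassword (New-Object System.Security.SecureString)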

As for why it's "locking out" even though it's disabled and renamed:

  • It has a well-known SID (S-1-5-21-domain-501) so renaming doesn’t really do anything except tick a checkbox on some auditor's clipboard
  • Disabled accounts can still lock out if you keep sending bad passwords to them. Usually no one notices though, and most people are more concerned about the "account is disabled" message they see first.

Question

What are the steps to change the "User Account" password set when the Network Device Enrollment Service (NDES) is installed?

Answer

When you first install the Network Device Enrollment Service (NDES), you have the option of setting the identity under which the application pool runs to the default application pool identity or to a specific user account. I assume that you selected the latter. The process to change the password for this user account requires two steps -- with 27 parts (not really…).

  1. First, you must reset the user account's password in Active Directory Users and Computers.

  2. Next, you must change the password configured in the application pool Advanced Settings on the NDES server.

a. In IIS manager, expand the server name node.

b. Click on Application Pools.

c. On the right, locate and highlight the SCEP application pool.

image

d. In the Action pane on the right, click on Advanced Settings....

e. Under Process Model click on Identity, then click on the … button.

image

f. In the Application Pool Identity dialog box, select Custom account and then click on Set….

g. Enter the custom application pool account name, and then set and confirm the password. Click Ok, when finished.

image

h. Click Ok, and then click Ok again.

i. Back on the Application Pools page, verify that SCEP is still highlighted. In the Action pane on the right, click on Recycle….

j. You are done.
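If you'd rather script step 2, something like this with appcmd.exe should do it (a sketch; the account name and password are illustrative):

%windir%\system32\inetsrv\appcmd set apppool "SCEP" /processModel.identityType:SpecificUser /processModel.userName:"CONTOSO\svc-ndes" /processModel.password:"NewP@ssw0rd"
%windir%\system32\inetsrv\appcmd recycle apppool "SCEP"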

Normally, you would have to be careful when simply resetting the password of any service account to which digital certificates have been assigned. This is because resetting the password can result in the account losing access to the private keys associated with those certificates. In the case of NDES, however, the certificates used by the NDES service are actually stored in the local computer's Personal store, and the custom application pool identity only has read access to those keys. Resetting the password of the custom application pool account will have no impact on the master key used to protect the NDES private keys.

[Courtesy of Jonathan, naturally - Neditor]

Question

If I have only one domain in my forest, do I need a Global Catalog? Plenty of documents imply this is the case.

Answer

All those documents saying "multi-domain only" are mistaken. You need GCs - even in a single-domain forest - for the following:

(Update: Correction on single-domain forest logon made, thanks for catching that Yusuf! I also added a few more breakage scenarios)

  • Perversely, if you have enabled IgnoreGCFailures (http://support.microsoft.com/kb/241789): turning it on removes universal groups from the user security token when no GC is available, meaning users will logon but won't be able to access resources they previously accessed fine.
  • If your users logon with UPNs and try to change their password (they can still logon in a single domain forest with UPN or NetBiosDomain\SamAccountName style logons).
  • Even if you use Universal Group Membership Caching to avoid the need for a GC in a site, that DC needs a GC to update the cache.
  • MS Exchange is deployed (all versions - the Exchange services won't even start without a GC).
  • Using the built-in Find in the shell to search AD for published shares, published DFS links, or published printers will fail, as will any object picker dialog that offers the "entire directory" option.
  • DPM agent installation will fail.
  • AD Web Services (aka AD Management Gateway) will fail.
  • CRM searches will fail.
  • Probably other third parties of which I'm not aware.

We stopped recommending that customers use only handfuls of GCs years ago - if you get an ADRAP or call MS support, we will recommend you make all DCs GCs unless you have an excellent reason not to. Our BPA tool states that you should have at least one GC per AD site: http://technet.microsoft.com/en-us/library/dd723676(WS.10).aspx.
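To quickly inventory which of your DCs are already GCs, a one-liner with the AD PowerShell module (a sketch):

Get-ADDomainController -Filter { IsGlobalCatalog -eq $true } | Format-Table Name, Site, IPv4Address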

Question

If I use DFSR to replicate a folder containing symbolic links, will this replicate the source files or the actual symlinks? The DFSR FAQ says symlink replication is supported under certain circumstances.

Answer

The symlink replicates; however, the underlying data does not replicate just because there is a symlink. If the data is not stored within the RF, you end up with a replicated symlink to nowhere:

Server 1, replicating a folder called c:\unfiltersub. Note how the symlink points to a file that is not in the scope of replication:

image

Server 2, the symlink has replicated - but naturally, it points to an un-replicated file. Boom:

image

If the source data is itself replicated, you’re fine. There’s no real way to guarantee that though, except preventing users from creating files outside the RF by using permissions and FSRM screens. If your end users can only access the data through a share, they are in good shape. I'd imagine they are not the ones creating symlinks though. ;-)
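If you want to reproduce the scenario above in a lab, the symlink itself is just mklink from an elevated prompt (paths are illustrative):

rem Symlink inside the replicated folder, target outside the RF - the recipe for a link to nowhere
mklink c:\unfiltersub\mysymlink.txt c:\somewhereelse\realfile.txt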

Question

I read your post on career development. There are many memory techniques and I know everyone is different, but what do you use?

[A number of folks asked this question - Neditor]

Answer

When I was younger, it just worked - if I was interested in it, I remembered it. As I get older and burn more brain cells though, I find that my best memory techniques are:

  • Periodic skim and refresh. When I have learned something through deep reading and hands-on work, I try to skim through core topics at least once a year. For example, I force myself to scan the diagrams in all the Win2003 Technical Reference A-Z sections, and if I can't remember what a diagram is saying, I make myself read that section in detail. I don't let myself get too stale on anything and try to jog it often.
  • Mix up the media. When learning a topic, I read, find illustrations, and watch movies and demos. When there are no illustrations, I use Visio to make them for myself based on reading. When there are no movies, I make myself demo the topics. My brain seems to retain more info when I hit it with different styles on the same subject.
  • I teach and publicly write about things a lot. Nothing hones your memory like trying to share info with strangers, as the last thing I want is to look like a dope. It makes me prepare and check my work carefully, and that natural repetition – rather than forced “read flash cards”-style repetition – really works for me. My brain runs best under pressure.
  • Your body is not a temple (of Gozer worshipers). Something of a cliché, but I gobble vitamins, eat plenty of brain foods, and work out at least 30 minutes every morning.

I hope this helps and isn’t too general. It’s just what works for me.

Other Stuff

Have $150,000 to spend on a camera, a clever director who likes FPS gaming, and some very fit paintballers? Go make a movie better than this. Watch it multiple times.

image
Once for the chat log alone

Best all-around coverage of the Frankfurt Auto Show here, thanks to Jalopnik.

image
Want!

The supposedly 10 Coolest Death Scenes in Science Fiction History. But any list not including Hudson’s last moments in Aliens is fail.

If it’s true… holy crap! Ok, maybe it wasn’t true. Wait, HOLY CRAP!

So many awesome things combined.

Finally, my new favorite time waster is Retronaut. How can you not like a website with things like “Celebrities as Russian Generals”.

image
No, really.

Have a nice weekend folks,

- Ned “Oh you want some of this?!?!” Pyle

Friday Mail Sack: Guest Reply Edition


Hi folks, Ned here again. This week we talk:

Let's gang up.

Question

We plan to migrate our Certificate Authority from single-tier online Enterprise Root to two-tier PKI. We have an existing smart card infrastructure. TechNet docs don’t really speak to this scenario in much detail.

1. Does migration to a 2-tier CA structure require any customization?

2. Can I keep the old CA?

3. Can I create a new subordinate CA under the existing CA and take the existing CA offline?

Answer

[Provided by Jonathan Stephens, the Public Keymaster- Editor]

We covered this topic in a blog post, and it should cover many of your questions: http://blogs.technet.com/b/askds/archive/2010/08/23/moving-your-organization-from-a-single-microsoft-ca-to-a-microsoft-recommended-pki.aspx.

Aside from that post, you will also find the following information helpful: http://blogs.technet.com/b/pki/archive/2010/06/19/design-considerations-before-building-a-two-tier-pki-infrastructure.aspx.

To your questions:

  1. While you can migrate an online Enterprise Root CA to an offline Standalone Root CA, that probably isn't the best decision in this case with regard to security. Your current CA has issued all of your smart card logon certificates, which may have been fine when that was all you needed, but it certainly doesn't comply with best practices for a secure PKI. The root CA of any PKI should be long-lived (20 years, for example) and should only issue certificates to subordinate CAs. In a 2-tier hierarchy, the second tier of CAs has much shorter validity periods (5 years) and is responsible for issuing certificates to end entities. In your case, I'd strongly consider setting up a new PKI and migrating your organization over to it. It is more work at the outset, but it is a better decision long term.
  2. You can keep the currently issued certificates working by publishing a final, long-lived CRL from the old CA (see the sketch after this list). This is covered in the first blog post above. This would allow you to slowly migrate your users to smart card logon certificates issued by the new PKI as the old certificates expire. You would also need to continue to publish the old root CA certificate in AD and in the Enterprise NTAuth store. You can see these stores using the Enterprise PKI snap-in: right-click on Enterprise PKI and select Manage AD Containers. The old root CA certificate should be listed in the NTAuthCertificates tab, and in the Certificate Authorities Container tab. Uninstalling the old CA will remove these certificates; you'll need to add them back.
  3. You can't take an Enterprise CA offline. An Enterprise CA requires access to Active Directory in order to function. You can migrate an Enterprise CA to a Standalone CA and take that offline, but, as I've said before, that really isn't the best option in this case.
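For the long-lived CRL in point 2, the mechanics on the old CA look roughly like this (a sketch; see the linked post for the full procedure, and pick a validity that outlives your longest outstanding certificate):

certutil -setreg CA\CRLPeriodUnits 10
certutil -setreg CA\CRLPeriod "Years"
net stop certsvc && net start certsvc
certutil -crl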

Question

Are there any known issues with P2V'ing ADAM/AD LDS servers?

Answer

[Provided by Kim Nichols, our resident ADLDS guru'ette - Editor]

No problems as far as we know. The same rules apply as when P2V'ing DCs or other roles; make sure you clean up old drivers and decommission the physical machines as soon as you are reasonably confident the virtual is working. Never let them run simultaneously. All the “I should have had a V-8” stuff.

Considering how simple it is to create an ADLDS replica, it might be faster and "cleaner" to create a new virtual machine, install and replicate ADLDS to it, then rename the guest and throw away the old physical; if ADLDS was its only role, naturally.

Question

[Provided by Fabian Müller, clever German PFE - Editor]

When using production delegation in AGPM, we can grant permissions for editing Group Policy Objects in the production environment. But these permissions are written to all deployed GPOs, not to specific ones. GPMC makes it easy to set “READ” and “APPLY” permissions on a GPO, but I cannot find a security filtering switch in AGPM. So how can we manage security filtering on group policies without setting the same ACL on all deployed policies?

Answer

OK, granting “READ” and “APPLY” permissions - that is, managing security filtering - in AGPM is not that obvious to find. Do it like this in the change control panel of AGPM:

  • Check out the relevant Group Policy Object and provide a brief overview of the changes to be made in the “comments” window, e.g. “Add important security filtering ACLs for group XYZ, dude!”
  • Edit the checked-out GPO

  • At the top of the Group Policy Management Editor, click “Action” –> “Properties”:

image

  • Change to “Security” tab and provide your settings for security filtering:

image

  • Close the Group Policy Management Editor and Check-in the policy (again with a good comment)
  • Once everything is done, you can safely “Deploy” the just-edited GPO – now the security filter is in place in production:

image

Note 1: Be aware that you won’t find any information regarding the security filtering change in the AGPM history of the edited Group Policy Object. There is nothing in the HTML reports that refers to security filtering changes. That’s why you should provide a good explanation of your changes during the “check-in” and “check-out” phases:

image

image

Note 2: Be careful with “DENY” ACEs using AGPM – they might get removed. See the following blog for more information on that topic: http://blogs.technet.com/b/grouppolicy/archive/2008/10/27/agpm-3-0-doesnt-preserve-all-the-deny-aces.aspx

Question

I have one Windows Server 2003 IIS machine with two web applications, each in its own application pool. How can I register SPNs for each application?

Answer

[This one courtesy of Rob Greene, the Abominable Authman - Editor]

There are a couple of options for you here.

  1. You could address each web site on the same server with different host names.  Then you can add the specific HTTP SPN to each application pool account as needed.
  2. You could address each web site with a unique port assignment on the web server.  Then you can add the specific HTTP SPN with the port attached like http/myweb.contoso.com:88
  3. You could use the same account to run all the application pools on the same web server.

NOTE: If you choose option 1 or 2, you have to be careful about Internet Explorer behaviors. If you choose a unique host name per web site, make sure to use HOST (A) records in DNS, or put a registry key in place on all workstations if you choose CNAME records. If you choose a unique port for each web site, you will need a registry key on all workstations so that they send the port number in the TGS SPN request.

http://blogs.technet.com/b/askds/archive/2009/06/22/internet-explorer-behaviors-with-kerberos-authentication.aspx
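Whichever of the first two options you pick, the SPN registration itself is one setspn line per application pool account (a sketch; names are illustrative, and use -A instead of -S on older setspn versions):

setspn -S http/app1.contoso.com CONTOSO\svcPool1
setspn -S http/app2.contoso.com:8080 CONTOSO\svcPool2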

Question

Comparing AGPM controlled GPOs within the same domain is no problem at all – but if the AGPM server serves more than one domain, how can I compare GPOs that are hosted in different domains using AGPM difference report?

Answer

[Again from Fabian, who was really on a roll last week - Editor]

Since AGPM 4.0 we provide the ability to export and import Group Policy Objects using AGPM. What you have to do is:

  • To export one of the GPOs from domain 1…:

image

  • … and import the *.cab to domain 2 using the AGPM GPO import wizard (right-click on an empty area in AGPM Contents—> Controlled tab and select “New Controlled GPO…”):

image

image

  • Now you can simply compare those objects using difference report:

image

[Woo, finally some from Ned - Editor]

Question

When I use the Windows 7 (RSAT) version of AD Users and Computers to connect to certain domains, I get error "unknown user name or bad password". However, when I use the XP/2003 adminpak version, no errors for the same domain. There's no way to enter a domain or password.

Answer

ADUC in Vista/2008/7/R2 does some group membership and privilege checking when it starts that the older ADUC never did. You’ll get the logon failure message for any domain you are not a domain admin in, for example. The legacy ADUC is probably broken for that account as well – it’s just not telling you.

image

Question

I have 2 servers replicating with DFSR, and the network cable between them is disconnected. I delete a file on Server1, while the equivalent file on Server2 is modified. When the cable is re-connected, what is the expected behavior?

Answer

Last updater wins, even if that update is a modification of an ostensibly deleted file. If the file was deleted first on server 1 and modified later on server 2, it replicates back to server 1 with the modifications once the network reconnects. If it had been deleted later than the modification, that “last write” would win and the file would be deleted from the other server once the network resumed.

More info on DFSR conflict handling here http://blogs.technet.com/b/askds/archive/2010/01/05/understanding-dfsr-conflict-algorithms-and-doing-something-about-conflicts.aspx

Question

Is there any automatic way to delete stale user or computer accounts? Something you turn on in AD?

Answer

Nope, not automatically; you have to create a solution that detects the age and disables or deletes stale accounts. This is a very dangerous operation - make sure you understand what you are getting yourself into.
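If you do roll your own, start read-only and disable before you ever delete. A minimal sketch with the AD PowerShell module (the 180-day window is illustrative; remember the lastLogonTimeStamp accuracy caveats):

Import-Module ActiveDirectory
# Preview the computer accounts inactive for ~180 days; -WhatIf means nothing actually changes
Search-ADAccount -AccountInactive -TimeSpan 180.00:00:00 -ComputersOnly | Disable-ADAccount -WhatIf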

Question

Whenever I try to use the PowerShell cmdlet Get-Acl against an object in AD, I always get an error like "Cannot find path ou=xxx,dc=xxx,dc=xxx because it does not exist". But it does!

Answer

After you import the ActiveDirectory module, but before you run your commands, run:

CD AD:

Get-Acl won’t work until you change to the magical “active directory drive”.
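End to end, it looks something like this (the DN is illustrative):

Import-Module ActiveDirectory
cd AD:
Get-Acl "OU=Sales,DC=contoso,DC=com" | Format-List Owner, Access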

Question

I've read the Performance Tuning Guidelines for Windows Server, and I wonder if all the SMB server tuning parameters (AsyncCredits, MinCredits, MaxCredits, etc.) also work (or help) for DFSR. Also, do you know what the limit is for SMB Asynchronous Credits? The document doesn't say.

Answer

Nope, they won’t have any effect on DFSR – it does not use SMB to replicate files. SMB is only used by the DFSMGMT.MSC if you ask it to create a replicated folder on another server during RF setup. More info here:

Configuring DFSR to a Static Port - The rest of the story - http://blogs.technet.com/b/askds/archive/2009/07/16/configuring-dfsr-to-a-static-port-the-rest-of-the-story.aspx

That AsynchronousCredits SMB value does not have a true maximum, other than the fact that it is a DWORD and cannot exceed 4,294,967,295 (i.e. 0xffffffff). Its default value on Windows Server 2008 and 2008 R2 is 512; on Vista/7, it's 64.

HOWEVER!

As KB938475 (http://support.microsoft.com/kb/938475) points out, adjusting these defaults comes at the cost of paged pool (Kernel) memory. If you were to increase these values too high, you would eventually run out of paged pool and then perhaps hang or crash your file servers. So don't go crazy here.

There is no "right" value to set - it depends on your installed memory, if you are using 32-bit versus 64-bit (if 32-bit, I would not touch this value at all), the number of clients you have connecting, their usage patterns, etc. I recommend increasing this in small doses and testing the performance - for example, doubling it to 1024 would be a fairly prudent test to start.

Other Stuff

Happy Birthday to all US Marines out there, past and present. I hope you're using Veterans Day to sleep off the hangover. I always assumed that's why they made it November 11th, not that whole WW1 thing.

Also, happy anniversary to Jonathan, who has been a Microsoft employee for 15 years. In keeping with tradition, he brought 15 pounds of M&Ms for the floor, which, in case you're wondering, fills a salad bowl. Which around here, means:

image

Two of the most awesome things ever – combined:

A great baseball story about Lou Gehrig, Kurt Russell, and a historic bat.

Off to play some Battlefield 3. No wait, Skyrim. Ah crap, I mean Call of Duty MW3. And I need to hurry up as Arkham City is coming. It's a good time to be a PC gamer. Or Xbox, if you're into that sorta thing.

 

Have a nice weekend folks,

 - Ned "and Jonathan and Kim and Fabian and Rob" Pyle

Friday Mail Sack: Best Post This Year Edition


Hi folks, Ned here and welcoming you to 2012 with a new Friday Mail Sack. Catching up from our holiday hiatus, today we talk about:

So put down that nicotine gum and get to reading!

Question

Is there an "official" stance on removing built-in admin shares (C$, ADMIN$, etc.) in Windows? I’m not sure this would make things more secure or not. Larry Osterman wrote a nice article on its origins but doesn’t give any advice.

Answer

The official stance is from the KB that states how to do it:

Generally, Microsoft recommends that you do not modify these special shared resources.

Even better, here are many things that will break if you do this:

Overview of problems that may occur when administrative shares are missing
http://support.microsoft.com/default.aspx?scid=kb;EN-US;842715

That’s not a complete list; it wasn’t updated for Vista/2008 and later. It’s so bad though that there’s no point, frankly. Removing these shares does not increase security, as only administrators can use those shares and you cannot prevent administrators from putting them back or creating equivalent custom shares.

This is one of those “don’t do it just because you can” customizations.

Question

The Windows PowerShell Get-ADDomainController cmdlet finds DCs, but not much actual attribute data from them. The examples on TechNet are not great. How do I get it to return useful info?

Answer

You have to use another cmdlet in tandem, without pipelining: Get-ADComputer. The Get-ADDomainController cmdlet is good mainly for searching. The Get-ADComputer cmdlet, on the other hand, does not accept pipeline input from Get-ADDomainController. Instead, you use a pseudo “nested function” to first find the PDC, then get data about that DC. For example (this is all one command, wrapped):

get-adcomputer (get-addomaincontroller -Discover -Service "PrimaryDC").name -property * | format-list operatingsystem,operatingsystemservicepack

When you run this, PowerShell first processes the command within the parentheses, which finds the PDC. Then it runs get-adcomputer, using the “Name” property returned by get-addomaincontroller. Then it passes the results through the pipeline to be formatted. So it's 1, 2, 3.

Voila. Here I return the OS of the PDC, all without having any idea which server actually holds that role:

clip_image002[6]

Moreover, before the Internet clubs me like a baby seal: yes, a more efficient way to return data is to ensure that the –property list contains only those attributes desired:

image
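In other words, something like this (a sketch of what the screenshot shows):

get-adcomputer (get-addomaincontroller -Discover -Service "PrimaryDC").name -property operatingsystem,operatingsystemservicepack | format-list operatingsystem,operatingsystemservicepack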

Get-ADDomainController can find all sorts of interesting things via its –service argument:

PrimaryDC
GlobalCatalog
KDC
TimeService
ReliableTimeService
ADWS

The Get-ADDomain cmdlet can also find FSMO role holders and other big-picture domain stuff - for example, the RID Master you need to monitor.
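For example (a sketch; both cmdlets ship in the AD module):

Get-ADDomain | Format-List PDCEmulator, RIDMaster, InfrastructureMaster
Get-ADForest | Format-List SchemaMaster, DomainNamingMaster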

Question

I know about Kerberos “token bloat” with user accounts that are members of too many groups. Does this also affect computers added to too many groups? What would be some practical effects of that? We want to use a lot of them in the near future for some application … stuff.

Answer

Yes, things will break. To demonstrate, I used PowerShell to create 2000 groups in my domain and added a computer named “7-01” to them:

image
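If you want to repro this in a lab, here's a minimal sketch of what I ran (OU path and group names are illustrative; please don't do this in production):

Import-Module ActiveDirectory
$computer = Get-ADComputer "7-01"
1..2000 | ForEach-Object {
    New-ADGroup -Name "TokenBloat$_" -GroupScope Global -Path "OU=Test,DC=contoso,DC=com"
    Add-ADGroupMember -Identity "TokenBloat$_" -Members $computer
}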

I then restarted the 7-01 computer. Uh oh, the System Event log is un-pleased. At this point, 7-01 is no longer applying computer group policy, getting startup scripts, or allowing any of its services to logon remotely to DCs:

image 

Oh, and check out this gem:

image

I’m sure no one will go on a wild goose chase after seeing that message. Applications will be freaking out even more, likely with the oh-so-helpful error 0x80090350:

“The system detected a possible attempt to compromise security. Please ensure that you can contact the server that authenticated you.”

Don’t do it. MaxTokenSize is probably in your future if you do, and it has limits that you cannot design your way out of. IT uniqueness is bad.

Question

We have XP systems using two partitions (C: and D:) migrating to Windows 7 with USMT. The OS is on C and the user profiles are on D. We'll use that D partition to hold the USMT store. After migration, we'll remove the second partition and expand the first partition to use the space freed up by the second.

When restoring via loadstate, will the user profiles end up on C or on D? If the profiles end up on D, we will not be able to delete the second partition obviously, and we want to stop doing that regardless.

Answer

You don’t have to do anything; it just works. Because the new profile destination is on C, USMT just slots everything in there automagically :). The profiles will be on C and nothing will be on D except the store itself and any non-profile folders*:

clip_image001
XP, before migrating

clip_image001[5]
Win7, after migrating

If users have any non-profile folders on D, that will require a custom rerouting xml to ensure they are moved to C during loadstate and not obliterated when D is deleted later. Or just add a MOVE line to whatever DISKPART script you are using to expand the partition.

Question

Should we stop the DFSR service before performing a backup or restore?

Answer

Manually stopping the DFSR service is not recommended. When backing up using the DFSR VSS Writer – which is the only supported way – replication is stopped automatically, so there’s no reason to stop the service or need to manually change replication:

Event ID=1102
Severity=Informational
The DFS Replication service has temporarily stopped replication because another
application is performing a backup or restore operation. Replication will resume
after the backup or restore operation has finished.

Event ID=1104
Severity=Informational
The DFS Replication service successfully restarted replication after a backup
or restore operation.

Another bit of implied evidence – Windows Server Backup does not stop the service.

Stopping the DFSR service for extended periods leaves you open to the risk of a USN journal wrap. And what if someone/something thinks that the service being stopped is “bad” and starts it up in the middle of the backup? Probably nothing bad happens, but certainly nothing good. Why risk it?

Question

In an environment where AGPM controls all GPOs, what is the best practice when application setup routines make edits "under the hood" to GPOs, such as the Default Domain Controllers GPO? For example, Exchange setup makes changes to User Rights Assignment (SeSecurityPrivilege). Obviously, if this setup process makes such edits on the live GPO in SYSVOL, the changes will happen - only to have those critical edits lost and overwritten the next time an admin redeploys with AGPM.

Answer

[via Fabian “Wunderbar” Müller  – Ned]

From my point of view:

1. The Default Domain and Default Domain Controller Policies should be edited very rarely. Manual changes as well as automated changes (e.g. by the mentioned Exchange setup) should be well known and therefore the workaround in 2) should be feasible.

2. After those planned changes are performed, you have to use “import from production” to bring the production GPO into the AGPM archive, so that the production change is reflected in AGPM. Alternatively, you could periodically “import from production” the default policies, or implement a manual/human process that requires an “import from production” before any change to these policies is made using AGPM.

Not a perfect answer, but manageable.

Question

In testing the rerouting of folders, I took this example from TechNet and placed it in a separate custom.xml. When using this custom.xml along with the other defaults (migdocs.xml and migapp.xml unchanged), the EngineeringDrafts folder is copied to %CSIDL_DESKTOP%\EngineeringDrafts, but there's also a copy at C:\EngineeringDrafts on the destination computer.

I assume this is not expected behavior.  Is there something I’m missing?

Answer

Expected behavior, pretty well hidden though:

http://technet.microsoft.com/en-us/library/dd560751(v=WS.10).aspx

If you have an <include> rule in one component and a <locationModify> rule in another component for the same file, the file will be migrated in both places. That is, it will be included based on the <include> rule, and it will be migrated based on the <locationModify> rule.

That original rerouting article could state this more plainly, I think. Hardly anyone does this relative-move operation; it's very expensive for disk space – one of those “you can, but you shouldn't” capabilities of USMT. The first example also has an invalid character in it (the apostrophe in “user's” on line 12, position 91 – argh!).

Don’t just comment out those areas in migdocs though; you are then turning off most of the data migration. Instead, create a copy of the migdocs.xml and modify it to include your rerouting exceptions, then use that as your custom XML and stop including the factory migdocs.xml.

There’s an example attached to this blog post down at the bottom. Note the exclude in the System context and the include/modify in the user context:

image

image

Don’t just modify the existing migdocs.xml and keep using it un-renamed either; that becomes a versioning nightmare down the road.
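For reference, the shape of the fix in the attached example is: exclude the source path in the System context, then include and reroute it in the User context (a sketch paraphrased from the attachment; adjust paths to taste):

<!-- System context: keep the default rules from migrating the folder in place -->
<unconditionalExclude>
  <objectSet>
    <pattern type="File">C:\EngineeringDrafts\* [*]</pattern>
  </objectSet>
</unconditionalExclude>

<!-- User context: include the folder and reroute it to the desktop -->
<include>
  <objectSet>
    <pattern type="File">C:\EngineeringDrafts\* [*]</pattern>
  </objectSet>
</include>
<locationModify script="MigXmlHelper.RelativeMove('C:\EngineeringDrafts','%CSIDL_DESKTOP%\EngineeringDrafts')">
  <objectSet>
    <pattern type="File">C:\EngineeringDrafts\* [*]</pattern>
  </objectSet>
</locationModify>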

Question

I'm reading up on CAPolicy.inf files, and it looks like there is an error in the documentation that keeps being copied around. TechNet lists RenewalValidityPeriod=Years and RenewalValidityPeriodUnits=20 under the "Windows Server 2003" sample. This is the opposite of the Windows 2000 sample, and intuitively the "PeriodUnits" should be something like "Years" or "Weeks", while the "Period" would be an integer value. I see this on AskDS here and here also.

Answer

[via Jonathan “scissor fingers” Stephens  – Ned]

You're right that the two settings seem like they should be reversed, but unfortunately this is not correct. All of the *Period values can be set to Minutes, Hours, Days, Weeks, Months or Years, while all of the *PeriodUnits values should be set to some integer.

Originally, the two types of values were intended to be exactly what one intuitively believes they should be -- *PeriodUnits was to be Day, Weeks, Months, etc. while *Period was to be the integer value. Unfortunately, the two were mixed up early in the development cycle for Windows 2000 and, once the error was discovered, it was really too late to fix what is ultimately a cosmetic problem. We just decided to document the correct values for each setting. So in actuality, it is the Windows 2000 documentation that is incorrect as it was written using the original specs and did not take the switch into account. I’ll get that fixed.

Question

Is there a way to control the number, verbosity, or contents of the DFSR cluster debug logs (DfsrClus_nnnnn.log and DfsrClus_nnnnn.log.gz in %windir%\debug)?

Answer

Nope, sorry. It's all statically defined:

  • Severity = 5
  • Max log messages per log = 10000
  • Max number of log files = 999

Question

In your previous article you say that any registry modifications should be completed with a resource restart (take the resource offline and bring it back online) instead of a direct service restart. However, the official whitepaper (on page 16) says that the CA service should be restarted by using "net stop certsvc && net start certsvc".

Also, I want to clarify a point about clustered CA database backup/restore. Say a DB was damaged or destroyed, and I have a full backup of the CA DB. Before restoring, do I stop only the AD CS service resource (cluadmin.msc), or do I stop the CA service directly (net stop certsvc)?

Answer

[via Rob “there's a Squatch in These Woods” Greene  – Ned]

The CertSvc service has no idea that it belongs to a cluster. That's why you set up the CA as a generic service within Cluster Administrator and configure the CA registry hive within Cluster Administrator.

When you update the registry keys on the active CA cluster node, the Cluster service monitors the registry key changes. When the resource is taken offline, the Cluster service makes a new copy of the registry keys so that the other node gets the update. When you stop and start the CA service directly, the Cluster service has no idea why the service stopped and started, since it is being done outside of the cluster, and those registry key settings are never updated on the standby node. General guidance for clusters is to manage resource state (stop/start) within Cluster Administrator and not through Services.msc, NET STOP, SC, etc.

As far as the CA database restore goes: just log on to the active CA node and run certutil or the CA MMC to perform the operation. There's no need to touch the service manually.

Other stuff

The Microsoft Premier Field Organization has started a new blog that you should definitely be reading.

Welcome to your nightmare (Thanks Mark!)

Totally immature and therefore funny. Doubles as a gender test.

Speaking of George Lucas re-imaginings, check out this awesome shot-by-shot comparison of Raiders and 30 other previous adventure films:


Indy whipped first!

I am completely addicted to Panzer Corps; if you ever played Panzer General in the 90’s, you will be too.

Apropos throwback video gaming and even more re-imagining, here is Battlestar Galactica as a 1990’s RPG:

   
The mail sack becomes meta of meta of meta

Like Legos? Love Simon Pegg? This is for you.

Best sci-fi books of 2011, according to IO9.

What’s your New Year’s resolution? Mine is to stop swearing so much.

 

Until next time,

- Ned “$#%^&@!%^#$%^” Pyle

Friday Mail Sack: It’s a Dog’s Life Edition


Hi folks, Ned here again with some possibly interesting, occasionally entertaining, and always unsolicited Friday mail sack. This week we talk some:

Fetch!

Question

We use third party DNS but used to have Windows DNS on domain controllers; that service has been uninstalled and all that remains are the partitions. According to KB835397, deleting the ForestDNSZones and DomainDNSZones partitions is not supported. Soon we will have removed the last few old domain controllers hosting some of those partitions and replaced them with Windows Server 2008 R2 that never had Windows DNS. Are we getting ourselves in trouble or making this environment unsupported?

Answer

You are supported. Don’t interpret the KB too narrowly; there’s a difference between deletion of partitions used by DNS and never creating them in the first place. If you are not using MS DNS and the zones don’t exist, there’s nothing in Windows that should care about them, and we are not aware of any problems.

This is more of a “cover our butts” article… we just don’t want you deleting partitions that you are actually using and naturally, we don’t rigorously test with non-MS DNS. That’s your job. ;-)

Question

When I run DCDIAG it returns all warning events for the system event log. I have a bunch of “expected” warnings, so this just clogs up my results. Can I change this behavior?

Answer

DCDIAG has no idea what the messages mean and has no way to control the output. You will need to suppress the events themselves in their own native fashion, if their application supports it. For example, if it’s a chatty combination domain controller/print server in a branch office that shows endless expected printer Warning messages, you’d use the steps here.

If your application cannot be controlled, there’s one (rather gross) alternative to make things cleaner though, and that’s to use the FIND command in a few pipelines to remove expected events. For example, here I always see this write cache warning when I boot this DC, and I don’t really care about it:

image

Since I don’t care about these entries, I can use pipelined FIND (with /v to drop those lines) and narrow down the returned data. I probably don’t care about the time generated since DCDIAG only shows the last 60 minutes, nor the event string lines either. So with that, I can use this single wrapped line in a batch file:

dcdiag /test:systemlog | find /I /v "eventid: 0x80040022" | find /I /v "the driver disabled the write cache on device" | find /i /v "event string:" | find /i /v "time generated:"

clip_image002
Whoops, I need to fix that user’s group memberships!

Voila. I still get most of the useful data and nothing about that write cache issue. Just substitute your own stuff.

See, I don’t always make you use Windows PowerShell for your pipelines. ツ

Question

If I walk into a new Windows Server 2008 AD environment cold and need to know if they are using DFSR or FRS for SYSVOL replication, what is the quickest way to tell?

Answer

Just run this DFSRMIG command:

dfsrmig.exe /getglobalstate

That tells you the current state of the SYSVOL DFSR topology and migration.

If it says:

  • “Eliminated”

… they are using DFSR for SYSVOL. It will show this message even if the domain was built from scratch with a Windows Server 2008 domain functional level or higher and never performed a migration; the tool doesn’t know how to say “they always used DFSR from day one”.

If it says:

  • “Prepared”
  • “Redirected”

… they are mid-migration and using both FRS and DFSR, favoring one or the other for SYSVOL.

If it says:

  • “Start”
  • “DFSR migration has not yet initialized”
  • “Current domain functional level is not Windows Server 2008 or above”

… they are using FRS for SYSVOL.

Question

When using the DFSR WMI namespace “root\microsoftdfs” and class “dfsrvolumeconfig”, I am seeing weird results for the volume path. On one server it’s the C: drive, but on another it just shows a wacky volume GUID. Why?

Answer

DFSR is replicating data under a mount point. You can see this with any WMI tool (surprise! here’s PowerShell) and then use mountvol.exe to confirm your theory. To wit:

image

image
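The check is a one-liner plus mountvol (a sketch; the property names come from the DfsrVolumeConfig class):

Get-WmiObject -Namespace root\microsoftdfs -Class DfsrVolumeConfig | Format-List VolumeGuid, VolumePath
mountvol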

Question

I notice that the "dsquery user -inactive x" command returns a list of user accounts that have been inactive for x number of weeks, but not days.  I suspect that this lack of precision is related to this older AskDS post where it is mentioned that the LastLogonTimeStamp attribute is not terribly accurate. I was wondering what your thoughts on this were, and if my only real recourse for precise auditing of inactive user accounts was by parsing the Security logs of my DCs for user logon events.

Answer

Your supposition about DSQUERY is right. What's worse, that tool's inactive search does not even include users who have never logged on. So it's totally misleading. If you use the AD Administrative Center query for inactive accounts, it uses this LDAP syntax, so it at least catches everyone (note that your lastLogonTimestamp UTC value would be different):

(&(objectCategory=person)(objectClass=user)(!userAccountControl:1.2.840.113556.1.4.803:=2)(|(lastLogonTimestamp<=129528216000000000)(!lastLogonTimestamp=*)))

You can lower the msDS-LogonTimeSyncInterval down to 1 day, which removes the randomization and gets you very close to that magic "exactness" (within 24 hours). But this will increase your replication load, perhaps significantly if this is a large environment with a lot of logon activity. Warren's blog post you mentioned describes how to do this. I've seen some pretty clever PowerShell techniques for this: here's one (untested, non-MS) example that could easily be adapted into native Windows AD PowerShell or just used as-is. Dmitry is a smart fella. If you find scripts, make sure the author clearly understood Warren's rules.

There is also the option - if you just care about users' interactive or runas logons and you have all Windows Vista or Windows 7 clients - to implement msDS-LastSuccessfulInteractiveLogonTime. The ups and downs of this are discussed here. That is replicated normally and could be used as an LDAP query option.

Windows AD PowerShell has a nice built-in constructed property called “LastLogonDate”: the friendly date-time info, converted from the gnarly UTC value. That might help you in your scripting efforts.
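A sketch using that constructed property (90 days is illustrative; remember Warren's caveats about the built-in slack):

Import-Module ActiveDirectory
# LastLogonDate is $null for never-logged-on users, and $null compares as less than any date,
# so this also catches accounts that have never logged on
Get-ADUser -Filter * -Properties LastLogonDate | Where-Object { $_.LastLogonDate -lt (Get-Date).AddDays(-90) } | Select-Object Name, LastLogonDate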

After all that, you are back to Warren's recommended use of security logs and audit collection services. Which is a good idea anyway. You don't get to be meticulous about just one aspect of security!

Question

I was reading your older blog post about setting legal notice text and had a few questions:

  1. Has Windows 7 changed to make this any easier or better?
  2. Any way to change the font or its size?
  3. Any way to embed URLs in the text so the user can see what they are agreeing to in more detail?

Answer

[Courtesy of that post’s author, Mike “DiNozzo” Stephens]

  1. No
  2. No
  3. No

:)

#3 is especially impossible. Just imagine what people would do to us if we allowed you to run Internet Explorer before you logged on!

image

 [The next few answers courtesy of Jonathan “Davros” Stephens. Note how he only ever replies with bad news… – Neditor]

Question

I have encountered the following issue with some of my users performing smart card logon from Windows XP SP3.

It seems that my users are able to logon using smart card logon even if the certificate on the user’s smart card was revoked.
Here are the tests we've performed:

  1. Verified that the CRL is accessible
  2. Smartcard logon with the working certificate
  3. Revoked the certificate + waited for the next CRL publish
  4. Verified that the new CRL is accessible and that the revoked certificate was present in the list
  5. Tested smartcard logon with the revoked certificate

We verified the presence of the following registry keys both on the client machine and on the authenticating DC:

HKEY_Local_Machine\System\CurrentControlSet\Services\KDC\CRLValidityExtensionPeriod
HKEY_Local_Machine\System\CurrentControlSet\Services\KDC\CRLTimeoutPeriod
HKEY_Local_Machine\System\CurrentControlSet\Control\LSA\Kerberos\Parameters\CRLTimeoutPeriod
HKEY_Local_Machine\System\CurrentControlSet\Control\LSA\Kerberos\Parameters\UseCachedCRLOnlyAndIgnoreRevocationUnknownErrors

None of them were found.

Answer

First, there is an overlap built into CRL publishing. The old CRL remains valid for a time after the new CRL is published to allow clients/servers a window to download the new CRL before the old one becomes invalid. If the old CRL is still valid then it is probably being used by the DC to verify the smart card certificate.

Second, revocation of a smart card certificate is not intended to be usable as real-time access control -- not even with OCSP involved. If you want to prevent the user from logging on with the smart card then the account should be disabled. That said, one possible hacky alternative that would take immediate effect would be to change the UPN of the user so it does not match the UPN on the smart card. With mismatched UPNs, implicit mapping of the smart card certificate to the user account would fail; the DC would have no way to determine which account it should authenticate even assuming the smart card certificate verified successfully.

If you have Windows Server 2008 R2 DCs, you can disable the implicit mapping of smart card logon certificates to user accounts via the UPN in favor of explicit certificate mapping. That way, if a user loses his smart card and you want to make sure that that certificate cannot be used for authentication as soon as possible, remove it from the altSecurityIdentities attribute on the user object in AD. Of course, the tradeoff here is the additional management of updating user accounts before their smart cards can be used for logon.

Question

When using the SID cloning tools like sidhist.vbs in a Windows Server 2008 R2 domain, they always fail with error “Destination auditing must be enabled”. I verified that Account Management auditing is on as required, but then I also found that the newer Advanced Audit policy version of that setting is also on. It seems like the DSAddSIDHistory() API does not consider this new auditing sufficient? In my test environment everything works fine, but it does not use Advanced Auditing. I also found that if I set all Account Management advanced audit subcategories to enabled, it works.

Answer

It turns out that this is a known issue (it affects ADMT too). At this time, DsAddSidHistory() only works if it thinks legacy Account Management is enabled. You will either need to:

  • Remove the Advanced Auditing policy and force the destination computers to use legacy auditing by setting Audit: Force audit policy subcategory settings (Windows Vista or later) to override audit policy category settings to Disabled.
  • Set all Account Management advanced audit subcategories to enabled, as you found, which satisfies the SID cloning function.

We are making sure TechNet is updated to reflect this as well.  It’s not like Advanced Auditing is going to get less popular over time.

Question

Enterprise and Datacenter editions of Windows Server support enforcing Role Separation based on the common criteria (CC) definitions.  But there doesn't seem to be any way to define the roles that you want to enforce.

CC Security Levels 1 and 2 only define two roles that need to be restricted (CA Administrator and Certificate Manager).  Auditing and Backup functions are handled by the CA administrator instead of dedicated roles.

Is there a way to enforce separation of these two roles without including the Auditor and Backup Operator roles defined in the higher CC Security Levels?

Answer

Unfortunately, there is no way to make exceptions to role separation. Basically, you have two options:

  1. Enable Role Separation and use different user accounts for each role.
  2. Do not enable Role Separation, turn on CA Auditing to monitor actions taken on the CA.

[Now back to Ned for the idiotic finish!]

Other Stuff

My latest favorite site is cubiclebot.com. Mainly because they lead me to things like this:


Boing boing boing

And this:


Wait for the pit!

Speaking of cool dogs and songs: Bark bark bark bark, bark bark bark-bark.

Game of Thrones season 2 is April 1st. Expect everyone to die, no matter how important or likeable their character. Thanks George!

At last, Ninja-related sticky notes.

For all the geek parents out there. My favorite is:

adorbz-ewok
For once, an Ewok does not enrage me

It was inevitable.

 

Finally: I am headed back to Chicagoland next weekend to see my family. If you are in northern Illinois and planning on eating at Slott’s Hots in Libertyville, Louie’s in Waukegan, or Leona’s in Chicago, gimme a wave. Yes, all I care about is the food. My wife only cares about the shopping, that’s why we’re on Michigan avenue and why she cannot complain. You don’t know what it’s like living in Charlotte!! D-:

Have a nice weekend folks,

Ned “my dogs are not quite as athletic” Pyle
