
Friday Mail Sack: Carl Sandburg Edition


Hi folks, Jonathan again. Ned is taking some time off visiting his old stomping grounds – the land of Mothers-in-Law and heart-breaking baseball. Or, as Sandburg put it:

“Hog Butcher for the World,
Tool Maker, Stacker of Wheat,
Player with Railroads and the Nation's Freight Handler;
Stormy, husky, brawling,
City of the Big Shoulders”

Cool, huh?

Anyway, today we talk about:

And awayyy we go!

Question

When thousands of clients are rebooted for Windows Update or other scheduled tasks, my domain controllers log many KDC 7 System event errors:

Log Name: System
Source: Microsoft-Windows-Kerberos-Key-Distribution-Center
Event ID: 7
Level: Error
Description:

The Security Account Manager failed a KDC request in an unexpected way. The error is in the data field.

Error 170000C0

I’m trying to figure out if this is a performance issue, if the mass reboots are related, if my DCs are over-utilized, or something else.

Answer

That extended error is the data field read as a little-endian DWORD - the bytes 17 00 00 C0 decode to:

C0000017 = STATUS_NO_MEMORY - {Not Enough Quota} - Not enough virtual memory or paging file quota is available to complete the specified operation.

The DCs are being pressured with so many requests that they are running out of Kernel memory. We see this very occasionally with applications that make heavy use of the older SAMR protocol for lookups (instead of say, LDAP). In some cases we could change the client application's behavior. In others, the customer just had to add more capacity. The mass reboots alone are not the problem here - it's the software that runs at boot up on each client that is then creating what amounts to a denial of service attack against the domain controllers.

Examine one of the client computers mentioned in the event for all non-Windows-provided services, scheduled tasks that run at startup, SCCM/SMS at-boot jobs, computer startup scripts, or anything else that runs when the computer is restarted. Then get promiscuous network captures of that computer starting (any time, not en masse) while also running Process Monitor in boot mode, and you'll probably see some very likely candidates. You can also use SPA or AD Data Collector sets (http://blogs.technet.com/b/askds/archive/2010/06/08/son-of-spa-ad-data-collector-sets-in-win2008-and-beyond.aspx) in combination with network captures to see exactly which protocol is being used to overwhelm the DC, if you want to troubleshoot the issue as it happens - probably at 3AM, which sounds sucky.

Ultimately, the application causing the issue must be stopped, reconfigured, or removed - the only alternative is to add more DCs as a capacity Band-Aid or stagger your mass reboots.
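If you want to see how widespread and how synchronized the errors are first, here's a quick sketch that pulls the KDC event 7s from every DC (it assumes the AD PowerShell module and remote event log access; adjust to taste):

Import-Module ActiveDirectory
foreach ($dc in (Get-ADDomainController -Filter *)) {
    # Grab the most recent KDC event 7 entries from each DC's System log
    Get-WinEvent -ComputerName $dc.HostName -FilterHashtable @{
        LogName      = 'System'
        ProviderName = 'Microsoft-Windows-Kerberos-Key-Distribution-Center'
        Id           = 7
    } -MaxEvents 100 -ErrorAction SilentlyContinue |
        Select-Object MachineName, TimeCreated
}

Event bursts that line up with your reboot window confirm the mass-reboot correlation.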

Question

Is it possible to have 2003 and 2008 servers co-exist in the same DFS namespace? I don’t see it documented either “for” or “against” on the blog anywhere.

Answer

It's totally OK to mix OSes in the same DFSN namespace, as long as you don't use Windows Server 2008 ("V2 mode") namespaces, which won't allow any Win2003 servers. If you are using DFSR to replicate the data, make sure all servers have the latest DFSR hotfixes (here and here), as there are incompatibilities in DFSR that these hotfixes resolve.

Question

Should I create DFS namespace folders (used by the DFS service itself) under NTFS mount points? Is there any advantage to this?

Answer

DFSN management tools do not allow you to create DFSN roots and links under mount points ordinarily, and once you do through alternate hax0r means, they are hard to remove (you have to use FSUTIL). Ergo, do not do it – the management tools blocking you means that it is not supported.
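For the record, cleaning up one of those unsupported roots looks roughly like this (the path is hypothetical - point it at whatever orphaned link you created); once the reparse data is gone, the leftover is an ordinary empty folder:

fsutil reparsepoint query c:\dfsroots\orphanedlink
fsutil reparsepoint delete c:\dfsroots\orphanedlink
rd c:\dfsroots\orphanedlink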

There is no real value in placing the DFSN special folders under mount points - the DFSN special folders consume no space, do not contain files, and exist only to provide reparse point tags to the DFSN service and its file IO driver goo. By default, they are configured on the root of the C: drive in a folder called c:\dfsroots. That ensures that they are available when the OS boots. If clustering, you'd create them on one of your drive-lettered shared disks.

Question

How do you back up the Themes folder using USMT4 in Windows 7?

Answer

The built-in USMT migration code copies the settings but not the files, as it knows the files will exist somewhere on the user’s source profile and that those are being copied by the migdocs.xml/miguser.xml. It also knows that the Themes system will take care of the rest after migration; the Themes system creates the transcoded image files using the theme settings and copies the image files itself.

Note here how after scanstate, my USMT store’s Themes folder is empty:

[screenshot: the USMT store's Themes folder, empty after scanstate]

After I loadstate that user, the Themes system fixed it all up in that user’s real profile when the user logged on:

[screenshot: the user's Themes folder, repopulated after loadstate]

However, if you still specifically need to copy the Themes folder intact for some reason, here’s a sample custom XML file:

<?xmlversion="1.0"encoding="UTF-8"?>

<migrationurlid="http://www.microsoft.com/migration/1.0/migxmlext/migratethemefolder">

<componenttype="Documents"context="User">

<!-- sample theme folder migrator -->

<displayName>ThemeFolderMigSample</displayName>

 <rolerole="Data">

  <rules>

   <includefilter='MigXmlHelper.IgnoreIrrelevantLinks()'>

   <objectSet>

    <patterntype="File">%CSIDL_APPDATA%\Microsoft\Windows\Themes\* [*]</pattern>

   </objectSet>

  </include>

 </rules>

 </role>
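You'd then include that file on both ends of the migration alongside the standard manifests - for instance (the store path and the ThemeFolder.xml name are made up for this example):

scanstate.exe \\server\usmtstore /i:migdocs.xml /i:migapp.xml /i:ThemeFolder.xml /v:5
loadstate.exe \\server\usmtstore /i:migdocs.xml /i:migapp.xml /i:ThemeFolder.xml /v:5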

And here it is in action:

[screenshot: the custom Themes migration XML in action]

Question

I've recently been working on extending my AD schema with a new back-linked attribute pair, and I used the instructions on this blog to auto-generate the linkIDs for my new attributes. Confusingly, the resulting linkIDs are negative values (-912314983 and -912314984). The attributes and backlinks seem to work as expected, but when looking at the MSDN definition of the linkID attribute, it specifically states that the linkID should be a positive value. Do you know why I'm getting a negative value, and if I should be concerned?

Answer

The only hard and fast rule is that the forward link (flink) be an even number and the backward link (blink) be the flink's ID plus one. In your case, if the flink is -912314984, then the blink had better be -912314983, which I assume is the case since things are working. But we were curious when you posted the linkID documentation from MSDN, so we dug a little deeper.

The fact that your linkIDs are negative numbers is correct and expected, and is the result of a feature called AutoLinkID. Automatically generated linkIDs are in the range of 0xC0000000-0xFFFFFFFC (-1,073,741,824 to -4). This means that it is a good idea to use positive numbers if you are going to set the linkID manually. That way you are guaranteed not to conflict with automatically generated linkIDs.

The bottom line is, you're all good.

Question

I am trying to delegate permissions to the DBA team to create, modify, and delete SPNs since they're the team that swaps out the local accounts SQL is installed under to the domain service accounts we create to run SQL.

Documentation on the Internet has led me down the rabbit hole to no end.  Can you tell me how this is done in a W2K8 R2 domain and a W2K3 domain?

Answer

So you will want to delegate a specific group of users -- your DBA team -- permissions to modify the SPN attribute of a specific set of objects -- computer accounts for servers running SQL server and user accounts used as service accounts under which SQL Server can run.

The easiest way to accomplish this is to put all such accounts in one OU, e.g. OU=SQL Server Accounts, and run the following commands:

Dsacls "OU=SQL Server Accounts,DC=corp,DC=contoso,DC=com" /I:S /G "CORP\DBA Team":WPRP;servicePrincipalName;user
Dsacls "OU=SQL Server Accounts,DC=corp,DC=contoso,DC=com" /I:S /G "CORP\DBA Team":WPRP;servicePrincipalName;computer

These two commands will grant the DBA Team group permission to read and write the servicePrincipalName attribute on user and computer objects in the SQL Server Accounts OU.

Your admins should then be able to use setspn.exe to modify that property on the designated accounts.
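Once delegated, registering and verifying an SPN looks something like this (server and account names are hypothetical; setspn -S, which checks for duplicates first, exists on Win2008 and later - on Win2003, use setspn -A from the Support Tools):

setspn -S MSSQLSvc/sql01.corp.contoso.com:1433 CORP\svc-sql
setspn -L CORP\svc-sql
setspn -D MSSQLSvc/sql01.corp.contoso.com:1433 CORP\svc-sql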

But…what if you have a large number of accounts spread across multiple OUs? The above solution only works well if all of your accounts are concentrated in a few (preferably one) OUs. In this case, you basically have two options:

  1. You can run the two commands specifying the root of the domain as the object, but you would be delegating permissions for EVERY user and computer in the domain. Do you want your DBA team to be able to modify accounts for which they have no legitimate purpose?
  2. Compile a list of specific accounts the DBA team can manage and modify each of them individually. That can be done with a single command line. Create a text file that contains the DNs of each account for which you want to delegate permissions and then use the following command:

    for /f "tokens=*" %i in (object-list.txt) do dsacls "%i" /G "CORP\DBA Team":WPRP;servicePrincipalName

None of these are really great options, however, because you’re essentially giving a group of non-AD Administrators the ability to screw up authentication to what are perhaps critical business resources. You might actually be better off creating an expedited process whereby these DBAs can submit a request to a real Administrator who already has permissions to make the required changes, as well as the experience to verify such a change won’t cause any problems.

Author’s Note: This gentleman pointed out in a reply that these DBAs wouldn’t want him messing with tables, rows and the SA account, so he doesn’t want them touching AD. I thought that was sort of amusing.

Question

What is PowerShell checking when you run get-adcomputer -properties * -filter * | format-table Name,Enabled?  Is Enabled an attribute, a flag, a bit, a setting?  What, if anything, would that setting show up as in something like ADSIEdit.msc?

I get that stuff like samAccountName, sn, telephonenumber, etc.  are attributes but what the heck is enabled?

Answer

All objects in PowerShell are PSObjects, which essentially wrap the underlying .NET or COM objects and expose some or all of the methods and properties of the wrapped object. In this case, Enabled is a property ultimately inherited from the System.DirectoryServices.AccountManagement.AuthenticablePrincipal .NET class. This answer isn't very helpful, however, as it just moves your search for answers from PowerShell to the .NET Framework, right? Ultimately, you want to know how a computer's or user's account state (enabled or disabled) is stored in Active Directory.

Whether or not an account is disabled is reflected in the appropriate bit being set on the object's userAccountControl attribute. Check out the following KB: How to use the UserAccountControl flags to manipulate user account properties. You'll find that the second least significant bit of the userAccountControl bitmask (0x2) is called ACCOUNTDISABLE and reflects the appropriate state; 1 is disabled and 0 is enabled.

If you find that you need to use an actual LDAP query to search for disabled accounts, then you can use a bitwise filter. The appropriate LDAP filter would be:

(UserAccountControl:1.2.840.113556.1.4.803:=2)
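For example, with the AD PowerShell module either of these works; the first uses the raw bitwise filter, the second lets the module do the translation for you (a quick sketch):

Get-ADUser -LDAPFilter '(userAccountControl:1.2.840.113556.1.4.803:=2)' | Select-Object Name, SamAccountName
Get-ADUser -Filter { Enabled -eq $false } | Select-Object Name, SamAccountName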

Other stuff

I watched this and, despite the lack of lots of moving arms and tools, had sort of a Count Zero moment:

And just for Ned (because he REALLY loves this stuff!): Kittens!

No need to rush back, dude.

Jonathan “Payback is a %#*@&!” Stephens


Purging Old NT Security Protocols


Hi folks, Ned here again (with some friends). Everyone knows that Kerberos is Microsoft’s preeminent security protocol and that NTLM is both inefficient and, in some iterations, not strong enough to avoid concerted attack. NTLM V2 using complex passwords stands up well to common hash cracking tools like Cain and Abel, Ophcrack, or John the Ripper. On the other hand, NTLM V1 is defeated far faster and LM is effectively no protection at all.

I discussed NTLM auditing years ago, when Windows 7 and Windows Server 2008 R2 introduced the concept of NTLM blocking. That article was for well-controlled environments where you thought that there was some chance of disabling NTLM – only modern clients and servers, the latest applications, and Active Directory. In a few other articles, I gave some further details on the limitations of the Windows auditing system logging. It turns out that while we’re OK at telling when NTLM was used, we’re not great at describing which flavor. For instance, Windows Server 2008+ security auditing can tell you about the NTLM version through the 4624 event that states a Package Name (NTLM only): NTLM V1 or Package Name (NTLM only): NTLM V2, but all prior operating systems cannot. None of the older auditing can tell you if LM is used either. Windows Server 2008 R2 NTLM auditing only shows you NTLM usage in general.

Today the troika of Dave, Jonathan, and Ned are here to help you discover which computers and applications are using NTLM V1 and LM security, regardless of your operating system. It’s safe to say that some people aren’t going to like our answers or how much work this entails, but that’s life; when LM security was created as part of LAN Manager and OS/2 by Microsoft and IBM, Dave and I were in grade school and Jonathan was only 48.

If you need to keep using NTLM V2 and simply want to hunt down the less secure precursors, this should help.

Finding NTLM V1 and LM Usage via network captures

The only universal, OS-agnostic way to tell which clients are sending NTLMv1 and LM challenge responses is to examine a network trace taken from the destination computers. Using Netmon 3.4 or another network capture tool, look for packets with a negotiated NTLM security mechanism.

This first example was taken with LMCompatibilityLevel set to 0 on the client. It shows an SMB session request packet specifying NTLM authentication.

Here is the SMB SESSION SETUP request, which specifies the security token mechanism:

  Frame: Number = 15, Captured Frame Length = 220, MediaType = ETHERNET

+ Ethernet: Etype = Internet IP (IPv4),DestinationAddress:[00-15-5D-05-B4-44],SourceAddress:[00-15-5D-05-B4-49]

+ Ipv4: Src = 10.10.10.20, Dest = 10.10.10.27, Next Protocol = TCP, Packet ID = 747, Total IP Length = 206

+ Tcp: Flags=...AP..., SrcPort=49235, DstPort=Microsoft-DS(445), PayloadLen=166, Seq=2204022974 - 2204023140, Ack=820542383, Win=32724 (scale factor 0x2) = 130896

+ SMBOverTCP: Length = 162

- SMB2: C   SESSION SETUP (0x1)

    SMBIdentifier: SMB

  + SMB2Header: C SESSION SETUP (0x1),TID=0x0000, MID=0x0002, PID=0xFEFF, SID=0x0000

  - CSessionSetup:

     StructureSize: 25 (0x19)

     VcNumber: 0 (0x0)

   + SecurityMode: 1 (0x1)

   + Capabilities: 0x1

     Channel: 0 (0x0)

     SecurityBufferOffset: 88 (0x58)

     SecurityBufferLength: 74 (0x4A)

     PreviousSessionId: 0 (0x0)

   - securityBlob:

    - GSSAPI:

     - InitialContextToken:

      + ApplicationHeader:

      + ThisMech: SpnegoToken (1.3.6.1.5.5.2)

      - InnerContextToken: 0x1

       - SpnegoToken: 0x1

        + ChoiceTag:

        - NegTokenInit:

         + SequenceHeader:

         + Tag0:

         + MechTypes: Prefer NLMP (1.3.6.1.4.1.311.2.2.10)

         + Tag2:

         + OctetStringHeader:

         - MechToken: NTLM NEGOTIATE MESSAGE

          - NLMP: NTLM NEGOTIATE MESSAGE

             Signature: NTLMSSP

             MessageType: Negotiate Message (0x00000001)

           + NegotiateFlags: 0xE2088297 (NTLM v2, 128-bit encryption, Always Sign)

           + DomainNameFields: Length: 0, Offset: 0

           + WorkstationFields: Length: 0, Offset: 0

           + Version: Windows 6.1 Build 7601 NLMPv15

Next, the server sends its NTLM challenge back to the client:

  Frame: Number = 16, Captured Frame Length = 447, MediaType = ETHERNET

+ Ethernet: Etype = Internet IP (IPv4),DestinationAddress:[00-15-5D-05-B4-49],SourceAddress:[00-15-5D-05-B4-44]

+ Ipv4: Src = 10.10.10.27, Dest = 10.10.10.20, Next Protocol = TCP, Packet ID = 24310, Total IP Length = 433

+ Tcp: Flags=...AP..., SrcPort=Microsoft-DS(445), DstPort=49235, PayloadLen=393, Seq=820542383 - 820542776, Ack=2204023140, Win=512 (scale factor 0x8) = 131072

+ SMBOverTCP: Length = 389

- SMB2: R  - NT Status: System - Error, Code = (22) STATUS_MORE_PROCESSING_REQUIRED  SESSION SETUP (0x1), SessionFlags=0x0

    SMBIdentifier: SMB

  + SMB2Header: R SESSION SETUP (0x1),TID=0x0000, MID=0x0002, PID=0xFEFF, SID=0x0019

  - RSessionSetup:

     StructureSize: 9 (0x9)

   + SessionFlags: 0x0

     SecurityBufferOffset: 72 (0x48)

     SecurityBufferLength: 317 (0x13D)

   - securityBlob:

    - GSSAPI:

     - NegotiationToken:

      + ChoiceTag:

      - NegTokenResp:

       + SequenceHeader:

       + Tag0:

       + NegState: accept-incomplete (1)

       + Tag1:

       + SupportedMech: NLMP (1.3.6.1.4.1.311.2.2.10)

       + Tag2:

       + OctetStringHeader:

       - ResponseToken: NTLM CHALLENGE MESSAGE

        - NLMP: NTLM CHALLENGE MESSAGE

          Signature: NTLMSSP

           MessageType: Challenge Message (0x00000002)

        + TargetNameFields: Length: 12, Offset: 56

         + NegotiateFlags: 0xE2898215 (NTLM v2, 128-bit encryption, Always Sign)

         + ServerChallenge: 67F9C5F851F2CD73

           Reserved: Binary Large Object (8 Bytes)

         + TargetInfoFields: Length: 214, Offset: 68

         + Version: Windows 6.1 Build 7601 NLMPv15

           TargetNameString: CORP01

         + AvPairs: 7 pairs

The client calculates the response to the challenge, using the various available hashes of the password. Note how this response includes both LM and NTLMv1 challenge responses.

  Frame: Number = 17, Captured Frame Length = 401, MediaType = ETHERNET

+ Ethernet: Etype = Internet IP (IPv4),DestinationAddress:[00-15-5D-05-B4-44],SourceAddress:[00-15-5D-05-B4-49]

+ Ipv4: Src = 10.10.10.20, Dest = 10.10.10.27, Next Protocol = TCP, Packet ID = 748, Total IP Length = 387

+ Tcp: Flags=...AP..., SrcPort=49235, DstPort=Microsoft-DS(445), PayloadLen=347, Seq=2204023140 - 2204023487, Ack=820542776, Win=32625 (scale factor 0x2) = 130500

+ SMBOverTCP: Length = 343

- SMB2: C   SESSION SETUP (0x1)

    SMBIdentifier: SMB

  + SMB2Header: C SESSION SETUP (0x1),TID=0x0000, MID=0x0003, PID=0xFEFF, SID=0x0019

  - CSessionSetup:

     StructureSize: 25 (0x19)

     VcNumber: 0 (0x0)

   + SecurityMode: 1 (0x1)

   + Capabilities: 0x1

     Channel: 0 (0x0)

   SecurityBufferOffset: 88 (0x58)

     SecurityBufferLength: 255 (0xFF)

     PreviousSessionId: 0 (0x0)

   - securityBlob:

    - GSSAPI:

     - NegotiationToken:

      + ChoiceTag:

      - NegTokenResp:

       + SequenceHeader:

       + Tag0:

       + NegState: accept-incomplete (1)

       + Tag2:

       + OctetStringHeader:

       - ResponseToken: NTLM AUTHENTICATE MESSAGE, Version: v1, Domain: CORP01, User: Administrator, Workstation: CONTOSO-CLI-01

        - NLMP: NTLM AUTHENTICATE MESSAGE, Version: v1, Domain: CORP01, User: Administrator, Workstation: CONTOSO-CLI-01

           Signature: NTLMSSP

           MessageType: Authenticate Message (0x00000003)

         + LmChallengeResponseFields: Length: 24, Offset: 154

         + NtChallengeResponseFields: Length: 24, Offset: 178

         + DomainNameFields: Length: 12, Offset: 88

         + UserNameFields: Length: 26, Offset: 100

         + WorkstationFields: Length: 28, Offset: 126

         + EncryptedRandomSessionKeyFields: Length: 16, Offset: 202

         + NegotiateFlags: 0xE2888215 (NTLM v2, 128-bit encryption, Always Sign)

         + Version: Windows 6.1 Build 7601 NLMPv15

         + MessageIntegrityCheckNotPresent: 6243C42AF68F9DFE30BD31BFC722B4C0

           DomainNameString: CORP01

           UserNameString: Administrator

           WorkstationString: CONTOSO-CLI-01

         + LmChallengeResponseStruct: 3995E087245B6F7100000000000000000000000000000000

         + NTLMV1ChallengeResponse: B0751BDCB116BA5737A51962328D5CCD19EEBEBB15A69B1E

         + SessionKeyString: 397DACB158C9F10EF4903F10D4CBE032

       + Tag3:

       + OctetStringHeader:

       + MechListMic: Version: 1

The server then responds with successful negotiation state:

  Frame: Number = 18, Captured Frame Length = 159, MediaType = ETHERNET

+ Ethernet: Etype = Internet IP (IPv4),DestinationAddress:[00-15-5D-05-B4-49],SourceAddress:[00-15-5D-05-B4-44]

+ Ipv4: Src = 10.10.10.27, Dest = 10.10.10.20, Next Protocol = TCP, Packet ID = 24312, Total IP Length = 145

+ Tcp: Flags=...AP..., SrcPort=Microsoft-DS(445), DstPort=49235, PayloadLen=105, Seq=820542776 - 820542881, Ack=2204023487, Win=510 (scale factor 0x8) = 130560

+ SMBOverTCP: Length = 101

- SMB2: R   SESSION SETUP (0x1), SessionFlags=0x0

    SMBIdentifier: SMB

  + SMB2Header: R SESSION SETUP (0x1),TID=0x0000, MID=0x0003, PID=0xFEFF, SID=0x0019

  - RSessionSetup:

     StructureSize: 9 (0x9)

   + SessionFlags: 0x0

     SecurityBufferOffset: 72 (0x48)

     SecurityBufferLength: 29 (0x1D)

   - securityBlob:

    - GSSAPI:

     - NegotiationToken:

      + ChoiceTag:

      - NegTokenResp:

       + SequenceHeader:

       + Tag0:

       + NegState: accept-completed (0)

       + Tag3:

       + OctetStringHeader:

       + MechListMic: Version: 1

To contrast this, consider the challenge response packet when LMCompatibility is set to 4 or 5 on the client (meaning it is not allowed to send anything but NTLM V2). The LM response is null, while the NTLMv1 response isn't included at all.

  Frame: Number = 17, Captured Frame Length = 763, MediaType = ETHERNET

+ Ethernet: Etype = Internet IP (IPv4),DestinationAddress:[00-15-5D-05-B4-44],SourceAddress:[00-15-5D-05-B4-49]

+ Ipv4: Src = 10.10.10.20, Dest = 10.10.10.27, Next Protocol = TCP, Packet ID = 844, Total IP Length = 749

+ Tcp: Flags=...AP..., SrcPort=49231, DstPort=Microsoft-DS(445), PayloadLen=709, Seq=4045369997 - 4045370706, Ack=881301203, Win=32625 (scale factor 0x2) = 130500

+ SMBOverTCP: Length = 705

- SMB2: C   SESSION SETUP (0x1)

    SMBIdentifier: SMB

  + SMB2Header: C SESSION SETUP (0x1),TID=0x0000, MID=0x0003, PID=0xFEFF, SID=0x0021

  - CSessionSetup:

     StructureSize: 25 (0x19)

     VcNumber: 0 (0x0)

  + SecurityMode: 1 (0x1)

   + Capabilities: 0x1

     Channel: 0 (0x0)

     SecurityBufferOffset: 88 (0x58)

     SecurityBufferLength: 617 (0x269)

     PreviousSessionId: 0 (0x0)

   - securityBlob:

    - GSSAPI:

     - NegotiationToken:

      + ChoiceTag:

      - NegTokenResp:

       + SequenceHeader:

       + Tag0:

       + NegState: accept-incomplete (1)

       + Tag2:

       + OctetStringHeader:

       - ResponseToken: NTLM AUTHENTICATE MESSAGE, Version: v2, Domain: CORP01, User: Administrator, Workstation: CONTOSO-CLI-01

        - NLMP: NTLM AUTHENTICATE MESSAGE, Version: v2, Domain: CORP01, User: Administrator, Workstation: CONTOSO-CLI-01

           Signature: NTLMSSP

           MessageType: Authenticate Message (0x00000003)

         + LmChallengeResponseFields: Length: 24, Offset: 154

         + NtChallengeResponseFields: Length: 382, Offset: 178

         + DomainNameFields: Length: 12, Offset: 88

         + UserNameFields: Length: 26, Offset: 100

         + WorkstationFields: Length: 28, Offset: 126

         + EncryptedRandomSessionKeyFields: Length: 16, Offset: 560

         + NegotiateFlags: 0xE2888215 (NTLM v2, 128-bit encryption, Always Sign)

         + Version: Windows 6.1 Build 7601 NLMPv15

         + MessageIntegrityCheck: 2B69C069DD922D4A841D0EC43939DF0F

           DomainNameString: CORP01

           UserNameString: Administrator

           WorkstationString: CONTOSO-CLI-01

         + LmChallengeResponseStruct: 000000000000000000000000000000000000000000000000

         + NTLMV2ChallengeResponse: CD22D7CC09140E02C3D8A5AB623899A8

         + SessionKeyString: AF31EDFAAF8F38D1900D7FBBDCB43760

       + Tag3:

       + OctetStringHeader:

       + MechListMic: Version: 1

By taking traces and filtering on the NTLMV1ChallengeResponse field, you can find the hosts that are sending NTLMv1 responses and determine whether you need to upgrade them or whether they simply have the wrong LMCompatibility values set through security policy.

Finding LM usage via Netlogon debug logs

If you just want to detect LM authentication and are not looking to spend time in network captures, you can instead enable Netlogon logging on all DCs and servers in the environment.

Nltest /dbflag:2080ffff
net stop NetLogon
net start NetLogon

This creates netlogon.log in the C:\Windows\Debug folder, and it can grow to a maximum of 20MB by default. At that point, the server renames the file to netlogon.bak and starts a new netlogon.log. When that file again reaches 20MB, the server deletes netlogon.bak, renames netlogon.log to netlogon.bak, and starts a new netlogon.log. To make these log files larger, you can use a registry entry or group policy:

Registry

Path: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters
Value Name: MaximumLogFileSize
Value Type: REG_DWORD
Value Data: <maximum log file size in bytes>

Group Policy

\Computer Configuration\Administrative Templates\System\Net Logon\Maximum Log File Size
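For example, to raise the cap to roughly 100 MB via the registry (the value is in bytes; a sketch - Netlogon needs a restart to pick it up):

New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters' `
    -Name MaximumLogFileSize -PropertyType DWord -Value 100000000 -Force
Restart-Service Netlogon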

You aren't trying to capture all data here - just useful samples - but if they wrap so much that you're unsure if they are accurate at all, increasing size is a good idea. As an alternative, you can create a scheduled task that runs ONSTART or a computer startup script. Either of them can use this batch file to make backups of the netlogon log by date/time and the computer name:

REM Sample script to copy netlogon.bak to a netlogon_DATETIME_COMPUTERNAME.log backup every 5 minutes
REM (sleep.exe comes from the resource kit; on Vista and later you can substitute "timeout /t 300")

:start
if exist %windir%\debug\netlogon.bak goto copylog

:copylog_return
sleep 300
goto start

:copylog
for /f "tokens=1-7 delims=/:., " %%a in ("%DATE% %TIME%") do (set DATETIME=%%a-%%b-%%c_%%d-%%e-%%f)
copy /v %windir%\debug\netlogon.bak %windir%\debug\netlogon_%DATETIME%_%COMPUTERNAME%.log
if %ERRORLEVEL% EQU 0 del %windir%\debug\netlogon.bak
goto copylog_return

Periodically, gather all of the NetLogon logs from the DCs and servers and place them in a single folder. Once you have assembled the NetLogon logs into a single spot, you may then use the following LogParser command from that folder to parse them all for a count of unique UAS logons to the domain controller by workstation:

Logparser.exe "SELECT TO_UPPERCASE(EXTRACT_SUFFIX(TEXT,0,'returns ')) AS ERR, TO_UPPERCASE (extract_prefix(extract_suffix(TEXT, 0, 'NetrLogonUasLogon of '), 0, 'from ')) as USER, TO_UPPERCASE (extract_prefix(extract_suffix(TEXT, 0, 'from '), 0, 'returns ')) as WORKSTATION, COUNT(*) FROM '*netlogon.*' WHERE INDEX_OF(TO_UPPERCASE (TEXT),'LOGON') >0 AND INDEX_OF(TO_UPPERCASE(TEXT),'RETURNS') >0 AND INDEX_OF(TO_UPPERCASE(TEXT),'NETRLOGONUASLOGON') >0 GROUP BY ERR, USER, WORKSTATION ORDER BY COUNT(*) DESC" -i:TEXTLINE -rtp:-1 >UASLOGON_USER_BY_WORKSTATION.txt

UASLOGON_USER_BY_WORKSTATION.txt contains the unique computers and counts. LogParser is available for download from here.

FIND and PowerShell are options here as well. The simplest approach is just to return the lines, perhaps into a text file for later sorting in say, Excel (which is very fast at sorting and allows you to organize your data).

[screenshots: FIND and PowerShell one-liners returning the raw matching netlogon.log lines]
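If you'd rather stay in PowerShell, here's a very rough approximation (it assumes the "NetrLogonUasLogon of USER from WORKSTATION returns …" line format targeted by the LogParser query above, and it won't match LogParser's output exactly):

# Tally NetrLogonUasLogon lines per workstation across the gathered logs
Select-String -Path .\*netlogon*.log -Pattern 'NetrLogonUasLogon of (.+?) from (.+?) returns' |
    ForEach-Object { $_.Matches[0].Groups[2].Value } |
    Group-Object |
    Sort-Object Count -Descending |
    Format-Table Count, Name -AutoSize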

I'll wager someone in the comments will take on the rather boring challenge of exactly duplicating what LogParser does. I didn't have the energy this time around. :)

Final thoughts

Microsoft stopped using LM after Windows 95/98/ME. If you do find specific LM-only usage and you don't have any (unsupported) Win9X computers, this is a third party application. A really heinous one.

All supported versions of Windows obey the LMCompatibility registry setting, and can use NTLMv2 just as easily as NTLMv1. At that point, analyzing network traces just becomes useful for tracking down those hosts that have applied the policy, but have not yet been rebooted. Considering how unsafe LM and NTLMv1 are, enabling NoLMHash and LMCompatibility 4 or 5 on all computers may be a faster alternative to auditing. It could cause some temporary outages, but it would definitely catch anyone requiring unsafe protocols. There's no better auditing than a complaining application administrator.

Finally, do not limit your NTLM inventory to domain controllers and file or application servers. A comprehensive project requires you examine all computers in the environment, as even a Windows XP workstation can be a "server" for some application. Use a multi-pronged approach, where you also inventory operating systems through network probing - if you have Windows 95 or old SAMBA lying around somewhere on a shop floor, they are almost guaranteed to use insecure protocols.

Until next time,

- Ned “and Dave and Jonathan and Jonathan's in-home elderly care nurse” Pyle

Friday Mail Sack: Get Off My Lawn Edition


Hi folks, Ned here again. I know this is supposed to be the Friday Mail Sack but things got a little hectic and... ah heck, it doesn't need explaining, you're in IT. This week - with help from the ever-crotchety Jonathan Stephens - we talk about:

Now that Jonathan's Rascal Scooter has finished charging, on to the Q & A.

Question

We have a group policy linked to an OU that contains various computers, but it needs to apply only to our Windows 7 notebooks. All of our notebooks are named starting with an "N". Does group policy WMI filtering allow stacking conditions on the same group policy?

Answer

Yes, you can chain together multiple query criteria, and they can even be from different classes or namespaces. For example, here I use both the Win32_OperatingSystem and Win32_ComputerSystem classes:

[screenshot: a WMI filter combining Win32_OperatingSystem and Win32_ComputerSystem queries]

And here I use only the Win32_OperatingSystem class, with multiple filter criteria:

[screenshot: a WMI filter with multiple Win32_OperatingSystem criteria]

As long as they all evaluate TRUE, you get the policy. If you had a hundred of these criteria (please don’t) and 99 evaluate true but just one is false, the policy is skipped.

Note that my examples above would catch Win2008 R2 servers also; if you’ve read my previous posts, you know that you can also limit queries to client operating systems using the Win32_OperatingSystem property OperatingSystemSKU. Moreover, if you hadn’t used a predictable naming convention, you could also filter with Win32_SystemEnclosure and query the ChassisTypes property for 8, 9, or 10 (respectively: “Portable”, “Laptop”, and “Notebook”). And no, I do not know the difference between these; it is OEM-specific. Just like “pizza box” is for servers. You stay classy, WMI.
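An easy way to vet candidate criteria before building the WMI filter is to run the same WQL locally - if the query returns an object, the filter evaluates TRUE on that machine. A sketch (ProductType = 1 restricts Win32_OperatingSystem to client SKUs; the name pattern is just this questioner's convention):

# TRUE only on Windows 7-family clients (version 6.1, workstation product type)
Get-WmiObject -Query 'SELECT * FROM Win32_OperatingSystem WHERE Version LIKE "6.1%" AND ProductType = 1'

# TRUE only on machines whose names start with N
Get-WmiObject -Query 'SELECT * FROM Win32_ComputerSystem WHERE Name LIKE "N%"'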

Question

Is changing LDAP MaxPoolThreads a good or bad idea?

Answer

MaxPoolThreads controls the maximum number of simultaneous threads per-processor that a DC uses to work on LDAP requests. By default, it’s four per processor core. Increasing this value would allow a DC/GC to handle more LDAP requests. So if you have too many LDAP clients talking to too few DCs at once, raising this can reduce LDAP application timeouts and periodic “hangs”. As you might have guessed, the biggest complainer here is often MS Exchange and Outlook. If the performance counters “ATQ Threads LDAP" & "ATQ Threads Total" are constantly at the maximum number based on the number of processors and the MaxPoolThreads value, then you are bottlenecking LDAP.

However!

DCs are already optimized to quickly return data from LDAP requests. If your hardware is even vaguely new and you are not seeing actual issues, you should not increase this default value. MaxPoolThreads depends on non-paged pool memory, which on a Win2003 32-bit Windows OS is limited to 256MB (more on Win2008 32-bit). Meaning that if you still have not moved to at least x64 Windows Server 2003, don’t touch this value at all – you can easily hang your DCs. It also means you need to get with the times; we stopped making a 32-bit server OS nearly three years ago and OEMs stopped selling the hardware even before that. A 64-bit system's non-paged pool limit is 128GB.

In addition, changing the LDAP settings is often a Band-Aid that doesn’t address the real issue of DC capacity for your client/server base.  Use SPA or AD Data Collector sets to determine "Clients with the Most CPU Usage" under section "Ldap Requests”. Especially if the LDAP queries are not just frequent but also gross - there are also built-in diagnostics logs to find poorly-written requests:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Diagnostics\
15 Field Engineering

To categorize search operations as expensive or inefficient, two DWORD registry keys are used:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Parameters\
Expensive Search Results Threshold

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Parameters\
Inefficient Search Results Threshold

These DWORD registry keys have the following default values:

  • Expensive Search Results Threshold: 10000
  • Inefficient Search Results Threshold: 1000
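To turn that logging on, set 15 Field Engineering to 5; you can also lower the thresholds so more searches qualify as expensive or inefficient. Something like this sketch (the threshold numbers below are arbitrary illustrations, not recommendations):

New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\NTDS\Diagnostics' `
    -Name '15 Field Engineering' -PropertyType DWord -Value 5 -Force
New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\NTDS\Parameters' `
    -Name 'Expensive Search Results Threshold' -PropertyType DWord -Value 5000 -Force
New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\NTDS\Parameters' `
    -Name 'Inefficient Search Results Threshold' -PropertyType DWord -Value 500 -Force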

For example, here’s an inefficient result written in the DS event log; yuck, ick, argh!:

Event Type: Information
Event Source: NTDS General
Event Category: Field Engineering
Event ID: 1644
Description:
The Search operation based at RootDSE
using the filter:
& ( | ( & ( (objectCategory = <val>) (objectSid = *) ! ( (sAMAccountType | <bit_val>) ) ) & ( (objectCategory = <val>) ! ( (objectSid = *) ) ) & ( (objectCategory = <val>) (groupType | <bit_val>) ) ) (aNR = <substr>) <startSubstr>*) )

visited 40 entries and returned 0 entries.

Finally, this article should be required reading to any application developers in your company:

Creating More Efficient Microsoft Active Directory-Enabled Applications -
http://msdn.microsoft.com/en-us/library/windows/desktop/ms808539.aspx#efficientadapps_topic04

(The title should be altered to “Creating even slightly efficient…” in my experience).

Question

I want to implement many-to-one certificate mappings by using Issuer and Subject DN match. In altSecurityIdentities I put the following string:

X509:<I>DC=com,DC=contoso,CN=Contoso CA<S>DC=com,DC=contoso,CN=users,CN=user name

In a given example, a certificate with “cn=user name, cn=users, dc=contoso, dc=com” in the Subject field will be mapped to a user account, where I define the mappings. But in that example I get one-to-one mapping. Can I use wildcards here, say:

X509:<I>DC=com,DC=contoso,CN=Contoso CA<S>DC=com,DC=contoso,CN=users,CN=*

So that any certificate that contains “cn=<any value>, cn=users, dc=contoso, dc=com” will be mapped to the same user account?

Answer

[Sent from Jonathan while standing in the 4PM dinner line at Bob Evans]

Unfortunately, no. All that would do is map a certificate with a wildcard subject to that account. The only type of one-to-many mapping supported by the Active Directory mapper is configuring it to ignore the subject completely. Using this method, you can configure the AD mappings so that any certificate issued by a particular CA can be mapped to a single user account. See the following: http://technet.microsoft.com/en-us/library/bb742438.aspx#ECAA

Question

I've recently been working on extending my AD schema with a new back-linked attribute pair, and I used the instructions on this blog and MSDN to auto-generate the linkIDs for my new attributes. Confusingly, the resulting linkIDs are negative values (-912314983 and -912314984). The attributes and backlinks seem to work as expected, but when looking at the MSDN definition of the linkID attribute, it specifically states that the linkID should be a positive value. Do you know why I'm getting a negative value, and if I should be concerned?

Answer

[Sent from Jonathan’s favorite park bench where he feeds the pigeons]

The negative numbers are correct and expected, and are the result of a feature called AutoLinkID. Automatically generated linkIDs are in the range of 0xC0000000-0xFFFFFFFC (-1,073,741,824 to -4). This means that it is a good idea to use positive numbers if you are going to set the linkID manually. That way you are guaranteed not to conflict with automatically generated linkIDs.

The bottom line is, this is expected under the circumstances and you're all good.

Question

Is there any performance advantage to turning off the DFSR debug logging, lowering the number of logs, or moving the logs to another drive? You explained how to do this here in the DFSR debug series, but never mentioned it in your DFSR performance tuning article.

Answer

Yes, you will see some performance improvements turning off the logging or lowering the log count; naturally, all this logging isn’t free, it takes CPU and disk time. But before you run off to make changes, remember that if there are any problems, these logs are the only thing standing between you and the unemployment line. Your server will be much faster without any anti-virus software too, and your company’s profits higher without fire insurance; there are trade-offs in life. That’s why – after some brief agonizing, followed by heavy drinking – I decided not to include it in the performance article.

Moving the logs to another physical disk than Windows is safe and may take some pressure off the OS drive.
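If you do decide to tune it, the knobs live in the DfsrMachineConfig WMI class - something like the below (property names as I recall them from the debug series; run the get first to confirm what your build exposes):

wmic /namespace:\\root\microsoftdfs path dfsrmachineconfig get /format:list
wmic /namespace:\\root\microsoftdfs path dfsrmachineconfig set maxdebuglogfiles=10
wmic /namespace:\\root\microsoftdfs path dfsrmachineconfig set debuglogfilepath="d:\dfsrlogs"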

Question

When I try to join this Win2008 R2 computer to the domain, it gives an error I’ve never seen before:

"The following error occurred attempting to join the domain "contoso.com":
The request is not supported."

Answer

This server was once a domain controller. During demotion, something prevented the removal of the following registry value name:

HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\NTDS\Parameters\
DSA Database file

Delete that "Dsa Database File" value name and attempt to join the domain again. It should work this time. If you take a gander at the %systemroot%\debug\netsetup.log, you’ll see another clue that this is your issue:

NetpIsTargetImageADC: Determined this is a DC image as RegQueryValueExW loaded Services\NTDS\Parameters\DSA Database file: 0x0
NetpInitiateOfflineJoin: The image at C:\Windows\system32\config\SYSTEM is a DC: 0x32
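Checking for and removing the leftover value is a quick pair of commands (a sketch; export the key first if you want a safety net):

reg query "HKLM\SYSTEM\CurrentControlSet\Services\NTDS\Parameters" /v "DSA Database file"
reg delete "HKLM\SYSTEM\CurrentControlSet\Services\NTDS\Parameters" /v "DSA Database file" /f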

We started performing this check in Windows Server 2008 R2, as part of the offline domain join code changes. Hurray for unintended consequences!

Question

We have a largish AD LDS (ADAM) instance that we update daily by importing CSV files that delete all of yesterday’s user objects and import today’s. Since we don’t care about deleted objects, we reduced the tombstoneLifetime to 3 days. The NTDS.DIT usage, as shown by the 1646 garbage collection event, shows 1336MB free out of a total allocation of 1550MB – this would suggest that there is a total of 214MB of data in the database.

The problem is that Task Manager shows a total of 1,341,208K of Memory (Private Working Set) in use. The memory usage is reduced to around the 214MB size when LDS is restarted; however, when Garbage Collection runs the memory usage starts to climb. I have read many KB articles regarding GC but nothing explains what I am seeing here.

Answer

Generally speaking, LSASS (and DSAMAIN, its red-headed AD LDS cousin) is designed to allocate and retain more memory – especially ESE (aka “Jet”) cache memory – than ordinary processes, because LSASS/DSAMAIN are the core processes of a DC or AD LDS server. I would expect memory usage to grow heavily during the import, the deletions, and then garbage collection; unless something else put pressure on the machine for memory, I’d expect the memory usage to remain. That’s how well-written Jet database applications work – they don’t give back the memory unless someone asks, because LSASS and Jet can reuse it much faster when needed if it’s already loaded; why return memory if no one wants it? That would be a performance bug unto itself.

The way to show this in practical terms is to start some other high-memory process and validate that DSAMAIN starts to return the demanded memory. There are test applications like this on the internet, or you can install some app that likes to gobble a lot of RAM. Sometimes I’ll just install Wireshark and load a really big saved network capture – that will do it in a pinch. :-D You can also use the ESE performance counters under the “Database” and “Database ==> Instances” to see more about how much of the memory usage is Jet database cache size.

Regular DCs have this behavior too, as does DFSR and do other applications. You paid for all that memory; you might as well use it.

(Follow up from the customer where he provided a useful PowerShell “memory gobbler” example)

I ran the following Windows PowerShell script a few times to consume all available memory and the DSAMAIN process started releasing memory immediately as expected:

$chunk = "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
for ($i = 0; $i -lt 5000; $i++)

       $chunk += $chunk
}

Question

When I migrate users from Windows 7 to Windows 7 using USMT 4.0, their pinned and automatic taskbar jump lists are lost. Is this expected?

Answer

Yes. For those poor $#%^&#s readers still using XP, Windows 7 introduced application taskbar pinning and a special menu called a jump list:

[screenshot: a Windows 7 taskbar jump list]

Pinned and Recent jump lists are not migrated by USMT, because the built-in OS Shell32 manifest called by USMT (c:\windows\winsxs\manifests\*_microsoft-windows-shell32_31bf3856ad364e35_6.1.7601.17514_non_ca4f304d289b7800.manifest) contains this specific criterion:

<pattern type="File">%CSIDL_APPDATA%\Microsoft\Windows\Recent [*]</pattern>

Note how it is not Recent\* [*], which would grab the subfolder contents of Recent. It only copies the direct file contents of Recent. The pinned/automatic jump lists are stored in special files under the CustomDestinations and AutomaticDestinations folders inside the Recent folder. All the other contents of Recent are shortcut files to recently opened documents anywhere on the system:

[screenshot: the Recent folder with its CustomDestinations and AutomaticDestinations subfolders]

If you examine these special files, you'll see that they are binary, unreadable, and totally proprietary:

[screenshot: the binary, proprietary contents of a jump list file]

Since these files are binary and embed all their data in a big blob of goo, they cannot simply be copied safely between operating systems using USMT. The paths they reference could easily change in the meantime, or the data they reference could have been intentionally skipped. The only way this would work is if the Shell team extended their shell migration plugin code to handle it. Which would be a fair amount of work, and at the time these manifests were being written, customers were not going to be migrating from Win7 to Win7. So no joy. You could always try copying them with custom XML, but I have no idea if it would work at all and you’re on your own anyway – it’s not supported.

Question

We have a third party application that requires DES encryption for Kerberos. It wasn’t working from our Windows 7 clients though, so we enabled the security group policy “Network security: Configure encryption types allowed for Kerberos” to allow DES. After that though, these Windows 7 clients stopped working in many other operations, with event log errors like:

Event ID: 4
Source: Kerberos
Type: Error
"The kerberos client received a KRB_AP_ERR_MODIFIED error from the server host/myserver.contoso.com. This indicates that the password used to encrypt the kerberos service ticket is different than that on the target server. Commonly, this is due to identically named machine accounts in the target realm (domain.com), and the client realm. Please contact your system administrator."

And “The target principal name is incorrect” or “The target account name is incorrect” errors connecting to network resources.

Answer

When you enable DES on Windows 7, you need to ensure you are not accidentally disabling the other cipher suites. So don’t do this:

[screenshot: the Kerberos encryption types policy with only DES checked - wrong]

That means only DES is supported and you just disabled RC4, AES, etc.

Instead, do this:

[screenshot: the Kerberos encryption types policy with DES and all the other cipher suites checked - right]

If it exists at all and you want DES, you want this registry DWORD value to be 0x7fffffff on Windows 7 or Win2008 R2:

MACHINE\Software\Microsoft\Windows\CurrentVersion\Policies\System\Kerberos\Parameters\
SupportedEncryptionTypes

If it’s set to 0x3, all heck will break loose. This security policy interface is admittedly tiresome in that it has no “enabled/disabled” toggle. Use GPRESULT /H or /Z to see how it’s applying if you’re not sure about the actual settings.
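To inspect or correct the raw value directly, something like this works (a sketch; the policy above is the supported way to manage this, so treat direct edits as troubleshooting only):

$key = 'HKLM:\Software\Microsoft\Windows\CurrentVersion\Policies\System\Kerberos\Parameters'
# See what, if anything, is currently set
Get-ItemProperty -Path $key -Name SupportedEncryptionTypes -ErrorAction SilentlyContinue
# Create the key if needed and set DES plus all the other suites
if (-not (Test-Path $key)) { New-Item -Path $key -Force | Out-Null }
New-ItemProperty -Path $key -Name SupportedEncryptionTypes -PropertyType DWord -Value 0x7fffffff -Force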

Other Stuff

Windows 8 Consumer Preview releases February 29th, as if you didn’t already know it. Don’t ask me if this also means Windows Server 8 Beta the same exact day, I can’t say. But it definitely means the last 16 months of my life finally start showing some results. As will this blog…

Apparently we’ve been wrong about Han and Greedo since day one. I want to be wrong though. Thanks for passing this along Tony. And speaking of which, thanks to Ted O and the rest of the gang at LucasArts for the awesome tee!

This is a … creepily good music video? Definitely a nice find, Mark!


This is basically my home video collection

My new favorite site of the week? The Awesomer. Do not visit if you have to be somewhere in an hour.

Wait, no… my new favorite site is That’s Nerdaliscious. Do not read if hungry or dorky. 

Sick of everyone going on about Angry Birds? Love Chuck Norris? Go here now. There are a lot of these; don't miss Mortal Combat versus Donkey Kong.

Ah, there’s Waldo.

Likely the coolest advertisement for something that doesn’t yet exist that you will see this year.


I need to buy stock in SC Johnson. Can you imagine the Windex sales?!

Until next time.

- Ned “Generation X” Pyle with Jonathan “The Greatest Generation” Stephens

Saturday Mail Sack: Because it turns out, Friday night was alright for fighting edition


Hello all, Ned here again with our first mail sack in a couple months. I have enough content built up here that I actually created multiple posts, which means I can personally guarantee there will be another one next week. Unless there isn't!

Today we answer your questions around:

One side note: as I was groveling old responses, I came across a handful of emails I'd overlooked and never responded to; <insert various excuses here>. People who know me know that I don’t ignore email lightly. Even if I hadn't the foggiest idea how to help, I'd have at least responded with a "Duuuuuuuuuuurrrrrrrr, no clue, sorry".

Therefore, I'll make you a deal: if you sent us an email in the past few months and never heard back, please resend your question and I'll answer it as best I can. That way I don’t spend cycles answering something you already figured out later, but if you’re still stuck, you have another chance. Sorry about all that - what with Windows 8 work, writing our internal support engineer training, writing public content, Jonathan having some kind of south pacific death flu, and presenting at internal conferences… well, only the usual insane Microsoft Office clipart can sum up why we missed some of your questions:

[the usual insane Microsoft Office clipart]

On to the goods!

Question

Is it possible to create a WMI Filter that detects only virtual machines? We want a group policy that will apply specifically to our virtualized guests.

Answer

Totally possible for Hyper-V virtual machines: You can use the WMI class Win32_ComputerSystem with a property of Model like “Virtual Machine” and property Manufacturer of “Microsoft Corporation”. You can also use class Win32_BaseBoard for the Product property, which will be “Virtual Machine” and property Manufacturer that will be “Microsoft Corporation”.

[screenshot: a WMI filter detecting Hyper-V virtual machines]

Technically speaking, this might also capture Virtual PC machines, but I don’t have one handy to see, and I doubt you are allowing those to handle production workloads anyway. As for EMC VMWare, Citrix Xen, KVM, Oracle Virtual Box, etc. you’ll have to see what shows for Win32_BaseBoard/Win32_ComputerSystem in those cases and make sure your WMI filter looks for that too. I don’t have any way to test them, and even if I did, I'd still make you do it out of spite. Gimme money!

Which reminds me - Tad is back:

[image: Tad]

Question

The Understand and Troubleshoot AD DS Simplified Administration in Windows Server "8" Beta guide states:

Microsoft recommends that all domain controllers provide DNS and GC services for high availability in distributed environments; these options default to on when installing a domain controller in any mode or domain.

But when I run Install-ADDSDomainController -DomainName corp.contoso.com -whatif it returns that the cmdlet will not install the DNS Server (DNS Server: No).

If Microsoft recommends that all domain controllers provide DNS, why do I need to specify the -InstallDNS argument?

Answer

The output of DNS Server: No is a cosmetic issue with the output of -whatif. It should say YES, but doesn't unless you specifically use the $true parameter. You don't have to specify -installdns; the cmdlet will automatically* install DNS server unless you specify -installdns:$false.

* If you are using Windows DNS on domain controllers, that is. The UTG isn't totally accurate in this version (but will be in the next). The logic is that if that domain already hosts the DNS, all subsequent DCs will also host the DNS by default. So to be very specific:

1. New forest: always install DNS
2. New child or new tree domain: if the parent/tree domain hosts DNS, install DNS
3. Replica: if the current domain hosts DNS, install DNS
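So if you want the -whatif output to reflect reality, state your intent explicitly - for example (a sketch; the domain name and credentials are placeholders):

Install-ADDSDomainController -DomainName corp.contoso.com -InstallDns -Credential (Get-Credential CORP\admin) -WhatIf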

Question

How can I disable a user on all domain controllers, without waiting for (or forcing) AD replication?

Answer

The universal in-box way that works in all operating systems would be to use DSMOD.EXE USER and feed it the DC names in a list. For example:

1. Create a text file that contains all your DC in a forest, in a line-separated list:

2008r2-01
2008r2-02

2. Run a FOR loop command to read that list and disable the specified user against each domain controller.

FOR /f %i IN (some text file) DO dsmod user "some DN" -disabled -yes -s %i

For instance:

[screenshot: the DSMOD FOR loop in action]

You also have the AD PowerShell option in your Win2008 R2 DC environment, and it’s much easier to automate and maintain. You just tell it the domain controllers' OU and the user and let it rip:

get-adcomputer -searchbase "your DC OU" -filter * | foreach {disable-adaccount "user logon ID" -server $_.dnshostname}

For instance:

[screenshot: the AD PowerShell one-liner in action]

If you weren't strictly opposed to AD replication (short-circuiting it like this isn't going to stop eventual replication traffic), you can always disable the user on one DC and then force just that single object to replicate to all the other DCs. Check out repadmin /replsingleobj or the new Windows Server "8" Beta sync-adobject cmdlet.

[screenshot: repadmin /replsingleobj replicating a single object]
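By way of example, disabling on one DC and then pushing just that object everywhere might look like this (a sketch; the DC name, user, and DN are all made up):

Disable-ADAccount -Identity kim.akers -Server DC01
$dn = (Get-ADUser kim.akers -Server DC01).DistinguishedName
# Pull the change to every other DC, one object at a time
Get-ADDomainController -Filter * | Where-Object { $_.Name -ne 'DC01' } | ForEach-Object {
    repadmin /replsingleobj $_.Name DC01 $dn
}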

 The Internet also has many further thoughts on this. It's a very opinionated place.

Question

We have found that modifying the security on a DFSR replicated folder and its contents causes a big DFSR replication backlog. We need to make these permissions changes though; is there any way to avoid that backlog?

Answer

Not the way you are doing it. DFSR has to replicate changes and you are changing every single file; after all, how can you trust a replication system that does not replicate? You could consider changing permissions "from the bottom up" - where you modify perms on lower level folders first - in some sort of staged fashion to minimize the amount of replication that has to occur, but it just sounds like a recipe to get things wrong or end up replicating things twice, making it worse. You will just have to bite the bullet in Windows Server 2008 R2 and older DFSR. Do it on a weekend and next time, treat this as a lesson learned and plan your security design better so that all of your user base fits into the model using groups.

However…

It is a completely different story if you switch to Windows Server "8" Beta - well really, the RTM version when it ships. There you can use Central Access Policies (similar to Windows Server 2008 R2's global object access auditing). This new kind of security system is part of the Dynamic Access Control feature and abstracts the user access from NTFS, meaning you can change security using claims policy and not actually change the files on the disk (under some but not all circumstances - more on this when I write a proper post after RTM). It's amazing stuff; in my opinion, DAC is the first truly huge change in Windows file access control since Windows NT gave us NTFS.

[screenshot: Central Access Policy in Windows Server "8" Beta]

Central Access Policy is not a trivial thing to implement, but this is the future of file servers. Admins should seriously evaluate this feature when testing Windows Server "8" Beta in their lab environments and thinking about future designs. Our very own Mike Stephens has written at length about this in the Understand and Troubleshoot Dynamic Access Control in Windows Server "8" Beta guide as well.

Question

[Perhaps interestingly to you the reader, this was my question to the developers of AD PowerShell. I don’t know everything after all… - Ned]

I am periodically seeing error "invalid enumeration context" when querying the Redmond domain using get-adcomputer. It’s a simple query to return all the active Windows 8 and Windows Server "8" computers that were logged into since February 15th and write them to a CSV file:

[screenshot: the Get-ADComputer query and the "invalid enumeration context" error]

It runs for quite a while and sometimes works, sometimes fails. I don’t find any well-explained reference to what this error means or how to avoid it, but it smells like a “too much data asked for over too long a period of time” kind of issue.

Answer

The enumeration contexts do have a finite, hardcoded lifetime, and you will get an error if they expire. You might see this error when executing searches that chew through a huge quantity of data using limited indexed attributes and return a small result set. If you hit a DC that is not very busy, the query runs faster and may have enough time to complete for a big dataset like this one. Server hardware is also a factor here. You can try starting the search at a deeper level, or tweak the indexes - although obviously not in this case.
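For example, one way to split one giant query into several smaller ones by OU instead of hitting the whole domain at once (a sketch - the OU names and filter are illustrative, not the real Redmond query):

$cutoff = Get-Date '2012-02-15'
$bases = 'OU=Clients,DC=corp,DC=contoso,DC=com',
         'OU=Labs,DC=corp,DC=contoso,DC=com'
$results = foreach ($base in $bases) {
    # Each smaller query is far less likely to outlive its enumeration context
    Get-ADComputer -SearchBase $base -ResultPageSize 500 `
        -Filter { (OperatingSystem -like '*Windows 8*') -and (LastLogonDate -gt $cutoff) } `
        -Properties OperatingSystem, LastLogonDate
}
$results | Export-Csv .\win8machines.csv -NoTypeInformation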

[For those interested, when the query worked, it returned roughly 75,000 active Windows 8 family machines from that domain alone. Microsoft dogfoods in production like nobody else, baby - Ned]

Question

Is there any chance that DFSR could lock a file while it is replicating outbound and prevent user access to their data?

Answer

DFSR uses the BackupRead() function when copying a file into the staging folder (i.e. any file over 64KB, by default), so that should prevent any “file in use” issues with applications or users; the file "copying" to the staging folder is effectively instantaneous and non-exclusive. Once staged and marshaled, the copy of the file is replicated and no user has any access to that version of the file.

For a file under 64KB, it is simply replicated without staging and that operation of making a copy and sending it into RPC is so fast there’s no reasonable way for anyone to ever see any issues there. I have certainly never seen it, for sure, and I should have by now after six years.

Question

Why does TechNet state that USMT 4.0 offline migrations don’t work for certain OS settings? How do I figure out the complete list?

Answer

Manifests that use migration plugin DLLs aren’t processed when running offline migrations. It's just a by-design limitation of USMT, not a bug or anything. To see which manifests you need to examine and consider creating custom XML to handle, review the complete list at Understanding what the USMT 4.0 CONFIG manifests migrate (Part 1: Introduction).

Question

One of my customers has found that the "Everyone" group is added to the below folders in Windows 2003 and Windows 2008:

Windows Server 2008

C:\ProgramData\Microsoft\Crypto\DSS\MachineKeys

C:\ProgramData\Microsoft\Crypto\RSA\MachineKeys

Windows Server 2003

C:\Documents and Settings\All Users\Application Data\Microsoft\Crypto\DSS\MachineKeys

C:\Documents and Settings\All Users\Application Data\Microsoft\Crypto\RSA\MachineKeys

1. Can we remove the "Everyone" group and give permissions to another group like - Authenticated users for example?

2. Will replacing that default cause issues?

3. Why is this set like this by default?

Answer

[Courtesy of: photo]

These permissions are intentional. They are intended to allow any process to generate a new private key, even an Anonymous one. You'll note that the permissions on the MachineKeys folder are limited to the folder only. Also, you should note that inheritance has been disabled, so the permissions on the MachineKeys folder will not propagate to new files created therein. Finally, the key generation code itself modifies the permissions on new key container files before the private key is actually written to the container file.

In short, messing with these permissions will probably lead to failures in creating or accessing keys belonging to the computer. So please don't touch them.

1. Replacing Everyone with Authenticated Users probably won't cause any problems. Microsoft, however, doesn't test cryptographic operations after such a permission change; therefore, we cannot predict what will happen in all cases.

2. See my answer above. We haven't tested it. We have, however, been performing periodic security reviews of the default Windows system permissions, tightening them where possible, for the last decade. The default Everyone permissions on the MachineKeys folder have cleared several of these reviews.

3. In local operations, Everyone includes unidentified or anonymous users. The theory is that we always want to allow a process to generate a private key. When the key container is actually created and the key written to it, the permissions on the key container file are updated with a completely different set of default permissions. All the default permissions allow are the ability to create a file, read and write data. The permissions do not allow any process except System to launch any executable code.
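If you want to audit these permissions without modifying anything, you can dump the ACL from an elevated prompt and compare it against the defaults described above (read-only; path per the list above):

icacls "C:\ProgramData\Microsoft\Crypto\RSA\MachineKeys"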

Question

If I specify a USMT 4.0 config.xml child node to prevent migration, I still see the settings migrate. But if I set the parent node, those settings do not migrate. The consequence is that no child nodes migrate, which I do not want.

For example, on XP the Dot3Svc service is set to Manual startup.  On Win7, I want the Dot3Svc service set to Automatic startup.  If I use this config.xml on the loadstate, the service is set to manual like the XP machine and my "no" setting is ignored:

<componentdisplayname="Networking Connections"migrate="yes"ID="network_and_internet\networking_connections">

  <componentdisplayname="Microsoft-Windows-Wlansvc"migrate="yes"ID="<snip>"/>

  <componentdisplayname="Microsoft-Windows-VWiFi"migrate="yes"ID="<snip>"/>

  <componentdisplayname="Microsoft-Windows-RasConnectionManager"migrate="yes"ID="<snip>"/>

  <componentdisplayname="Microsoft-Windows-RasApi"migrate="yes"ID="<snip>"/>

  <componentdisplayname="Microsoft-Windows-PeerToPeerCollab"migrate="yes"ID="<snip>"/>

  <componentdisplayname="Microsoft-Windows-Native-80211"migrate="yes"ID="<snip>"/>

  <componentdisplayname="Microsoft-Windows-MPR"migrate="yes"ID="<snip>"/>

  <componentdisplayname="Microsoft-Windows-Dot3svc"migrate="no"ID="<snip>"/>

</component>

Answer

Two different configurations can cause this symptom:

1. You are using a config.xml file created on Windows 7, then running it on a Windows XP computer with scanstate /config

2. The source computer was Windows XP and it did not have a config.xml file set to block migration.

When coming from XP, where downlevel manifests were used, loadstate does not process those differently-named child nodes on the destination Win7 computer. So while the parent node set to NO would work, the child nodes would not, as they have different displayname and ID.

It’s a best practice to use a config.xml with scanstate as described in http://support.microsoft.com/kb/2481190 when going from x86 to x64; otherwise, you end up with damaged COM settings. Beyond that, you only need to generate per-OS config.xml files if you plan to change default behavior: all the manifests run by default if there is an unmodified config.xml, or no config.xml at all.

Besides it being required on XP to block settings, you should also lean towards using the config.xml on the scanstate rather than the loadstate. For Vista to Vista, Vista to 7, or 7 to 7, you could use the config.xml on either side, but I’d still recommend sticking with scanstate; it’s typically better to block migration from adding things to the store in the first place, as the migration will be faster and leaner.
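For reference, a hedged sketch of that flow – the store path is hypothetical, and /genconfig must run on the source OS to capture its manifests:

scanstate /genconfig:config.xml /i:migdocs.xml /i:migapp.xml
scanstate \\server\migstore\%computername% /config:config.xml /i:migdocs.xml /i:migapp.xml /o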

Other Stuff

[Many courtesy of our pal Mark Morowczynski -Ned]

Happy belated 175th birthday Chicago. Here's a list of things you can thank us for, planet Earth; where would you be without your precious Twinkies!?

Speaking of Chicago…

All the new MCSE and certification news reminded me of the other side to that coin.

Do you know where your nearest gun store is located? Map of the Dead does. Review now; it will be too late when the zombies rise from their graves, and I don't plan to share my bunker, Jim.

image

If you call yourself an IT Pro, you owe it to yourself to visit moviecarposters.com right now and buy… everything. They make great alpha geek conversation pieces. To get things started, I recommend these:

clip_image002[6]clip_image004clip_image006
Sigh - there is never going to be another Firefly

And finally…

I started re-reading Terry Pratchett, picking up where from where I left off as a kid. Hooked again. Damn you English writers, with your understated awesomeness!

Ok, maybe not all English Writers…

image

Until next time,

- Ned "Jonathan is seriously going to kill me" Pyle

Friday Mail Sack: Mothers day pfffft… when is son’s day?


Hi folks, Ned here again. It’s been a little while since the last sack, but I have a good excuse: I just finished writing a poop ton of Windows Server 2012 depth training that our support folks around the world will use to make your lives easier (someday). If I ever open MS Word again it will be too soon, and I’ll probably say the same thing about PowerPoint by June.

Anyhoo, let’s get to it. This week we talk about:

Question

Is it possible to use ActiveDirectory module cmdlets through Invoke-Command against a remote non-DC Windows Server 2012 computer where the module is installed? It always blows up for me, as it tries to “locally” (remotely) use the non-existent ADWS with the error “Unable to contact the server. This may be because the server does not exist, it is currently down, or it does not have the active directory web services running”

image

Answer

Yes, but you have to ignore that terribly misleading error and put your thinking cap on: the problem is your credentials. When you invoke-command, you make the remote server run the local PowerShell on your behalf. In this case that remote command has to go off-box to yet another remote server – a DC running ADWS. This means a multi-hop credential scenario. Provide –credential (get-credential) to your called cmdlets inside the curly braces and it’ll work fine.
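A minimal sketch, assuming MEMBER01 is the remote server with the module installed (the server and account names are hypothetical):

$cred = Get-Credential CONTOSO\admin
Invoke-Command -ComputerName MEMBER01 -ScriptBlock {
    param($c)
    Import-Module ActiveDirectory
    # The explicit credential lets the second hop to a DC running ADWS succeed.
    Get-ADUser -Filter 'Name -like "jonathan*"' -Credential $c
} -ArgumentList $cred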

Question

We are using a USMT /hardlink migration to preserve disk space and increase performance. However, performance is crazy slow and we’re actually running out of disk space on some machines that have very large files like PSTs. My scanstate log shows:

Error [0x000000] Write error 112 for C:\users\ned\Desktop [somebig.pst]. Windows error 112 description: There is not enough space on the disk.[gle=0x00000070]

Error [0x080000] Error 2147942512 while gathering object C:\users\ned\Desktop\somebig.pst. Shell application requested abort![gle=0x00000070]

Answer

These files are encrypted and you are using /efs:copyraw instead of /efs:hardlink. Encrypted files are copied whole into the store instead of being hardlink’ed, unless you specify /efs:hardlink. If you had not included /efs at all, this file would have failed with, "File X is encrypted. Use the /efs option to specify a different way to handle this file".

Yes, I realize that we should probably just require that option. But think of all the billable hours we just gave you!
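For reference, a hedged example of the corrected command line – your store path and XML files will differ, and note that /hardlink requires /nocompress:

scanstate C:\MigStore /hardlink /nocompress /efs:hardlink /i:migdocs.xml /i:migapp.xml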

Question

I was using your DFSR pre-seeding post and am finding that robocopy /B slows down my migration compared to not using it. Is it required for pre-seeding?

Answer

The /B mode, while inherently slower, ensures that files are copied using a backup API regardless of permissions. It is the safest way, so I took the prudent route when I wrote the sample command. It is definitely expected to be slower – in my semi-scientific repros the difference was ~1.75 times slower on average.

However, /B is not required if you are 100% sure you have at least READ permissions to all files. The downside is that a lot of failures due to permissions might end up making things even slower than just going with /B; you will have to test it.

If you are using Windows Server 2012 and have plenty of hardware to back it up, you can use the following options that really make the robocopy fly, at the cost of memory, CPU, and network utilization (and possibly, some files not copying at all):

Robocopy <foo> <bar> /e /j /copyall /xd dfsrprivate /log:<sna.foo> /tee /mt:128 /r:1

For those that have used this before, it will look pretty similar – but note:

  • Adds the /J option (first introduced in the Windows 8 robocopy) – it performs unbuffered IO, which means gigantic files like ISOs and VHDs really fly and a 1Gbps network is finally heavily utilized. Adds significant memory overhead, naturally.
  • Adds /MT:128 to use 128 simultaneous file copy threads. Adds CPU overhead, naturally.
  • Removes /B and drops the retry count to /R:1 in order to guarantee the fastest copy method. Make sure you review the log and recopy any failures individually, as you are now skipping any files that failed to copy on the first try.

 

Question

Recently I came across a user account that keeps locking out (yes, I've read several of your blogs where you say account lockout policies are bad: "Turning on account lockouts is a way to guarantee someone with no credentials can deny service to your entire domain"). We get the Event ID 4740 saying the account has been locked out, but the calling computer name is blank:

 

Log Name: Security
Event ID: 4740
Level: Information
Description:
A user account was locked out.

Subject:
   Security ID: SYSTEM
   Account Name: someaccount
   Account Domain: somedomain
   Logon ID: 0x3e7

Account That Was Locked Out:
   Security ID: somesid
   Account Name: someguy

Additional Information:
   Caller Computer Name:

 

The 0xC000006A status indicates a bad password attempt. This happens every 5 minutes and eventually results in the account being locked out. We can see that the bad password attempts are coming via COMP1 (which is a proxy server), but we can't work out what is sending the requests to COMP1, as the Caller Computer Name is blank there as well (there should be a computer name).

Are we missing something here? Is there something else we could be doing to track this down? Is the blank calling computer name indicative of some other problem, or does it perhaps mean the calling device is a non-Microsoft device?

Answer

(I am going to channel my inner Eric here):

A blank computer name is not unexpected, unfortunately. The audit system relies on the sending computers to provide that information as part of the actual authentication attempt. Kerberos does not have a reliable way to provide the remote computer info in many cases. Name resolution info about a sending computer is also easily spoofed. This is especially true with transitive NTLM logons, where we are relying on one computer to provide info for another computer. NTLM provides names but they are also easily spoofed so even when you see a computer name in auditing, you are mainly asking an honest person to tell you the truth.

Since it happens very frequently and predictably, I’d configure a network capture on the sending server to run in a circular fashion, then stop the trace when the lockout occurs. You’d see all of the traffic and know exactly who sent it. If the lockouts were longer running and less predictable, I’d still use a circular capture, letting it run until that 4740 event writes; then you can see what the sending IP address is and hunt down that machine. Different techniques here:
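One hedged way to get that circular capture with in-box tools (available on Windows 7 / Windows Server 2008 R2 and later; the path and size are arbitrary):

netsh trace start capture=yes tracefile=C:\traces\lockout.etl filemode=circular maxsize=512

Wait for the next 4740 to write, then:

netsh trace stop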

[And the customer later noted that since it’s a proxy server, it has lots of logs – and they told him the offender]

Question

I am testing USMT 5.0 and finding that if I migrate certain Windows 7 computers to Windows 8 Consumer Preview, Modern Apps won’t start. Some have errors, some just start then go away.

Answer

Argh. The problem here is Windows 7’s built-in manifest that implements microsoft-windows-com-base, which then copies this registry key:

HKEY_LOCAL_MACHINE\Software\Microsoft\OLE

If the DCOM permissions are modified in that key, they migrate over and interfere with the ones needed by Modern Apps to run. This is a known issue, already fixed so that we don’t copy those values onto Windows 8 anymore. It was never a good idea in the first place, as any application needing special permissions will just set its own anyway when installed.

And it’s burned us in the past too…

Question

Are there any available PowerShell, WMI, or command-line options for configuring an OCSP responder? I know that I can install the feature with the Add-WindowsFeature, but I'd like to script configuring the responder and creating the array.

Answer

[Courtesy of the Jonathan “oh no, feet!” Stephens– Ned]

There are currently no command line tools or dedicated PowerShell cmdlets available to perform management tasks on the Online Responder. You can, however, use the COM interfaces IOCSPAdmin and IOCSPCAConfiguration to manage the revocation providers on the Online Responder.

  1. Create an IOCSPAdmin object.
  2. The IOCSPAdmin::OCSPCAConfigurationCollection property will return an IOCSPCAConfigurationCollection object.
  3. Use IOCSPCAConfigurationCollection::CreateCAConfiguration to create a new revocation provider.
  4. Make sure you call IOCSPAdmin::SetConfiguration when finished so the Online Responder gets updated with the new revocation configuration.

Because these are COM interfaces, you can call them from VBScript or PowerShell, so you have great flexibility in how you write your script.
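As a very rough PowerShell sketch – the ProgID and the exact method signatures below are my assumptions based on the IOCSPAdmin documentation, so verify them against the Windows SDK before relying on this:

# Assumed ProgID for the Online Responder admin COM object.
$ocsp = New-Object -ComObject CertAdm.OCSPAdmin
$ocsp.GetConfiguration("responder01.contoso.com", $true)    # hypothetical responder name
$configs = $ocsp.OCSPCAConfigurationCollection
$caCertBytes = Get-Content .\issuingca.cer -Encoding Byte   # DER-encoded CA certificate
$configs.CreateCAConfiguration("ContosoCA Config", $caCertBytes)
$ocsp.SetConfiguration("responder01.contoso.com", $true)    # commit the new configuration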

Question

I want to use Windows Desktop Search with DFS Namespaces but according to this TechNet Forum thread it’s not possible to add remote indexes on namespaces. What say you?

Answer

There is no DFSN+WDS remote index integration in any OS, including Windows 8 Consumer Preview. At its heart, this comes down to being a massive architectural change in WDS that just hasn’t gotten traction. You can still point to the targets as remote indexes, naturally.

Question

Certain files that end with invalid characters like a dot or a space – as pointed out here by AlexSemi – break USMT migration. One way to create these files is to use the echo command with a device path, like so:

image
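For reference, in case the screenshot doesn't render for you, the trick looks something like the line below – the \\?\ prefix bypasses Win32 path normalization, so the trailing dot survives (the path is just an example):

echo hello > \\?\C:\temp\badfile.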

These files can’t be opened by anything in Windows, it seems.

image

When you try to migrate, you end up with a fatal “Windows error 2” (“the system cannot find the file specified”) error unless you skip the files using /C:

image

What gives?

Answer

Quit making invalid files! :-)

USMT didn’t invent CreateFile() so its options here are rather limited… USMT 5.0 handles this case correctly through error control - it skips these files when hardlink’ing because Windows returns that they “don’t exist”. Here is my scanstate log using USMT 5.0 beta, where I used /hardlink and did NOT provide /C:

image

In the case of non-hardlink, scanstate copies them without their invalid names and they become non-dotted/non-spaced valid files (even in USMT 4.0). To make it copy these invalid files with the actual invalid name would require a complete re-architecting of USMT or the Win32 file APIs. And why – so that everyone could continue to not open them?

Other Stuff

In case you missed it, Windows 8 Enterprise Edition details. With all the new licensing and activation goodness, Enterprise versions are finally within reach of any size customer. Yes, that means you!

Very solid Mother’s Day TV mash-up (a little sweary, but you can’t fight something that combines The Wire, 30 Rock, and The Cosbys)

Zombie mall experience. I have to fly to Reading in June to teach… this might be on the agenda

Well, it’s about time - Congress doesn't "like" it when employers ask for Facebook login details

Your mother is not this awesome:

image
That, my friend, is a Skyrim birthday cake

SportsCenter wins again (thanks Mark!)

Don’t miss the latest Between Two Ferns (veeerrrry sweary, but Zach Galifianakis at his best; I just wish they’d add the Tina Fey episode)

But what happens if you eat it before you read the survival tips, Land Rover?!

 

Until next time,

- Ned “demon spawn” Pyle

The Mouse Will Play


Hey all, Ned here. Mike and I start teaching Windows Server 2012 and Windows 8 DS internals this month in the US and UK and won’t be back until July. Until then, Jonathan is – I can’t believe I’m saying this – in charge of AskDS. He’ll field your questions and publish… stuff. We’ll make sure he takes his medication before replying.

If you’re in Reading, England June 10-22, first round is on me.

image
I didn’t say what the first round was though.

Ned “crikey” Pyle

Important Information about Remote Desktop Licensing and Security Advisory 2718704


Hi folks, Jonathan here. Dave and I wanted to share some important information with you.

By now you’ve all been made aware of the Microsoft Security Advisory that was published this past Sunday.  If you are a Terminal Services or Remote Desktop Services administrator then we have some information of which you should be aware.  These are just some extra administrative steps you’ll need to follow the next time you have to obtain license key packs, transfer license key packs, or any other task that requires your Windows Server license information to be processed by the Microsoft Product Activation Clearinghouse.  Since there’s a high probability that you’ll have to do that at some point in the future we’re doing our part to help spread the word.  Our colleagues over at the Remote Desktop Services (Terminal Services) Team blog have posted all the pertinent information. Take a look.

Follow-up to Microsoft Security Advisory 2718704: Why and How to Reactivate License Servers in Terminal Services and Remote Desktop Services

If you have any questions, feel free to post them over in the Remote Desktop Services forum.

Jonathan Stephens

RSA Key Blocking is Here!


Hello everyone. Jonathan here again with another Public Service Announcement post.

Today, Microsoft has published a new Security Advisory:

Microsoft Security Advisory (2661254): Update For Minimum Certificate Key Length

The Security Advisory and the accompanying KB article have complete information about the software update, but the key takeaway is that this update is now available on the Download Center and the Microsoft Update Catalog. In addition, Microsoft will release this software update through Microsoft Update (aka Windows Update) in October 2012. So all of you enterprise customers have two months to start testing this update to see what impact it has in your environments.

If you want information on finding weak keys in your environment, review the KB article; it describes several methods you can use. Microsoft Support has also created a PowerShell script that has been posted to the TechNet Script Center.
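If you just want a quick, hedged spot-check of the local machine store before running the full script (RSA/DSA certificates only; CNG-only keys may not populate the .Key property):

Get-ChildItem Cert:\LocalMachine\My |
    Where-Object { $_.PublicKey.Key.KeySize -lt 1024 } |
    Select-Object Subject, NotAfter, @{n='KeySize';e={$_.PublicKey.Key.KeySize}}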

Finally, I have one last warning for those of you that use makecert.exe to create test certificates. By default, makecert.exe creates certificates that chain up to the Root Agency root CA certificate located in the Intermediate Certification Authorities store. The Root Agency CA certificate has a 512-bit public key, so once you deploy this update, no certificate created with makecert.exe will be considered valid.

You should now consider makecert.exe deprecated. As a replacement, starting with Windows 7 / Windows Server 2008 R2, you can use certreq.exe to create a self-signed certificate. For example, to create a self-signed code signing certificate you can create the following .INF file:

[NewRequest]
Subject = "CN=Self Signed Cert"
KeyLength = 2048
ProviderName = "Microsoft Enhanced Cryptographic Provider v1.0"
KeySpec = "AT_SIGNATURE"
KeyUsage = "CERT_DIGITAL_SIGNATURE_KEY_USAGE"
RequestType = Cert
SMIME = False
ValidityPeriod = Years
ValidityPeriodUnits = 2

[EnhancedKeyUsageExtension]
OID = 1.3.6.1.5.5.7.3.3

The important line above is the RequestType value. That tells certreq.exe to create a self-signed certificate. Along with that value, the ValidityPeriod and ValidityPeriodUnits values allow you to specify the lifetime of the self-signed certificate.

Once you create the .INF file, run the following command:

certreq -new selfsigned.inf selfsigned.crt

This will take your .INF file and generate a new self-signed certificate that you can use for testing.
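As an aside, on Windows 8 / Windows Server 2012 and later the PKI module offers New-SelfSignedCertificate as another replacement, though its early versions expose far fewer options than the .INF approach above (check Get-Help on your build before relying on it):

New-SelfSignedCertificate -DnsName "test.contoso.com" -CertStoreLocation Cert:\LocalMachine\My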

Ok, so this was supposed to be a short post pointing to where you need to go, but it turns out that I had some other related stuff. The important message here is go read the Security Advisory and the KB article.

Go read the Security Advisory and the KB article.

Ex pace.

Jonathan “I am the Key Master” Stephens


....And knowing is half the battle!

Revenge of Y2K and Other News


Hello sports fans!

So this has been a bit of a hectic time for us, as I'm sure you can imagine. Here's just some of the things that have been going on around here.

Last week, thanks to a failure on the time servers at USNO.NAVY.MIL, many customers experienced a time rollback to CY 2000 on their Active Directory domain controllers. Our team worked closely with the folks over at Premier Field Engineering to explain the problem, document resolutions for the various issues that might arise, and describe how to inoculate your DCs against a similar problem in the future. If you were affected by this problem then you need to read this post. If you weren't affected, and want to know why, then you need to read this post. Basically, we think you need to read this post. So...here's the link to the AskPFEPlat blog.

In other news, Ned Pyle has successfully infiltrated the Product Group and has started blogging on The Storage Team blog. His first post is up, and I'm sure there will be many more to follow. If you've missed Ned's rare blend of technical savvy and sausage-like prose, and you have an interest in Microsoft's DFSR and other storage technologies, then go check him out.

Finally...you've probably noticed the lack of activity here on the AskDS blog. Truthfully, that's been the result of a confluence of events -- Ned's departure, the Holiday season here in the US, and the intense interest in Windows 8 and Windows Server 2012 (and subsequent support calls). Never fear, however! I'm pleased to say that your questions to the blog have been coming in quite steadily, so this week I'll be posting an omnibus edition of the Mail Sack. We also have one or two more posts that will go up between now and the end of the year, so there's that to look forward to. Starting with the new calendar year, we'll get back to a semi-regular posting schedule as we get settled and build our queue of posts back up.

In the mean time, if you have questions about anything you see on the blog, don't hesitate to contact us.

Jonathan "time to make the donuts" Stephens

Intermittent Mail Sack: Must Remember to Write 2013 Edition


Hi all, Jonathan here again with the latest edition of the Intermittent Mail Sack. We've had some great questions over the last few weeks so I've got a lot of material to cover. This sack, we answer questions on:

Before we get started, however, I wanted to share information about a new service available to Premier customers through Microsoft Services Premier Support. Many Premier customers will be familiar with the Risk Assessment Program (RAP). Premier Support is now rolling out an online offering called the RAP as a Service (or RaaS for short). Our colleagues over on the Premier Field Engineering (PFE) blog have just posted a description of the new offering, and I encourage you to check it out. I've been working on the Active Directory RaaS offering since the early beta, and we've gotten really good feedback. Unfortunately, the offering is not yet available to non-Premier customers; look at RaaS as yet one more benefit to a Premier Support contract.

 

Now on to the Mail Sack!

Question

I'm considering upgrading my DFSR hub servers to Server 2012. Is there anything I should know before I hit the easy button and do an upgrade?

Answer

The most important thing to note is that Microsoft strongly discourages mixing Windows Server 2012 and legacy operating system DFSR. You just mentioned upgrading your hub servers, and make no mention of any branch servers. If you're going to upgrade your DFSR servers then you should upgrade all of them.

Check out Ned's post over on the FileCab blog: DFS Replication Improvements in Windows Server. Specifically, review the section that discusses Dynamic Access Control Support.

Also, there is a minor issue that has been found that we are still tracking. When you upgrade from Windows Server 2008 R2 to Windows Server 2012 the DFS Management snap-in stops working. The workaround is to just uninstall and then reinstall the DFS Management tools:

You can also do this with PowerShell:

Uninstall-WindowsFeature -name RSAT-DFS-Mgmt-Con
Install-WindowsFeature -name RSAT-DFS-Mgmt-Con

 

Question

From our SharePoint site, when users click log off, they are sent to this page: https://your_sts_server/adfs/ls/?wa=wsignout1.0.

We configured the FedAuth cookie to be session based after we did this:

$sts = Get-SPSecurityTokenServiceConfig 
$sts.UseSessionCookies = $true 
$sts.Update() 

 

The problem is that unless the user closes all their browsers, the browser remembers their credentials when they return to the log-in page. This is not acceptable, because some PCs are shared by multiple people. Closing all browsers is also not acceptable, as users run multiple web applications.

Answer

(Courtesy of Adam Conkle)

Great question! I hope the following details help you in your deployment:

Moving from a persistent cookie to a session cookie with SharePoint 2010 was the right move in this scenario in order to guarantee that closing the browser window would terminate the session with SharePoint 2010.

When you sign out via SharePoint 2010 and are redirected to the STS URL containing the query string: wa=wsignout1.0, this is what we call a WS-Federation sign-out request. This call is sufficient for signing out of the STS as well as all relying parties signed into during the session.

However, what you are experiencing is expected behavior for how Integrated Windows Authentication (IWA) works with web browsers. If your web browser client experienced either a no-prompt sign-in (using Kerberos authentication for the currently signed-in user) or an NTLM prompted sign-in (credentials provided in a Windows Authentication "401" credential prompt), then the browser will remember the Windows credentials for that host for the duration of the browser session.

If you were to collect a HTTP headers trace (Fiddler, HTTPWatch, etc.) of the current scenario, you will see that the wa=wsignout1.0 request is actually causing AD FS and SharePoint 2010 (and any other RPs involved) to clean up their session cookies (MSISAuth and FedAuth) as expected. The session is technically ending the way it should during sign-out. However, if the client keeps the current browser session open, browsing back to the SharePoint site will cause a new WS-Federation sign-in request to be sent to AD FS (wa=wsignin1.0). When the sign-in request is sent to AD FS, AD FS will attempt to collect credentials with a HTTP 401, but, this time, the browser has a set of Windows credentials ready to provide to that host.

The browser provides those Windows credentials without a prompt shown to the user, and the user is signed back into AD FS, and, thus, is signed back into SharePoint 2010. To the naked eye, it appears that sign-out is not working properly, while, in reality, the user is signing out and then signing back in again.

To conclude, this is by-design behavior for web browser clients. There are two workarounds available:

Workaround 1

Switch to forms-based authentication (FBA) for the AD FS Federation Service. The following article details this quick and easy process: AD FS 2.0: How to Change the Local Authentication Type

Workaround 2

Instruct your user base to always close their web browser when they have finished their session

Question

Are the attributes for files and folders used by Dynamic Access Control replicated with the object? That is, using DFSR, if I replicate a file to another server that uses the same policy, will the file have the same effective permissions on it?

Answer

(Courtesy of Mike Stephens)

Let me clarify some aspects of your question as I answer each part.

When enabling Dynamic Access Control on files and folders there are multiple aspects to consider that are stored on the files and folders.

Resource Properties

Resource Properties are defined in AD and used as a template to stamp additional metadata on a file or folder that can be used during an authorization decision. That information is stored in an alternate data stream on the file or folder. This would replicate with the file, the same as the security descriptor.

Security Descriptor

The security descriptor replicates with the file or folder. Therefore, any conditional expression would replicate in the security descriptor.

All of this occurs outside of Dynamic Access Control -- it is a result of replicating the file throughout the topology, for example, if using DFSR. Central Access Policy has nothing to do with these results.

Central Access Policy

Central Access Policy is a way to distribute permissions without writing them directly to the DACL of a security descriptor. When a Central Access Policy is deployed to a server, the administrator must then link the policy to a folder on the file system. This linking is accomplished by inserting a special ACE in the auditing portion of the security descriptor that informs Windows the file/folder is protected by a Central Access Policy. The permissions in the Central Access Policy are then combined with Share and NTFS permissions to arrive at an effective permission.

If a file/folder is replicated to a server that does not have the Central Access Policy deployed to it, then the Central Access Policy is not valid on that server, and its permissions do not apply.

Question

I read the post located here regarding the machine account password change in Active Directory.

Based on what I read, if I understand this correctly, the machine password change is generated by the client machine and not AD. I have been told (inaccurately, according to this post) that AD requires this password reset or the machine will be dropped from the domain.

I am a Macintosh systems administrator, and as you probably know, this issue does indeed occur on Mac systems.

I have set the password reset interval to various durations, from fourteen days (the default) down to one day.

I have found that if I disjoin and rejoin the machine to the domain it will generate a new password and work just fine for 30 days. At that time, it will be dropped from the domain and have to be rejoined. This is not 100% of the time, however it is often enough to be a problem for us as we are a higher education institution which in addition to our many PCs, also utilizes a substantial number of Macs. Additionally, we have a script which runs every 60 days to delete machine accounts from AD to keep it clean, so if the machine has been turned off for more than 60 days, the account no longer exists.

I know your forte is AD/Microsoft support, however I was hoping that you might be able to offer some input as to why this might fail on the Macs and if there is any solution which we could implement.

Other Mac admins have found workarounds like eliminating the need for the pw reset or exempting the macs from the script, but our security team does not want to do this.

Answer

(Courtesy of Mike Stephens)

Windows has a security policy setting named Domain member: Disable machine account password change, which determines whether the domain member periodically changes its computer account password. Typically, a Mac, Linux, or UNIX operating system uses some version of Samba to accomplish domain interoperability. I'm not familiar with how this works on the Mac; however, on Linux you would use the command

net ads changetrustpw

 

By default, Windows machines initiate a computer password change every 30 days. You could schedule this command to run every 30 days once it completes successfully. Beyond that, basically we can only tell you how to disable the domain controller from accepting computer password changes, which we do not encourage.
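On the Windows side, for comparison, that policy is backed by documented Netlogon registry values; here is a hedged, read-only way to inspect them:

$p = 'HKLM:\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters'
# DisablePasswordChange = 1 means the member never initiates a change;
# MaximumPasswordAge is the change interval in days (default 30).
Get-ItemProperty -Path $p | Select-Object DisablePasswordChange, MaximumPasswordAge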

Question

I recently installed a new server running Windows 2008 R2 (as a DC) and a handful of client computers running Windows 7 Pro. On a client that is shared by two users (userA and userB), I see the following event in the Event Viewer after userA logs on.

Event ID: 45058 
Source: LsaSrv 
Level: Information 
Description: 
A logon cache entry for user userB@domain.local was the oldest entry and was removed. The timestamp of this entry was 12/14/2012 08:49:02. 

 

All is working fine. Both userA and userB are able to log on on the domain by using this computer. Do you think I have to worry about this message or can I just safely ignore it?

Fyi, our users never work offline, only online.

Answer

By default, a Windows operating system will cache 10 domain user credentials locally. When the maximum number of credentials is cached and a new domain user logs onto the system, the oldest credential is purged from its slot in order to store the newest credential. This LsaSrv informational event simply records when this activity takes place. Once the cached credential is removed, it does not imply the account cannot be authenticated by a domain controller and cached again.

The number of "slots" available to store credentials is controlled by:

Registry path: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon
Setting Name: CachedLogonsCount
Data Type: REG_SZ
Value: Default value = 10 decimal, max value = 50 decimal, minimum value = 1

Cached credentials can also be managed with group policy by configuring:

Group Policy Setting path: Computer Configuration\Policies\Windows Settings\Security Settings\Local Policies\Security Options.
Group Policy Setting: Interactive logon: Number of previous logons to cache (in case domain controller is not available)

The workstation must have physical connectivity with the domain, and the user must authenticate with a domain controller, for the credentials to be cached again once they have been purged from the system.

I suspect that your CachedLogonsCount value has been set to 1 on these clients, meaning that the workstation can only cache one user credential at a time.
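You can confirm that on an affected client with a quick, read-only check of the value at the path above:

Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon' |
    Select-Object -ExpandProperty CachedLogonsCount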

Question

In Windows 7 and Server 2008 Kerberos DES encryption is disabled by default.

At what point will support for DES Kerberos encryption be removed? Does this happen in Windows 8 or Windows Server 2012, or will it happen in a future version of Windows?

Answer

DES is still available as an option on Windows 8 and Windows Server 2012, though it is disabled by default. It is too early to discuss the availability of DES in future versions of Windows right now.

There was an Advisory Memorandum published in 2005 by the Committee on National Security Systems (CNSS) where DES and all DES-based systems (3DES, DES-X) would be retired for all US Government uses by 2015. That memorandum, however, is not necessarily a binding document. It is expected that 3DES/DES-X will continue to be used in the private sector for the foreseeable future.

I'm afraid that we can't completely eliminate DES right now. All we can do is push it to the back burner in favor of newer and better algorithms like AES.

Question

I have two issuing certification authorities in our corporate network. All our approved certificate templates are published on both issuing CAs. We would like to enable certificate renewals from the Internet with our Internet-facing CEP/CES configured for certificate authentication in Certificate Renewal Mode Only. What we understand from the whitepaper is that this is not going to work, because the CA that issued the certificate must be the same CA used for certificate renewal.

Answer

First, I need to correct an assumption made based on your reading of the whitepaper. There is no requirement that a renewal request be sent to the same CA that issued the original certificate. This means that your clients can go to either enrollment server to renew the certificate. Here is the process for renewal:

  1. When the user attempts to renew their certificate via the MMC, Windows sends a request to the Certificate Enrollment Policy (CEP) server URL configured on the workstation. This request includes the template name of the certificate to be renewed.
  2. The CEP server queries Active Directory for a list of CAs capable of issuing certificates based on that template. This list will include the Certificate Enrollment Web Service (CES) URL associated with that CA. Each CA in your environment should have one or more instances of CES associated with it.
  3. The list of CES URLs is returned to the client. This list is unordered.
  4. The client randomly selects a URL from the list returned by the CEP server. This random selection ensures that renewal requests are spread across all returned CAs. In your case, if both CAs are configured to support the same template and the certificate is renewed 100 times, either with or without the same key, that should result in a nearly 50/50 distribution between the two CAs.

The behavior is slightly different if one of your CAs goes down for some reason. In that case, should clients encounter an error when trying to renew a certificate against one of the CES URIs, the client will fail over and use the next CES URI in the list. By having multiple CAs and CES servers, you gain high availability for certificate renewal.

Other Stuff

I'm very sad that I didn't see this until after the holidays. It definitely would have been on my Christmas list. A little pricey, but totally geek-tastic.

This was also on my list, this year. Go Science!

Please do keep those questions coming. We have another post in the hopper going up later in the week, and soon I hope to have some Windows Server 2012 goodness to share with you. From all of us on the Directory Services team, have a happy and prosperous New Year!

Jonathan "13th baktun" Stephens

 

 

Troubleshoot ADFS 2.0 with these new articles


Hi all, here’s a quick public service announcement to highlight some recently published ADFS 2.0 troubleshooting guidance. We get a lot of questions about configuring and troubleshooting ADFS 2.0, so our support and content teams have pitched in to create a series of troubleshooting articles to cover the most common scenarios.

ADFS 2.0 connectivity problems: “This page cannot be displayed” – You receive a “This page cannot be displayed” error message when you try to access an application on a website that uses AD FS 2.0. Provides a resolution.

ADFS 2.0 service configuration and startup issues – ADFS service won’t start – Provides troubleshooting steps for ADFS service configuration and startup problems.

ADFS 2.0 certificate problems – An error occurred during an attempt to build the certificate chain – A certificate-related change in AD FS 2.0 causes certificate, SSL, and trust errors, including Event 133. Provides a resolution.

ADFS 2.0 authentication problems: “Not Authorized HTTP error 401” – You cannot authenticate an account in AD FS 2.0, you are prompted for credentials, and event 111 is logged. Provides a resolution.

ADFS 2.0 claims rules problems: “Access is denied” – You receive an “Access Denied” error message when you try to access an application in AD FS 2.0. Provides a resolution.

We hope you will find these troubleshooters useful. You can provide feedback and comments at the bottom of each KB if you want to help us improve them.

Windows 10 Group Policy (.ADMX) Templates now available for download


Hi everyone, Ajay here.  I wanted to let you all know that we have released the Windows 10 Group Policy (.ADMX) templates on our download center as an MSI installer package. These .ADMX templates are released as a separate download package so you can manage group policy for Windows 10 clients more easily.

This new package includes additional (.ADMX) templates which are not included in the RTM version of Windows 10.

 

  1. DeliveryOptimization.admx
  2. fileservervssagent.admx
  3. gamedvr.admx
  4. grouppolicypreferences.admx
  5. grouppolicy-server.admx
  6. mmcsnapins2.admx
  7. terminalserver-server.admx
  8. textinput.admx
  9. userdatabackup.admx
  10. windowsserver.admx

To download the Windows 10 Group Policy (.ADMX) templates, please visit http://www.microsoft.com/en-us/download/details.aspx?id=48257

To review which settings are new in Windows 10, review the Windows 10 ADMX spreadsheet here: http://www.microsoft.com/en-us/download/details.aspx?id=25250

Ajay Sarkaria

Manage Developer Mode on Windows 10 using Group Policy


Hi All,

We’ve had a few folks ask how to disable Developer Mode using Group Policy while still allowing side-loaded apps to be installed. Here is a quick note on how to do this. (A more AD-centric post from Linda Taylor is on its way.)

On the Windows 10 device, click the Windows logo key clip_image001 and then click Settings.

clip_image002

Click on Update & Security

clip_image003

From the left-side pane, select For developers and from the right-side pane, choose the level that you need.

clip_image004

· If you choose Sideload apps: You can install an .appx and any certificate that is needed to run the app with the PowerShell script that is created with the package. Or you can use manual steps to install the certificate and package separately.

· If you choose Developer mode: You can debug your apps on that device. You can also sideload any apps if you choose developer mode, even ones that you have not developed on the device. You just have to install the .appx with its certificate for sideloading.

Use Group Policy Editor (gpedit) to enable your device:

Using the Group Policy Editor (gpedit.msc), developer mode can be enabled or disabled on computers running Windows 10.

1. Open the Windows Run box by pressing Windows logo key + R.

2. Type in gpedit.msc and then press Enter.

3. In Group Policy Editor navigate to Computer Configuration\Administrative Templates\Windows Components\App Package Deployment.

4. From the right-side pane, double click Allow all trusted apps to install and click the Enabled button.

5. Click Apply and then OK.

Notes:

· Allow all trusted apps to install

o If you want to disable access to everything in For developers, disable this policy setting.

o If you enable this policy setting, you can install any LOB or developer-signed Windows Store app.

If you want to allow side-loaded apps to install but disable the other options in developer mode, disable "Developer mode" and enable "Allow all trusted apps to install" (see the registry sketch after these notes).

· Group policies are applied every 90 minutes, plus or minus a random amount up to 30 minutes. To apply the policy immediately, run gpupdate from the command prompt.
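If you would rather script the same result, the settings above are backed by the Windows 10 AppModelUnlock registry values. Treat this as a hedged sketch – verify the value names on your build before deploying it:

$key = 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\AppModelUnlock'
New-Item -Path $key -Force | Out-Null
# Keep sideloading allowed, but leave full developer mode off.
Set-ItemProperty -Path $key -Name AllowAllTrustedApps -Value 1 -Type DWord
Set-ItemProperty -Path $key -Name AllowDevelopmentWithoutDevLicense -Value 0 -Type DWord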

For more information on Developer Mode, see the following MSDN article:
https://msdn.microsoft.com/library/windows/apps/xaml/dn706236.aspx?f=255&MSPPError=-2147217396

SHA1 Key Migration to SHA256 for a two tier PKI hierarchy


Hello. Jim here again to take you through the migration steps for moving your two tier PKI hierarchy from SHA1 to SHA256. I will not be explaining the differences between the two or the supportability / security implications of either. That information is readily available, easily discoverable, and is referenced in the links provided below. Please note the following:

Server Authentication certificates: CAs must begin issuing new certificates using only the SHA-2 algorithm after January 1, 2016. Windows will no longer trust certificates signed with SHA-1 after January 1, 2017.

If your organization uses its own PKI hierarchy (you do not purchase certificates from a third-party), you will not be affected by the SHA1 deprecation. Microsoft's SHA1 deprecation plan ONLY APPLIES to certificates issued by members of the Microsoft Trusted Root Certificate program.  Your internal PKI hierarchy may continue to use SHA1; however, it is a security risk and diligence should be taken to move to SHA256 as soon as possible.

In this post, I will be following the steps documented here with some modifications: Migrating a Certification Authority Key from a Cryptographic Service Provider (CSP) to a Key Storage Provider (KSP) – https://technet.microsoft.com/en-us/library/dn771627.aspx

The steps that follow in this blog will match the steps in the TechNet article above with the addition of screenshots and additional information that the TechNet article lacks.

Additional recommended reading:

The following blog written by Robert Greene will also be referenced and should be reviewed – http://blogs.technet.com/b/askds/archive/2015/04/01/migrating-your-certification-authority-hashing-algorithm-from-sha1-to-sha2.aspx

This Wiki article written by Roger Grimes should also be reviewed as well – http://social.technet.microsoft.com/wiki/contents/articles/31296.implementing-sha-2-in-active-directory-certificate-services.aspx

Microsoft Trusted Root Certificate: Program Requirements – https://technet.microsoft.com/en-us/library/cc751157.aspx

The scenario for this exercise is as follows:

A two tier PKI hierarchy consisting of an Offline ROOT and an Online subordinate enterprise issuing CA.

Operating Systems:
Offline ROOT and Online subordinate are both Windows 2008 R2 SP1

OFFLINE ROOT
CANAME – CONTOSOROOT-CA

clip_image001

ONLINE SUBORDINATE ISSUING CA
CANAME – ContosoSUB-CA

clip_image003

First, you should verify whether your CA is using a Cryptographic Service Provider (CSP) or a Key Storage Provider (KSP). This will determine whether you have to go through all the steps or can skip straight to changing the CA hash algorithm to SHA2. The command for this is in step 3. The line to take note of in the output of this command is “Provider =”. If the Provider = line shows any of the top five service providers highlighted below, the CA is using a CSP and you must do the conversion steps. The RSA#Microsoft Software Key Storage Provider and everything below it are KSPs.

clip_image005

Here is sample output of the command – certutil -store my <Your CA common name>

As you can see, the provider is a CSP.

clip_image006

If you are using a Hardware Storage Module (HSM) you should contact your HSM vendor for special guidance on migrating from a CSP to a KSP. The steps for changing the Hashing algorithm to a SHA2 algorithm would still be the same for HSM based CA’s.

There are some customers that use their HSM for the CA private / public key, but use Microsoft CSPs for the Encryption CSP (used for the CA Exchange certificate).

We will begin at the OFFLINE ROOT.

BACKUP! BACKUP! BACKUP the CA and Private KEY of both the OFFLINE ROOT and Online issuing CA. If you have more than one CA Certificate (you have renewed multiple times), all of them will need to be backed up.

Use the MMC to back up the private key, or use CERTSRV.msc and right-click the CA name to back up as follows on both the online subordinate issuing and the OFFLINE ROOT CAs –

clip_image008

clip_image010

Provide a password for the private key file.

clip_image012

You may also back up the registry location as indicated in step 1C.

Step 2 – Stop the CA Service

Step 3 – This command was discussed earlier to determine the provider:

  • certutil -store my <Your CA common name>

Step 4 and Step 6 from the above referenced TechNet article should be done via the UI.

a. Open the MMC – load the Certificates snapin for the LOCAL COMPUTER

b. Right click each CA certificate (If you have more than 1) – export

c. Yes, export the private key

d. Check – Include all certificates in the certification path if possible

e. Check – Delete the private key if the export is successful

clip_image014

f. Click next and continue with the export.

Step 5
Copy the resultant .pfx file to a Windows 8 or Windows Server 2012 computer

Conversion requires a Windows Server 2012 certutil.exe, as Windows Server 2008 (and prior) do not support the necessary KSP conversion commands. If you want to convert a CA certificate on an ADCS version prior to Windows Server 2012, you must export the CA certificate off of the CA, import onto Windows Server 2012 or later using certutil.exe with the -KSP option, then export the newly signed certificate as a PFX file, and re-import on the original server.

Run the command in Step 5 on the Windows 8 or Windows Server 2012 computer.

  • certutil -csp <KSP name> -importpfx <Your CA cert/key PFX file>

clip_image016

Step 6

a. To be done on the Windows 8 or Windows Server 2012 computer as previously indicated using the MMC.

b. Open the MMC – load the Certificates snapin for the LOCAL COMPUTER

c. Right click the CA certificate you just imported – All Tasks – export

*I have seen an issue where “Yes, export the private key” is dimmed after running the conversion command and trying to export via the MMC. If you encounter this behavior, simply re-import the .PFX file manually and check the box Mark this key as exportable during the import. This will not affect the previous conversion.

d. Yes, export the private key.

e. Check – Include all certificates in the certification path if possible

f. Check – Delete the private key if the export is successful

g. Click next and continue with the export.

h. Copy the resultant .pfx file back to the destination 2008 R2 ROOTCA

Step 7

You can again use the UI (MMC) to import the .pfx back to the computer store on the ROOTCA

*Don’t forget during the import to Mark this key as exportable.

clip_image018

***IMPORTANT***

If you have renewed your CA multiple times with the same key, then after exporting the first CA certificate as indicated above in step 4 and step 6, you are breaking the private key association with the previously renewed CA certificates. This is because you are deleting the private key upon successful export. After doing the conversion and importing the resultant .pfx file on the CA (remembering to mark the private key as exportable), you must run the following command from an elevated command prompt for each of the additional CA certificates that were renewed previously:

certutil -repairstore my <SerialNumber>

The serial number is found on the Details tab of the CA certificate. This will repair the association of the public certificate to the private key.


Step 8

Your CSP.reg file must contain the information highlighted at the top –

clip_image020
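For orientation, a CSP.reg for a software-backed CA generally looks like the sketch below – this is my reconstruction based on the referenced TechNet article, using this post's CONTOSOROOT-CA as the CA name, so diff it against your own exported registry values rather than trusting it verbatim:

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\CertSvc\Configuration\CONTOSOROOT-CA\CSP]
"ProviderType"=dword:00000000
"Provider"="Microsoft Software Key Storage Provider"
"CNGPublicKeyAlgorithm"="RSA"
"CNGHashAlgorithm"="SHA256"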

Step 8c

clip_image022

Step 8d – Run CSP.reg

Step 9

Your EncryptionCSP.reg file must contain the information highlighted at the top –

clip_image024

Step 9c – Verification – certutil -v -getreg ca\encryptioncsp\EncryptionAlgorithm

Step 9d – Run EncryptionCsp.reg

Step 10

Change the CA hash algorithm to SHA256

clip_image026
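If you prefer the command line, the equivalent change from the referenced TechNet article is made with certutil on the CA (then restart the service as below):

certutil -setreg ca\csp\CNGHashAlgorithm SHA256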

Start the CA Service

Step 11

For a root CA: You will not see the migration take effect for the CA certificate itself until you complete the migration of the root CA, and then renew the certificate for the root CA.

Before we renew the OFFLINE ROOT certificate this is how it looks:

clip_image028

Renewing the CA’s own certificate with a new or existing (same) key would depend on the remaining validity of the certificate. If the certificate is at or nearing 50% of its lifetime, it would be a good idea to renew with a new key. See the following for additional information on CA certificate renewal –

https://technet.microsoft.com/en-us/library/cc730605.aspx

After we renew the OFFLINE ROOT certificate with a new key or the same key, its own Certificate will be signed with the SHA256 signature as indicated in the screenshot below:

clip_image030

Your OFFLINE ROOT CA is now completely configured for SHA256.

Running certutil -CRL will generate a new CRL file, also signed using SHA256.

clip_image032

By default, CRT, CRL and delta CRL files are published on the CA in the following location – %SystemRoot%\System32\CertSrv\CertEnroll. The format of the CRL file name is the "sanitized name" of the CA plus, in parentheses, the "key id" of the CA (if the CA certificate has been renewed with a new key) and a .CRL extension. See the following for more information on CRL distribution points and the CRL file name – https://technet.microsoft.com/en-us/library/cc782162%28v=ws.10%29.aspx

Copy this new .CRL file to a domain joined computer and publish it to Active Directory while logged on as an Enterprise Administrator from an elevated command prompt.

Do the same for the new SHA256 ROOT CA certificate.

  • certutil -f -dspublish <.CRT file> RootCA
  • certutil -f -dspublish <.CRL file>

Now continue with the migration of the Online Issuing Subordinate CA.

Step 1 – Back up the CA database and private key.

Back up the CA registry settings.

Step 2 – Stop the CA Service.

Step 3 – Get the details of your CA certificates:

certutil -store my "Your SubCA name"

image

I have never renewed the Subordinate CA certificate so there is only one.

Steps 4 – 6

As you know from what was previously accomplished with the OFFLINE ROOT, steps 4-6 are done via the MMC, and we must do the conversion on a Windows 8 or Windows Server 2012 or later computer for the reasons explained earlier.

clip_image035

*When you import the converted SUBCA .pfx file via the MMC, you must remember to again Mark this key as exportable.

Step 8 – Step 9

Creating and importing the registry files for CSP and CSP Encryption (see above)

Step 10 – Change the CA hash algorithm to SHA-2

clip_image037

Now in the screenshot below you can see the Hash Algorithm is SHA256.

clip_image039

The Subordinate CA’s own certificate is still SHA1. In order to change this to SHA256 you must renew the Subordinate CA’s certificate. When you renew the Subordinate CA’s certificate it will be signed with SHA256. This is because we previously changed the hash algorithm on the OFFLINE ROOT to SHA256.

Renew the Subordinate CA’s certificate following the proper steps for creating the request and submitting it to the OFFLINE ROOT. Information on whether to renew with a new key or the same key was provided earlier. Then you will copy the resultant .CER file back to the Subordinate CA and install it via the Certification Authority management interface.

If you receive the following error when installing the new CA certificate –

clip_image041

Check the newly procured Subordinate CA certificate via the MMC. On the certification path tab, it will indicate under certificate status that – “The signature of the certificate cannot be verified”

This error could have several causes. Perhaps you did not -dspublish the new OFFLINE ROOT .CRT file and .CRL file to Active Directory as previously instructed.

clip_image043

Or you did publish the Root CA certificate, but the Subordinate CA has not performed Autoenrollment (AE) yet and therefore has not downloaded the new Root CA certificate via AE methods; or AE may be disabled on the CA altogether.

After the files are published to AD and after verification of AE and group policy updates on the Subordinate CA, the install and subsequent starting of Certificate Services will succeed.

Now in addition to the Hash Algorithm being SHA256 on the Subordinate CA, the Signature on its own certificate will also be SHA256.

clip_image045

The Subordinate CA’s .CRL files are also now signed with SHA256 –

clip_image047

Your migration to SHA256 on the Subordinate CA is now completed.

I hope you found this information helpful and informative. I hope it will make your SHA256 migration project planning and implementation less daunting.

Jim Tierney


“Administrative limit for this request was exceeded" Error from Active Directory


Hello, Ryan Ries here with my first AskDS post! I recently ran into an issue with a particular environment where Active Directory and UNIX systems were being integrated.  Microsoft has several attributes in AD to facilitate this, and one of those attributes is the memberUid attribute on security group objects.  You add user IDs to the memberUid attribute of the security group, and Active Directory will treat that as group membership from UNIX systems for the purposes of authentication/authorization.

All was well and good for a long time. The group grew and grew to over a thousand users, until one day we wanted to add another UNIX user, and we were greeted with this error:

“The administrative limit for this request was exceeded.”

Wait, there’s a limit on this attribute? I wonder what that limit is.

MSDN documentation states that the rangeUpper property of the memberUid attribute is 256,000. This support KB also mentions that:

“The attribute size limit for the memberUID attribute in the schema is 256,000 characters. It depends on the individual value length on how many user identifiers (UIDs) will fit into the attribute.”

And you can even see it for yourself if you fancy a gander at your schema:

Something doesn’t add up here – we’ve only added around 1200 users to the memberUid attribute of this security group. Sure it’s a big group, but that doesn’t exceed 256,000 characters; not even close. Adding up all the names that I’ve added to the attribute, I figure it adds up to somewhere around 10,000 characters. Not 256,000.

So what gives?

(If you’ve been following along and you’ve already figured out the problem yourself, then please contact us! We’re hiring!)

The problem here is that we’re hitting a different limit as we continue to add members to the memberUid attribute, way before we get to 256k characters.

The memberUid attribute is a multivalued attribute; however, it is not a linked attribute. This means it has a limitation on its maximum size that is less than the 256,000 characters shown on the memberUid attributeSchema object.

You can distinguish between which attributes are linked or not based on whether those attributeSchema objects have values in their linkID attribute.

Example of a multivalued and linked attribute:

Example of a multivalued but not linked attribute:
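If you fancy checking this yourself, here is a minimal PowerShell sketch (it assumes the ActiveDirectory module is available; member and memberUid are the actual schema attribute names):

  # Compare the linkID values on the two attributeSchema objects
  $schemaNC = (Get-ADRootDSE).schemaNamingContext
  # member is a linked attribute (linkID = 2), so it escapes the nonlinked size limit
  Get-ADObject -SearchBase $schemaNC -LDAPFilter '(lDAPDisplayName=member)' -Properties linkID | Select-Object Name, linkID
  # memberUid has no linkID value, so it is a nonlinked multivalued attribute
  Get-ADObject -SearchBase $schemaNC -LDAPFilter '(lDAPDisplayName=memberUid)' -Properties linkID | Select-Object Name, linkID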

So if the limit is not really 256,000 characters, then what is it?

From How the Data Store Works on TechNet:

“The maximum size of a database record is 8110 bytes, based on an 8-kilobyte (KB) page size. Because of variable overhead requirements and the variable number of attributes that an object might have, it is impossible to provide a precise limit for the maximum number of multivalues that an object can store in its attributes. …

The only value that can actually be computed is the maximum number of values in a nonlinked, multivalued attribute when the object has only one attribute (which is impossible). In Windows 2000 Active Directory, this number is computed at 1575 values. From this value, taking various overhead estimates into account and generalizing about the other values that the object might store, the practical limit for number of multivalues stored by an object is estimated at 800 nonlinked values per object across all attributes.

Attributes that represent links do not count in this value. For example, the members linked, multivalued attribute of a group object can store many thousands of values because the values are links only.

The practical limit of 800 nonlinked values per object is increased in Windows Server 2003 and later. When the forest has a functional level of Windows Server 2003 or higher, for a theoretical record that has only one attribute with the minimum of overhead, the maximum number of multivalues possible in one record is computed at 3937. Using similar estimates for overhead, a practical limit for nonlinked multivalues in one record is approximately 1200. These numbers are provided only to point out that the maximum size of an object is somewhat larger in Windows Server 2003 and later.”

(Emphasis is mine.)

Alright, so according to the above article, if I’m in an Active Directory domain running all Server 2003 or better, which I am, then a “practical” limit for non-linked multi-value attributes should be approximately 1200 values.

So let’s put that to the test, shall we?

I wrote a quick and dirty test script with PowerShell that would generate a random 8-character string from a pool of characters (i.e., a random fictitious user ID) and then add that random user ID to the memberUid attribute of a security group, in a loop until it encounters an error because no more values can be added:

# This script is for testing purposes only!
# Assumes the ActiveDirectory module is loaded and a group named 'TestGroup' already exists.
$ValidChars = @('a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j',
                'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't',
                'u', 'v', 'w', 'x', 'y', 'z', 'A', 'B', 'C', 'D',
                'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N',
                'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X',
                'Y', 'Z', '0', '1', '2', '3', '4', '5', '6', '7', '8', '9')

[String]$Str = [String]::Empty
[Int]$Bytes  = 0
[Int]$Uids   = 0
While ($Uids -LT 1000000)
{
    # Build a random 8-character fictitious user ID.
    $Str = [String]::Empty
    1..8 | % { $Str += ($ValidChars | Get-Random) }
    Try
    {
        Set-ADGroup 'TestGroup' -Add @{ memberUid = $Str } -ErrorAction Stop
    }
    Catch
    {
        # We hit the administrative limit (or some other error) - report and stop.
        Write-Error $_.Exception.Message
        Write-Host "$Bytes bytes $Uids users added"
        Break
    }
    $Bytes += 8
    $Uids  += 1
}

Here’s the output from when I run the script:

Huh… whaddya’ know? Approximately 1200 users before we hit the “administrative limit,” just like the article suggests.

One way of getting around this attribute's maximum size would be to use nested groups, or to break the user IDs apart into two separate groups… although this may cause you to have to change some code on your UNIX systems. It’s typically not a fun day when you first realize this limit exists. Better to know about it beforehand.

Another attribute in Active Directory that could potentially hit a similar limit is the servicePrincipalName attribute, as you can read about in this AskPFEPlat article.

Until next time!

Ryan Ries

Using Repadmin with ADLDS and Lingering objects


Hi! Linda Taylor here from the UK Directory Services escalation team. This time on ADLDS, Repadmin, lingering objects and even PowerShell….

The other day a colleague was trying to remove a lingering object in ADLDS. He asked me about which repadmin syntax would work for ADLDS and it occurred to us both that all the documented examples we found for repadmin were only for AD DS.

So, here are some ADLDS specific examples of repadmin use.

For the purpose of this post I will be using 2 servers with ADLDS. Both servers belong to the Root.contoso.com domain and replicate a partition called DC=Fabrikam.

    LDS1 runs ADLDS on port 50002.
    RootDC1 runs ADLDS on port 51995.

1. Who is replicating my partition?

If you have many servers in your replica set, you may want to find out which ADLDS servers are replicating a specific partition. …Yes! The AD PowerShell module works against ADLDS.

You just need to add the :port on the end of the servername.

One way to list which servers are replicating a specific application partition is to query the attribute msDs-MasteredBy on the respective partition. This attribute contains a list of the NTDS Settings objects of the servers that replicate this partition.

You can do this with ADSIEDIT or ldp.exe or PowerShell or any other means.

PowerShell example: use the Get-ADObject cmdlet; I will target my command at localhost:51995 (I am running this on RootDC1).

powershell_lindakup_ADLDS
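For reference, here is a sketch of that query (the server, port, and partition names are the ones from this lab):

  # Read the list of NTDS Settings objects that master the DC=Fabrikam partition
  Get-ADObject -Server localhost:51995 -Identity 'DC=Fabrikam' -Properties 'msDs-MasteredBy' | Select-Object -ExpandProperty 'msDs-MasteredBy'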

Notice there are 2 NTDS Settings objects returned, and the server name is recorded as ServerName$ADLDSInstanceName.

So this tells me that, according to localhost:51995, the DC=Fabrikam partition is replicated between server LDS1$instance1 and server ROOTDC1$instance1.

2. REPADMIN for ADLDS

Generic rules and Tips:

  • For most commands the golden rule is to simply use the port inside the DSA_NAME or DSA_LIST parameters like lds1:50002 or lds1.contoso.com:50002. That’s it!

For example:

CMD
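For instance, a sketch using this lab's names:

  repadmin /showrepl lds1:50002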

 

  • There are some things that do not apply to ADLDS: anything involving FSMO roles that ADLDS does not have (like the PDC and RID masters), or the Global Catalog – again, no such thing in ADLDS.
  • A very useful switch for ADLDS is the /homeserver switch:

By default repadmin assumes you are working with AD and will use the locator, or attempt to connect to the local server on port 389 if that fails. However, for ADLDS the /Homeserver switch allows you to specify an ADLDS server:port.

For example, If you want to get replication status for all ADLDS servers in a configuration set (like for AD you would run repadmin /showrepl * /csv), for ADLDS you can run the following:

Repadmin /showrepl /homeserver:localhost:50002 * /csv >out.csv

Then you can open the OUT.CSV using something like Excel or even notepad and view a nice summary of the replication status for all servers. You can then sort this and chop it around to your liking.

The below explanation of HOMESERVER is taken from repadmin /listhelp output:

If the DSA_LIST argument is a resolvable server name (such as a DNS or WINS name) this will be used as the homeserver. If a non-resolvable parameter is used for the DSA_LIST, repadmin will use the locator to find a server to be used as the homeserver. If the locator does not find a server, repadmin will try the local box (port 389).

The /homeserver:[dns name] option is available to explicitly control home server selection.

This is especially useful when there is more than one forest or configuration set possible. For example, the DSA_LIST command "fsmo_istg:site1" would target the locally joined domain's directory, so to target an AD/LDS instance, /homeserver:adldsinstance:50000 could be used to resolve the fsmo_istg to site1 defined in the ADAM configuration set on adldsinstance:50000 instead of the fsmo_istg to site1 defined in the locally joined domain.

Finally, a particular gotcha that can send you in the wrong troubleshooting direction is an LDAP 0x51 "server down" error, which is returned if you forget to add the DSA_NAME and/or port to your repadmin command. Like this:

lindakup_CMD2_ADLDS

3. Lingering objects in ADLDS

Just like in AD, you can get lingering objects in ADLDS. The only difference is that there is no Global Catalog in ADLDS, so no lingering objects are possible in a read-only partition.

EVENT ID 1988 or 2042:

If you bring an outdated instance (past tombstone lifetime, or TSL) back online in ADLDS, you may see event 1988 as per http://support.microsoft.com/kb/870695/EN-US "Outdated Active Directory objects generate event ID 1988".

On Windows Server 2012 R2 you will see event 2042 instead, telling you that it has been over TombstoneLifetime since you last replicated, so replication is disabled.

What to do next?

First you want to check for lingering objects and remove if necessary.

1. To check for lingering objects, you can use repadmin /removelingeringobjects with the /advisory_mode switch (see the example command below).

My colleague Ian Farr, or "Posh Chap" as we call him, recently worked with a customer on such a case and put together a great PowerShell post with a one-liner for detecting and removing lingering objects from ADLDS. Check it out here:

http://blogs.technet.com/b/poshchap/archive/2014/05/09/one-liner-collect-ad-lds-lingering-object-advisory-mode-1946-events.aspx
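For example, an advisory-mode check would look like this (a sketch using this lab's instance and partition, and the DSA GUID of a known-good instance; note the trailing switch):

  Repadmin /removelingeringobjects lds1:50002 8fc92fdd-e5ec-45fb-b7d3-120f9f9f192 DC=Fabrikam /advisory_mode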

Example event 1946:

Event1946

2. Once you have detected lingering objects and made the decision that you need to remove them, you can remove them using the same repadmin command as in Ian's blog, but without /advisory_mode.

Example command to remove lingering objects:

Repadmin /removelingeringobjects lds1:50002 8fc92fdd-e5ec-45fb-b7d3-120f9f9f192 DC=Fabrikam

Where:

  • lds1:50002 is the LDS instance and port from which to remove lingering objects
  • 8fc92fdd-e5ec-45fb-b7d3-120f9f9f192 is the DSA GUID of a good LDS server/instance
  • DC=Fabrikam is the partition from which to remove lingering objects

For each lingering object removed you will see event 1945.

Event1945

You can use Ian's one-liner again to get a list of all the objects that were removed.

As a good practice you should also do the lingering object checks for the Configuration partition.

Once all lingering objects are removed, replication can be re-enabled and you can go down the pub… (maybe).

I hope this is useful.

Linda.

Speaking in Ciphers and other Enigmatic tongues…update!


Hi! Jim Tierney here again to talk to you about Cryptographic Algorithms, SCHANNEL and other bits of wonderment. My original post on the topic has gone through a rewrite to bring you up to date on recent changes in this space. 
So, your company purchases this new super awesome vulnerability and compliance management software suite, and they just ran a scan on your Windows Server 2008 domain controllers and lo! The software reports back that you have weak ciphers enabled, highlighted in RED, flashing, with that “you have failed” font, and including a link to the following Microsoft documentation –

KB245030 How to Restrict the Use of Certain Cryptographic Algorithms and Protocols in Schannel.dll:

http://support.microsoft.com/kb/245030/en-us

The report may look similar to this:

SSL Server Has SSLv2 Enabled Vulnerability port 3269/tcp over SSL

THREAT:
The Secure Socket Layer (SSL) protocol allows for secure communication between a client and a server.
There are known flaws in the SSLv2 protocol. A man-in-the-middle attacker can force the communication to a less secure level and then attempt to break the weak encryption. The attacker can also truncate encrypted messages.

SOLUTION:
Disable SSLv2.

Upon hearing this information, you fire up your browser, read the aforementioned KB 245030 top to bottom, RDP into your DCs, and begin checking the locations specified by the article. Much to your dismay, you notice the locations specified in the article are not correct for your Windows 2008 R2 DCs. On your 2008 R2 DCs you see the following at this registry location:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL

clip_image001

“Darn you Microsoft documentation!!!!!!” you scream aloud as you shake your fist in the general direction of Redmond, WA….

This is how it looks on a Windows 2003 Server:

clip_image002

Easy now…

The registry keys and their contents in Windows Server 2008, Windows 7, Windows Server 2008 R2, Windows Server 2012 and 2012 R2 look different from Windows Server 2003 and prior.

Here is the registry location on Windows 7 – 2012 R2 and its default contents:

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel]
“EventLogging”=dword:00000001
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Ciphers]

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\CipherSuites]
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Hashes]
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\KeyExchangeAlgorithms]
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols]
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 2.0]
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 2.0\Client]
“DisabledByDefault”=dword:00000001

Allow me to explain the above content that is displayed in standard REGEDIT export format:

  • The Ciphers key should contain no values or subkeys
  • The CipherSuites key should contain no values or subkeys
  • The Hashes key should contain no values or subkeys
  • The KeyExchangeAlgorithms key should contain no values or subkeys
  • The Protocols key should contain the following sub-keys and value:
    Protocols
         SSL 2.0
            Client
                DisabledByDefault REG_DWORD 0x00000001 (value)

The following table lists the Windows SCHANNEL protocols and whether or not they are enabled or disabled by default in each operating system listed:

image

*Remember to install the following update if you plan on or are currently using SHA512 certificates:

SHA512 is disabled in Windows when you use TLS 1.2
http://support.microsoft.com/kb/2973337/EN-US

Similar to Windows Server 2003, these protocols can be disabled for the server or client architecture. Meaning that either the protocol can be omitted from the list of supported protocols included in the Client Hello when initiating an SSL connection, or it can be disabled on the server so that even if a client requests SSL 2.0 in a client hello, the server will not respond with that protocol.

Each protocol is designated by its Client and Server subkeys, so you can disable a protocol for either the client or the server side; disabling Ciphers, Hashes, or CipherSuites, by contrast, affects BOTH client and server. To disable a protocol per side, you have to create the necessary subkeys beneath the Protocols key yourself.

For example:

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols]
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 2.0]
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 2.0\Client]
“DisabledByDefault”=dword:00000001
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 2.0\Server]
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 3.0]
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 3.0\Client]
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 3.0\Server]
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.0]
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.0\Client]
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.0\Server]
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.1]
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.1\Client]
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.1\Server]
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.2]
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.2\Client]
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.2\Server]

This is how it looks in the registry after they have been created:

clip_image005

Client SSL 2.0 is disabled by default on Windows Server 2008, 2008 R2, 2012 and 2012 R2.

This means the computer will not use SSL 2.0 to initiate a Client Hello.

So it looks like this in the registry:

clip_image006

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 2.0\Client]
DisabledByDefault =dword:00000001

Just like Ciphers and KeyExchangeAlgorithms, Protocols can be enabled or disabled.
To disable other protocols, select the side of the conversation on which you want to disable the protocol, and add the “Enabled”=dword:00000000 value. The example below disables SSL 2.0 for the server in addition to SSL 2.0 for the client.

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 2.0\Client]
DisabledByDefault =dword:00000001 <Default client disabled as I said earlier>

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 2.0\Server]
Enabled =dword:00000000 <disables SSL 2.0 server side>

clip_image007

After this, you will need to reboot the server. You probably do not want to disable TLS settings. I just added them here for a visual reference.

***For Windows Server 2008 R2, if you want to enable server-side TLS 1.1 and 1.2, you MUST create the registry entries as follows:

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.1\Server]
DisabledByDefault =dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.2\Server]
DisabledByDefault =dword:00000000
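If you would rather script this than hand-edit the registry, here is a minimal PowerShell sketch (run it elevated, and reboot afterward; the key paths are the ones shown above):

  # Create the Server subkeys for TLS 1.1 and TLS 1.2 and set DisabledByDefault = 0
  $base = 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols'
  foreach ($proto in 'TLS 1.1', 'TLS 1.2') {
      $key = Join-Path -Path $base -ChildPath "$proto\Server"
      New-Item -Path $key -Force | Out-Null
      New-ItemProperty -Path $key -Name 'DisabledByDefault' -PropertyType DWord -Value 0 -Force | Out-Null
  }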

So why would you go through all this trouble to disable protocols and such, anyway? Well, there may be a regulatory requirement that your company’s web servers should only support Federal Information Processing Standards (FIPS) 140-1/2 certified cryptographic algorithms and protocols. Currently, TLS is the only protocol that satisfies such a requirement. Luckily, enforcing this compliant behavior does not require you to manually modify registry settings as described above. You can enforce FIPS compliance via group policy as explained by the following:

The effects of enabling the “System cryptography: Use FIPS compliant algorithms for encryption, hashing, and signing” security setting in Windows XP and in later versions of Windows:
http://support.microsoft.com/kb/811833

The 811833 article talks specifically about the group policy setting below which by default is NOT defined –

Computer Configuration\ Windows Settings \Security Settings \Local Policies\ Security Options

clip_image008

When applied, the policy above will modify the following registry locations and their value content.
Be advised that this FipsAlgorithmPolicy information is stored differently depending on the operating system version –

Windows 7/2008
Windows Registry Editor Version 5.00
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa\FipsAlgorithmPolicy]
“Enabled”=dword:00000000 <Default is disabled>


Windows 2003/XP
Windows Registry Editor Version 5.00
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa]
Fipsalgorithmpolicy =dword:00000000 <Default is disabled>

Enabling this group policy setting effectively disables everything except TLS.
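A quick one-liner sketch to verify the resulting value on Windows 7/2008 R2 and later (1 means FIPS mode is enforced, 0 means it is not):

  (Get-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Lsa\FipsAlgorithmPolicy' -Name Enabled).Enabled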

More Examples
Let’s continue with more examples. A vulnerability report may also indicate the presence of other Ciphers it deems to be “weak”.

Below I have built a .reg file that when imported will disable the following Ciphers:

56-bit DES

40-bit RC4

Behold!

Windows Registry Editor Version 5.00
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers]
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\AES 128]
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\AES 256]
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\DES 56]
“Enabled”=dword:00000000
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\NULL]
“Enabled”=dword:00000000
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC4 128/128]
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC4 40/128]
“Enabled”=dword:00000000
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC4 56/128]
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\Triple DES 168]

After importing these registry settings, you must reboot the server.

The vulnerability report might also mention that 40-bit DES is enabled, but that would be a false positive because Windows Server 2008 doesn’t support 40-bit DES at all. For example, you might see this in a vulnerability report:

Here is the list of weak SSL ciphers supported by the remote server:
Low Strength Ciphers (< 56-bit key)
SSLv3
EXP-ADH-DES-CBC-SHA Kx=DH(512) Au=None Enc=DES(40) Mac=SHA1 export

TLSv1
EXP-ADH-DES-CBC-SHA Kx=DH(512) Au=None Enc=DES(40) Mac=SHA1 export

If this is reported and it is necessary to get rid of these entries, you can also disable the Diffie-Hellman key exchange algorithm (another component of the two cipher suites described above, designated with Kx=DH(512)).

To do this, make the following registry changes:

Windows Registry Editor Version 5.00
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\KeyExchangeAlgorithms\Diffie-Hellman]
“Enabled”=dword:00000000

You have to create the sub-key Diffie-Hellman yourself. Make this change and reboot the server.

This step is NOT advised or required… I am offering it only as an option to make the vulnerability scanning tool pass the test.

Keep in mind, also, that this will disable any cipher suite that relies upon Diffie-Hellman for key exchange.

You will probably not want to disable ANY cipher suites that rely on Diffie-Hellman. Secure communications such as IPSec and SSL both use Diffie-Hellman for key exchange. If you are running OpenVPN on a Linux/Unix server you are probably using Diffie-Hellman for key exchange. The point I am trying to make here is you should not have to disable the Diffie-Hellman Key Exchange algorithm to satisfy a vulnerability scan.

Advanced Ciphers have arrived!!!
Advanced ciphers were added to Windows 8.1 / Windows Server 2012 R2 computers by KB 2929781, released in April 2014, and again by monthly rollup KB 2919355, released in May 2014.

Updated cipher suites were released as part of two fixes.

KB 2919355 for Windows 8.1 and Windows Server 2012 R2 computers

MS14-066 for Windows 7 and Windows 8 clients and Windows Server 2008 R2 and Windows Server 2012 Servers.

While these updates shipped new ciphers, the cipher suite priority ordering could not be updated correctly.

KB 3042058, released in March 2015, is a follow-up package to correct that issue. It is NOT applicable to Windows Server 2008 (non-R2).

You can set a preference list for which cipher suites the server will negotiate first with a client that supports them.

You can review this MSDN article on how to set the cipher suite prioritization list via GPO: http://msdn.microsoft.com/en-us/library/windows/desktop/bb870930(v=vs.85).aspx#adding__removing__and_prioritizing_cipher_suites

Default location and ordering of Cipher Suites:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Cryptography\Configuration\Local\SSL\00010002

clip_image010
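If you want to dump the current default ordering without opening the registry editor, here is a minimal sketch (the list lives in the multi-string Functions value at that location):

  $path = 'HKLM:\SYSTEM\CurrentControlSet\Control\Cryptography\Configuration\Local\SSL\00010002'
  (Get-ItemProperty -Path $path -Name Functions).Functions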

Location of Cipher Suite ordering that is modified by setting this group policy –

Computer Configuration\Administrative Templates\Network\SSL Configuration Settings\SSL Cipher Suite Order

clip_image012

When the SSL Cipher Suite Order group policy is modified and applied successfully it modifies the following location in the registry:

HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Cryptography\Configuration\SSL\00010002

The Group Policy dictates the effective cipher suites: once this policy is applied, its settings take precedence over what is in the default location, and the GPO overrides anything else configured on the computer. The Microsoft Schannel team does not support directly manipulating the registry.

Group Policy settings are domain settings configured by a domain administrator and should always have precedence over local settings configured by local administrators.

Being secure is a good thing, and depending on your environment, it may be necessary to restrict certain cryptographic algorithms from use. Just make sure you do your due diligence in testing these settings. It is also well worth your time to really understand how the vulnerability scanning software your company just purchased does its testing. A double-sided network trace will reveal both sides of the client and server hello exchange and what cryptographic algorithms are being offered from each side over the wire.

Jim “Insert cryptic witticism here” Tierney

Does your logon hang after a password change on win 8.1 /2012 R2/win10?


Hi, Linda Taylor here, Senior Escalation Engineer from the Directory Services team in the UK.

I have been working on this issue, which seems to be affecting many of you globally on Windows 8.1, 2012 R2 and Windows 10, so I thought it would be a good idea to explain the issue and workarounds while we continue to work on a proper fix here.

The symptoms are such that after a password change, logon hangs forever on the welcome screen:

clip_image002

How annoying….

The underlying issue is a deadlock between several components including DPAPI and the redirector.

For full details of the issue, workarounds and related fixes, check out my post on the AskPFEPlat blog here: http://blogs.technet.com/b/askpfeplat/archive/2016/01/11/does-your-win-8-1-2012-r2-win10-logon-hang-after-a-password-change.aspx

This is now fixed in the following updates:

Windows 8.1, 2012 R2, 2012 install:

For Windows 10 TH2 build 1511 install:

I hope this helps,

Linda

Previewing Server 2016 TP4: Temporary Group Memberships


Disclaimer: Windows Server 2016 is still in a Technical Preview state – the information contained in this post may become inaccurate in the future as the product continues to evolve. More specifically, there are still issues being ironed out in other parts of Privileged Access Management in Technical Preview 4 for multi-forest deployments.   Watch for more updates as we get closer to general availability!

Hello, Ryan Ries here again with some juicy new Active Directory hotness. Windows Server 2016 is right around the corner, and it’s bringing a ton of new features and improvements with it. Today we’re going to talk about one of the new things you’ll be seeing in Active Directory, which you might see referred to as “expiring links,” or what I like to call “temporary group memberships.”

One of the challenges that every security-conscious Active Directory administrator has faced is how to deal with contractors, vendors, temporary employees and anyone else who needs temporary access to resources within your Active Directory environment. Let’s pretend that your Information Security team wants to perform an automated vulnerability scan of all the devices on your network, and to do this, they will need a service account with Domain Administrator privileges for 5 business days. Because you are a wise AD administrator, you don’t like the idea of this service account that will be authenticating against every device on the network having Domain Administrator privileges, but the CTO of the company says that you have to give the InfoSec team what they want.

(Trust me, this stuff really happens.)

So you strike a compromise, claiming that you will grant this service account temporary membership in the Domain Admins group for 5 days while the InfoSec team conducts their vulnerability scan. Now you could just manually remove the service account from the group after 5 days, but you are a busy admin and you know you’re going to forget to do that. You could also set up a scheduled task to run after 5 days that runs a script that removes the service account from the Domain Admins group, but let’s explore a couple of more interesting options.

The Old Way

One old-school way of accomplishing this is through the use of dynamic objects in 2003 and later. Dynamic objects are automatically deleted (leaving no tombstone behind) after their entryTTL expires. Using this knowledge, our plan is to create a security group called “Temp DA for InfoSec” as a dynamic object with a TTL (time-to-live) of 5 days. Then we’re going to put the service account into the temporary security group. Then we are going to add the temporary security group to the Domain Admins group. The service account is now a member of Domain Admins because of the nested group membership, and once the temporary security group automatically disappears in 5 days, the nested group membership will be broken and the service account will no longer be a member of Domain Admins.

Creating dynamic objects is not as simple as just right-clicking in AD Users & Computers and selecting “New > Dynamic Object,” but it’s still pretty easy if you use ldifde.exe and a simple text file. Below is an example:

clip_image002
Figure 1: Creating a Dynamic Object with ldifde.exe.

dn: cn=Temp DA For InfoSec,ou=Information Security,dc=adatum,dc=com
changeType: add
objectClass: group
objectClass: dynamicObject
entryTTL: 432000
sAMAccountName: Temp DA For InfoSec

In the text file, just supply the distinguished name of the security group you want to create, and make sure it has both the group objectClass and the dynamicObject objectClass. I set the entryTTL to 432000 in the screen shot above, which is 5 days in seconds. Import the object into AD using the following command:
  ldifde -i -f dynamicGroup.txt

Now if you go look at the newly-created group in AD Users & Computers, you’ll see that it has an entryTTL attribute that is steadily counting down to 0:

clip_image004
Figure 2: Dynamic Security Group with an expiry date.

You can create all sorts of objects as Dynamic Objects by the way, not just groups. But enough about that. We came here to see how the situation has improved in Windows Server 2016. I think you’ll like it better than the somewhat convoluted Dynamic Objects solution I just described.

The New Hotness (Windows Server 2016 Technical Preview 4, version 1511.10586.122)

For our next trick, we’ll need to enable the Privileged Access Management feature in our Windows Server 2016 forest. This is an AD optional feature, like the AD Recycle Bin. Keep in mind that, just like the AD Recycle Bin, once you enable the Privileged Access Management feature in your forest, you can’t turn it off. This feature also requires a Windows Server 2016 or “Windows Threshold” forest functional level:

clip_image006
Figure 3: This AD Optional Feature requires a Windows Server 2016 or “Windows Threshold” Forest Functional Level.

It’s easy to enable with PowerShell:
Enable-ADOptionalFeature 'Privileged Access Management Feature' -Scope ForestOrConfigurationSet -Target adatum.com

Now that you’ve done this, you can start setting time limits on group memberships directly. It’s so easy:
Add-ADGroupMember -Identity 'Domain Admins' -Members 'InfoSecSvcAcct' -MemberTimeToLive (New-TimeSpan -Days 5)

Now isn’t that a little easier and more straightforward? Our InfoSec service account now has temporary membership in the Domain Admins group for 5 days. And if you want to view the time remaining in a temporary group membership in real time:
Get-ADGroup 'Domain Admins' -Property member -ShowMemberTimeToLive

clip_image008
Figure 4: Viewing the time-to-live on a temporary group membership.

So that’s cool, but in addition to convenience, there is a real security benefit to this feature that we’ve never had before. I’d be remiss not to mention that with the new Privileged Access Management feature, when you add a temporary group membership like this, the domain controller will actually constrain the Kerberos TGT lifetime to the shortest TTL that the user currently has. What that means is that if a user account only has 5 minutes left in its Domain Admins membership when it logs on, the domain controller will give that account a TGT that’s only good for 5 more minutes before it has to be renewed, and when it is renewed, the PAC (privilege attribute certificate) will no longer contain that group membership! You can see this in action using klist.exe:

clip_image010
Figure 5: My Kerberos ticket is only good for about 8 minutes because of my soon-to-expire group membership.

Awesome.
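If you want to reproduce that check yourself, klist.exe is all you need; the ticket start and end times are right there in the output:

  rem Show the cached TGT, including its start and end times
  klist tgt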

Lastly, it’s worth noting that this is just one small aspect of the upcoming Privileged Access Management feature in Windows Server 2016. There’s much more to it, like shadow security principals, bastion forests, new integrations with Microsoft Identity Manager, and more. Read more about what’s new in Windows Server 2016 here.

Until next time,

Ryan “Domain Admin for a Minute” Ries


