Tuesday, December 08, 2009

Rest in peace, ExtremeSwank OpenID and OAuth

ExtremeSwankOpenID and ExtremeSwankOAuth, both libraries authored by John Ehn, have been discontinued, according to the projects’ respective home pages, which now carry a note that reads: “Note: This … Consumer is no longer in development.”

ExtremeSwankOpenID had been stagnant lately, and when a recent OpenID vulnerability was identified as impacting the library due to a hidden “feature” in the .NET Framework’s handling of HTTP responses, it appears the library was retired rather than fixed.  This library was one of only two OpenID implementations written in .NET that were recognized in many OpenID circles.  It also touted a unique feature (which I never investigated personally) of allowing desktop applications to use OpenID to authenticate their users.

ExtremeSwankOAuth is one of many .NET OAuth implementations, and the reasoning for its retirement is less clear.

Notwithstanding ExtremeSwankOpenID’s recent lack of development, it was, ironically, John’s library that was under active development and supported OpenID 2.0 while a very early version of DotNetOpenId lay dormant and supported only OpenID 1.1.  Seeing DotNetOpenId’s own users switch to John’s library was what motivated me to re-engage development of DotNetOpenId, since rechristened DotNetOpenAuth, which is now the only OpenID implementation for .NET that I know of – and a dang good one too, if I may say so.

Although I did not know John personally, I’ve exchanged a few emails with him and he seemed a courteous and motivated programmer.  I wish him well in his future pursuits.

Thursday, December 03, 2009

DotNetOpenAuth v3.3 is released

It’s been nearly six months since v3.2 was released.  So what’s in v3.3 that took so long to bake?  Well, a lot of it was waiting for Code Contracts to mature enough to bet on the technology, and getting used to it. 

The most exciting changes though are the new OpenIdSelector control, and the new project template that helps you get going fast and strong with a new web site that accepts OpenID and/or InfoCard to log users in.  Seriously, you gotta get this version and try it out.  You can see a live demo of the new login UX now.

Go download DotNetOpenAuth now.  And get the project template too.

As usual, you can get more of the details of the changes on the VersionChanges wiki page.

Do you like what you see?  Don’t forget to contribute so that future versions can keep rocking!


Thursday, October 22, 2009

Feedback requested: New OpenID RP login UX prototype

OpenID RP login UX

Live demo location: http://openidux.dotnetopenauth.net/

Design considerations

The DNOA login UX design document contains the design spec, and some of the reasoning that went into that design.

One high-level goal of all this work is to produce a set of HTML, CSS, and JS files that can work on any web platform, so that Ruby, Python, PHP, ColdFusion, and (of course) ASP.NET RP web sites can benefit from a better UI for logging users in.

Interesting scenarios to experiment with and/or test

  • Log in by clicking on Members Only. This invokes the full-page redirect login UI.
  • Log in by clicking Login in the upper-right corner of the page. This invokes the popup dialog UI.
  • Visit the account management page and add additional OpenIDs or InfoCards to your account so you can log in with multiple identities yet be recognized as holding just one account.
  • Log in multiple times, using various OPs. Notice first that we highlight the button you chose the prior time. This helps the user avoid splintering his identity on a return visit in the event he has accounts with more than one displayed OP.
  • Notice that in the login UI some OPs support checkid_immediate, and on a return visit, a green checkmark appears in the lower-right corner of an OP button when an immediate login is available. If a green checkmark is not visible on an OP button, a popup window will be used to guide the user through the initial login process. Some OPs (such as Verisign and Yahoo) do not support checkid_immediate, and will never display green checkmarks.
  • When logging in, try using the OpenID button. Notice that as soon as you finish typing, discovery on that identifier begins and a login button appears within the text box. Next time you visit, the UX will remember what identifier you typed in and help you log in again.
  • Try using the OpenID button with an identifier that delegates to multiple OPs. Notice how the Login button that appears to help you go through checkid_setup (if no checkid_immediate requests come back positive) is a split button, allowing you to actually pick which OP to log in with, and these OPs are in priority order (adjusted for OPs that are down or misbehaving, which are moved to the bottom).
  • Use Internet Explorer, and log in with your InfoCard.

Special release notes

In this iteration, I've elected to go with the popup dialog approach to displaying the login UI rather than a popup browser window. This is still alterable, and your feedback and/or preferences on this decision are most welcome.

The current set of OP buttons displayed includes four OPs: Google, Yahoo, Verisign and MyOpenID. The last two of these do not fit the qualifications given in the design document, but they are included here to assist in the feedback process, and because I don't know how to make four buttons (Google, Yahoo, OpenID and InfoCard) look good, so I jumped up from three to six.

In the OpenID text box area, after authentication completes, a green checkmark is displayed, but sometimes no login button appears to complete login. This is a UX issue I haven't figured out how to solve yet. For now, the way to proceed with login is to click the original, large OpenID button again.

The browsers I've tested with are IE8, Chrome 3, Firefox 3.5 and Safari 4. If you test with other/older browsers, please leave feedback about how your experience was. But currently I'm not targeting older browsers, so any bug reports regarding backward compatibility may not be fixed.

How to leave feedback

Leave a comment.

Saturday, October 10, 2009

Minify your EmbeddedResource .js and .css files in your MSBuild project

If you write a C# or VB.NET class library that contains ASP.NET controls that also have .js or .css files embedded in your assembly, you probably want to minify those files for optimal download size in production, but keep the files readable for coding and debugging. What if you could add a single <Import> to your class library’s project file that would automatically minify these files in Release builds, while leaving them untouched in debug builds?

Well, I wrote a .targets file and associated MSBuild task that uses Dean Edwards’ excellent Packer algorithm to make .js files really small. Just import the .targets file right after your last <Import> and you’re done.
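Concretely, the hookup amounts to one extra line in your project file. The .targets file name and path below are placeholders of my own invention — point the Import at wherever you extract the download:

```xml
<!-- At the bottom of your .csproj, the standard C# targets import already exists: -->
<Import Project="$(MSBuildToolsPath)\Microsoft.CSharp.targets" />
<!-- Add the minification import right after it. File name/path are hypothetical;
     use the location where you unzipped the .targets file. -->
<Import Project="$(SolutionDir)tools\MinifyEmbeddedResources.targets"
        Condition="Exists('$(SolutionDir)tools\MinifyEmbeddedResources.targets')" />
```

The Condition guard keeps the project loadable on machines that haven't pulled the tool down yet.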

You can get it here:


Just click "Artifacts" then "Download all (.zip)" in the upper-right corner.

Or the source for it if you're interested:


Just click the "download" button to get it as a zip file.

Please take this as a tip and not an endorsement. This is not a Microsoft product—it’s just an open source project that I wanted to point out to you for you to evaluate whether it’s appropriate for your project. As far as the license goes, this MSBuild task uses code from Dean Edwards that is under the LGPL license, so the whole MSBuild task .dll ships under the LGPL license. While I don’t interpret that to mean that your shipping project must be under any particular license, you should consult your own attorneys and not take my word for it.

(cross-posted from http://blogs.msdn.com/vsproject/)

Friday, October 09, 2009

VS2008 project template for OpenID and InfoCard relying parties

I finally built a project template to make it easier to write an OpenID relying party web site using C# and ASP.NET.  Up to this point all we had were the sample RPs that ship with DotNetOpenAuth, which were deliberately kept simple.  They didn’t use a real database, didn’t follow some best practices, and weren’t very real. 

Now you can start your next web site with OpenID and InfoCard logins already working!  Complete with role authorization, login, account creation, and account management that allows multiple OpenIDs/InfoCards per account.

And did I mention it’s free? (donations gratefully accepted)

Download it and copy the "Project Templates\*.vsi" file into your %USERPROFILE%\Documents\Visual Studio 2008\Templates\ProjectTemplates folder, and it will appear in your New Project wizard in Visual Studio 2008.

Yes, the site is still unbeautified, but that’s so you can brand it to look like yours.

Have fun.  And let me know what you think of it or can contribute.

Friday, September 25, 2009

Optimal OpenID UX finally underway

I’m finally making progress on building a set of HTML and javascript files that can be used on any OpenID relying party web site to allow visitors to easily log in with OpenID, without even knowing what OpenID is.  I mentioned my goal to do this some time ago, and now I have a small partially functional prototype.  Please try it out, and keep coming back and letting me know what you think of it and where you’d like to see it go.

Try out the OpenID login experience.  Remember to comment on what you like and dislike, and what aspects you’d like to see added or changed.

At the moment, I’m struggling to decide whether to go with a fully bona fide popup window or a JavaScript in-page dialog.  So I provide two links at the top of the page so you can try out each one.  If we don’t go with the full popup window, we’ll have to either redirect the whole page to the Provider, which is sub-optimal for the user and for the RP, or use a popup window once the user has selected their OP.

Monday, September 07, 2009

How to easily fetch OpenID attributes, regardless of the Provider

In a previous article, I bemoaned the pain of writing an OpenID Relying Party that wants to fetch user attributes from the user’s OpenID Provider, because of the (at least) four ways in which those attributes must be requested.  Later I promised that DotNetOpenAuth would offer help to alleviate that pain.  That help has come.  It actually came way back on June 26, 2009, but only now did I officially document it.

Introducing the AXFetchAsSregTransform “behavior”, which is now fully documented on the project wiki site.  I’ve spent some time recently (and will spend more in the near future) documenting the common scenarios that people have questions about, and since it is on the wiki instead of only on this blog, it will be more likely to be updated as new versions of the library come out.

The AXFetchAsSregTransform behavior makes it so that all you have to do is work with ClaimsRequest and ClaimsResponse – no matter what Provider you’re talking to.  If the Provider only supports AX, it Just Works because the special behavior will automatically translate your sreg request into an AX request, and then translate the response back from AX to sreg.  Woot.
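For reference, enabling the behavior is roughly a one-element addition to the relying party's web.config. This is a sketch from memory — verify the exact section names and the assembly-qualified type name against the wiki page:

```xml
<dotNetOpenAuth>
  <openid>
    <relyingParty>
      <behaviors>
        <!-- Translates your sreg request/response to and from the AX formats on the wire. -->
        <add type="DotNetOpenAuth.OpenId.Behaviors.AXFetchAsSregTransform, DotNetOpenAuth" />
      </behaviors>
    </relyingParty>
  </openid>
</dotNetOpenAuth>
```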

Saturday, June 27, 2009

DotNetOpenAuth v3.2 is done

DotNetOpenAuth v3.2 just came off the presses.  Lots of feature work and a few interop fixes in this release.  The biggest highlights being:

  • Very simple story for both RPs and OPs interested in interoperating with others whether they use sreg or one of the several AX formats (finally!)
  • OAuth 1.0a support
  • PPID generation for OPs to protect customers' privacy.

Go download it.

As usual, see our VersionChanges wiki page for a more complete list of the work done for v3.2.  (There are lots more noteworthy changes that I don't describe above).

Wednesday, June 24, 2009

How to get ILMerge to work with .PFX files

ILMerge is an excellent tool for “linking” multiple assemblies into one.  But one of its switches, /keyfile:, which allows it to sign the resulting merged assembly, only accepts .snk files.  It reports no error if you feed it a password-protected .pfx key-pair file, but the resulting assembly is invalid. 

>ilmerge /keyfile:some.pfx /out:merged\some.dll some.dll someother.dll

>sn -v merged\some.dll

Microsoft (R) .NET Framework Strong Name Utility  Version 3.5.30729.1
Copyright (c) Microsoft Corporation.  All rights reserved.

merged\some.dll is a delay-signed or test-signed assembly

>sn -R merged\some.dll some.pfx

Failed to read token from assembly -- The public key for assembly 'merged\some.dll' was invalid.

So ILMerge was generating an assembly that strong name verification reported was delay-signed, and which sn.exe could not re-sign either. 

I shot an email off to Mike Barnett and he was very responsive and interested in helping.  He suggested that I try an ordinary .snk file (a keypair file not protected with a password) and that worked fine.

So how do you get ILMerge to work with .pfx files?  First extract the public key from the pfx file, then use that public key to have ILMerge delay-sign the merged assembly, then use sn.exe to re-sign.  Not too bad, really:

>sn -p some.pfx some.pub
>ilmerge /keyfile:some.pub /delaysign /out:merged\some.dll some.dll someother.dll
>sn -R merged\some.dll some.pfx
And now we have an ILMerge'd assembly, signed by your PFX file. Hurray!

Saturday, June 20, 2009

Help is coming for the Sreg/AX interop problem for OpenID

Just to get your mouth watering for DotNetOpenAuth v3.2...

V3.2 has a new "behaviors" plugin capability that lets RPs and OPs get additional functionality with very little effort.  For example, OPs can add PPID identifier support very easily with just a few lines of code. 

But of most interest, I suspect, is the sreg/AX interop behavior.  If activated in your web.config file (one line), it causes an RP or an OP to see just sreg, yet on the wire speak sreg and/or any of the three known AX attribute formats, maximizing interoperability while keeping things extremely simple on the RP or OP side.  For example, you can prepare an sreg attribute request and send it to the OP; if the OP only supports AX (and discovery can tell), the sreg extension is automatically converted to AX in the right format before being sent.  When the response comes back, the AX response is implicitly translated back to sreg so your web site can just deal with sreg.

If the OP doesn't advertise which attribute extensions and formats it supports, this optional DNOA behavior "spreads" the sreg extension to cover all possibilities to maximize the chance of getting the answer back that you want.  (Woot!)  No more hand writing all that interop code that makes the OpenID attribute extension story so embarrassing.  :)

The behavior does a similar trick for OPs, where all request formats look like sreg, and then when the response is sent out, it is converted back to whatever format the request came in.

You might be asking "Why are we making everything look like sreg?  Why not AX since it's newer/better?"  Fair question.  Sreg covers all the use cases for most OpenID sites at the moment, and since it's a subset of AX in almost every respect, an RP converting from sreg to AX implicitly does not lose any data.  And since it's a simpler extension, the object model to read/write it is simpler, making the RPs and OPs job simpler as well.  It seemed like the right thing for the times.  When AX finally gets its attribute type URI story together we can deprecate this behavior and we can all just use a single AX attribute format.

Friday, June 12, 2009

Reverse engineering ASP.NET Membership passwords and salts

I’m working on a project that was using the ASP.NET SQL Membership and I needed to remove the Membership provider from the system since we wanted more control over the user tables.  Our existing users had passwords that ASP.NET Membership had hashed and salted, and we needed to be able to maintain those user accounts, which means we have to be able to validate logins against the salted passwords. 

I understood how password salts work in general, but I could not find any documentation of exactly how ASP.NET Membership implements them.  Fortunately it wasn’t too hard to figure out, and here is a method that can validate user passwords against the Membership tables without using the Membership.ValidateUser method:

private static HashAlgorithm passwordHasher = HashAlgorithm.Create("SHA1");

private bool ValidateUser(string username, string password) {
    var user = GlobalApplication.Database.Users.FirstOrDefault(u => u.UserName == username);
    if (user == null) return false;

    // ASP.NET Membership stores Base64(SHA1(salt bytes + UTF-8 password bytes)),
    // with the salt bytes FIRST in the buffer being hashed.
    byte[] saltBytes = Convert.FromBase64String(user.Membership.PasswordSalt);
    byte[] passwordBytes = Encoding.UTF8.GetBytes(password);
    byte[] bytesToHash = new byte[saltBytes.Length + passwordBytes.Length];
    saltBytes.CopyTo(bytesToHash, 0);
    passwordBytes.CopyTo(bytesToHash, saltBytes.Length);

    byte[] hash = passwordHasher.ComputeHash(bytesToHash);
    string base64Hash = Convert.ToBase64String(hash);
    return user.Membership.Password == base64Hash;
}

Tuesday, May 26, 2009

Caching results of .NET IEnumerable<T> generator methods

If you're already familiar with generator methods and want to jump to intelligent caching of their results, skip further down in this blog post.

In C#, generator methods are methods that use yield return to return an IEnumerable<T> where the elements enumerated over are generated on-demand.  These are useful for a couple of scenarios.  One is deferred execution – not doing work until you absolutely need to.  The other is that the method may generate infinite results, or more results than the caller may care to enumerate over – thus avoiding unnecessary computation altogether.

Here’s an example non-generator method. This showcases some simple use cases although it’s obviously not computationally intensive in this example:

/// Returns a sequence of elements as a complete list.
public IEnumerable<int> GetNumbers() {
	List<int> results = new List<int>();
	results.Add(1);
	results.Add(2);
	for (int i = 3; i < 10; i++) {
		results.Add(i);
	}
	return results;
}

And its "generator method" style equivalent:

/// Returns a sequence of elements, generating them on-demand.
public IEnumerable<int> GetNumbers() {
	yield return 1;
	yield return 2;
	for (int i = 3; i < 10; i++) {
		yield return i;
	}
}

In the first example all the results are generated and collected immediately and returned as a batch.  In the second example, the initial call to GetNumbers() doesn’t execute anything until the IEnumerable<int> is actually queried for its first element.  For each enumerated element, only enough code is executed to determine the next element.  The end of the method isn’t executed until the end of the sequence is reached. 

The important thing to note from these examples is that the signatures of the methods are the same.  They both return IEnumerable<int>, and yet one defers execution and the other proactively generates all results before returning.  Why not return an IList<int> from the non-generator method?  Because that locks you into this style and also implies that the caller might have modification access to the list.  By returning IEnumerable<T>, the method can be re-implemented as a generator method later if desired, and the caller has no doubt that it is receiving a read-only sequence.

Now consider how these methods might be used:

public void PrintNumbers() {
	IEnumerable<int> numbers = GetNumbers();

	// Print to the screen.
	foreach (int element in numbers) {
		Console.WriteLine(element);
	}

	// Print to the log file as well.
	foreach (int element in numbers) {
		Trace.WriteLine(element);
	}
}

If GetNumbers() is implemented as returning a List<int> object, then the work to generate the sequence of numbers is only done once. But if GetNumbers() is implemented as a generator method, the work is done twice: once in each foreach loop.  Since the point of generator methods is to decrease CPU work load, this obviously is counter-productive.  Also, since the caller code is unchanged and the signatures of the methods are the same, this code may be written without any realization of the inefficiency. 

What we need is a way to consume generator methods, leveraging their goodness for deferred execution, without the risk of causing processing to be done twice where that processing is expensive and the cost of storing a cached result is relatively inexpensive.

Introducing IEnumerable<T>.CacheGeneratedResults()

To solve this problem, I have written an extension method called CacheGeneratedResults.  Since it’s an extension method, it is automatically available on all IEnumerable<T> objects.  It preserves the deferred-execution goodness of generator methods, but avoids repeat sequence generation if the sequence is enumerated over multiple times.  It does this by caching all generated results in a hidden List<T> as they are pulled by the caller.  To fix the above example, all you have to do is add .CacheGeneratedResults() to the code:

public void PrintNumbers() {
	IEnumerable<int> numbers = GetNumbers().CacheGeneratedResults();

	// Print to the screen.
	foreach (int element in numbers) {
		Console.WriteLine(element);
	}

	// Print to the log file as well.
	foreach (int element in numbers) {
		Trace.WriteLine(element);
	}
}

In the above example, GetNumbers() may or may not be implemented as a generator method, but this code will always execute the code to build up the sequence of numbers only once either way.  This is similar to calling the LINQ extension method IEnumerable<T>.ToList() instead of CacheGeneratedResults(), except that ToList() will always pull all results from a generator method rather than just those needed by the caller, and it always makes a copy of the results as a new List<T>, even if the object you call ToList() on is already an IList<T>, thus doubling the memory you need to store the list.

The CacheGeneratedResults() method checks the IEnumerable<T> instance passed to it to see if it needs caching.  Lists, collections, and arrays are passed straight through to the caller with no caching added, since they don’t represent extra work for repeated enumerations.  Objects that only expose IEnumerable<T>, as generator methods’ return values do, are wrapped in a private type that implements the same interface.  This wrapper intercepts all calls to the IEnumerator<T> and injects the cache between the caller and the live object to protect against repeat processing.
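The wrapper described above can be sketched roughly like this. This is my own minimal, non-thread-safe reconstruction for illustration, not the actual released code:

```csharp
using System.Collections;
using System.Collections.Generic;

public static class EnumerableCacheExtensions {
	/// Wraps a sequence so that generated elements are cached as they are
	/// pulled, making repeat enumerations cheap.
	public static IEnumerable<T> CacheGeneratedResults<T>(this IEnumerable<T> sequence) {
		// Lists, collections and arrays are already cheap to re-enumerate.
		if (sequence is ICollection<T>) {
			return sequence;
		}

		return new CachedEnumerable<T>(sequence);
	}

	private sealed class CachedEnumerable<T> : IEnumerable<T> {
		private readonly IEnumerator<T> live;
		private readonly List<T> cache = new List<T>();

		internal CachedEnumerable(IEnumerable<T> sequence) {
			this.live = sequence.GetEnumerator();
		}

		public IEnumerator<T> GetEnumerator() {
			int index = 0;
			while (true) {
				if (index < this.cache.Count) {
					// Serve a previously generated element from the cache.
					yield return this.cache[index++];
				} else if (this.live.MoveNext()) {
					// Pull the next element from the live generator and cache it.
					this.cache.Add(this.live.Current);
					yield return this.cache[index++];
				} else {
					yield break;
				}
			}
		}

		IEnumerator IEnumerable.GetEnumerator() {
			return this.GetEnumerator();
		}
	}
}
```

Each enumerator the wrapper hands out replays the shared cache first and only advances the live generator for elements nobody has pulled yet.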

I’ve released the CacheGeneratedResults method and several unit tests for it under the liberal Ms-PL open source license.  You can get it from GitHub.

Monday, May 11, 2009

Uri.EscapeDataString and HttpUtility.UrlEncode are NOT the same

For some reason Microsoft defined URI escaping twice: Uri.EscapeDataString and HttpUtility.UrlEncode seem to cover the same need. There’s another pair: Uri.EscapeUriString and HttpUtility.UrlPathEncode which again seem to be redundant with each other. But in particular I found a small difference in behavior between the first two methods that should be called out.

System.Web.HttpUtility.UrlEncode escapes the tilde (~) character; System.Uri.EscapeDataString does not. For every other character their behavior appears to be the same (in my tests, anyway). One overall difference, though, is that HttpUtility.UrlEncode uses lowercase hex encoding whereas Uri.EscapeDataString uses uppercase. RFC 3986 says uppercase should be used.

Incidentally, contrary to the MSDN documentation for Uri.EscapeDataString, turning on the IRI parsing option in the (web) application’s .config file does NOT turn on RFC 3986-compliant URL escaping, so the default RFC 2396 escaping is always used. Since OpenID and OAuth require RFC 3986 URI escaping, I had to write my own RFC 3986 escaping “upgrader” method:

/// <summary>
/// The set of characters that are unreserved in RFC 2396 but are NOT unreserved in RFC 3986.
/// </summary>
private static readonly string[] UriRfc3986CharsToEscape = new[] { "!", "*", "'", "(", ")" };

/// <summary>
/// Escapes a string according to the URI data string rules given in RFC 3986.
/// </summary>
/// <param name="value">The value to escape.</param>
/// <returns>The escaped value.</returns>
/// <remarks>
/// The <see cref="Uri.EscapeDataString"/> method is <i>supposed</i> to take on
/// RFC 3986 behavior if certain elements are present in a .config file.  Even if this
/// actually worked (which in my experiments it <i>doesn't</i>), we can't rely on every
/// host actually having this configuration element present.
/// </remarks>
internal static string EscapeUriDataStringRfc3986(string value) {
 // Start with RFC 2396 escaping by calling the .NET method to do the work.
 // This MAY sometimes exhibit RFC 3986 behavior (according to the documentation).
 // If it does, the escaping we do that follows it will be a no-op since the
 // characters we search for to replace can't possibly exist in the string.
 StringBuilder escaped = new StringBuilder(Uri.EscapeDataString(value));

 // Upgrade the escaping to RFC 3986, if necessary.
 for (int i = 0; i < UriRfc3986CharsToEscape.Length; i++) {
  escaped.Replace(UriRfc3986CharsToEscape[i], Uri.HexEscape(UriRfc3986CharsToEscape[i][0]));
 }

 // Return the fully-RFC3986-escaped string.
 return escaped.ToString();
}
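To make the upgrade concrete, here is a hypothetical input run through the method above (not self-contained — it assumes the EscapeUriDataStringRfc3986 method and its character table):

```csharp
// '!' (0x21) and '*' (0x2A) are unreserved in RFC 2396 but reserved in RFC 3986.
string escaped = EscapeUriDataStringRfc3986("a!b*c");
// Uri.EscapeDataString leaves them alone under RFC 2396 rules; the upgrade loop
// then percent-encodes them with uppercase hex: "a%21b%2Ac"
```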

Tuesday, April 21, 2009

Recent OpenID relying party vulnerabilities

The OSIS I5 OpenID interop testing is well underway.  Last weekend while testing some OpenID relying party web sites, John Bradley happened upon a web site that failed a particularly alarming test.  Further investigation revealed that the security hole affected all OpenID relying parties based on Janrain’s Ruby OpenID library.  Perhaps Janrain is using its Ruby library for RPXNow, because I discovered that RPXNow had the same security hole.

Janrain acted quickly.  They fixed RPXNow and released an update to the OpenID Ruby library within a day or so (version 2.1.5) after we reported the bugs to them. 

What does this mean for OpenID relying parties?  If you are using Janrain’s Ruby OpenID library (if you’re based on Ruby you probably are), make certain you are using the very new 2.1.5 version.  RPXNow customers don’t need to do anything as the patch was applied at the service.

Without going into the exploit details, since there are still vulnerable relying parties that haven’t upgraded yet, let’s just say that this security hole was particularly devastating: it allowed a hacker to spoof anyone’s identity at the RP.  In English: “anyone could log in as anyone”.  Well, some basic knowledge of how OpenID works and a hacker tool would be required. 


RPXNow customers as well as Ruby OpenID library users have been vulnerable, potentially for several months.  This means that if your site used RPXNow to allow OpenID logins, your users’ web accounts may have been hijacked, even if you haven’t heard any reports of it. 

If your site uses RPXNow or the Ruby OpenID library and stores private information for your users, you owe it to your users to notify them that their private data may have been compromised and/or their accounts/identity stolen.  Again, RPXNow has already been patched so in the future users will hopefully be safe, but the fix cannot be retroactive, and previously hijacked accounts are still victims.

I haven’t seen Janrain make any announcements regarding this security vulnerability.  I hope that in their private channels to their RPXNow and Ruby library customers they have advised them of the problem and that they should contact their respective customers to warn them of the potential loss of private data. 

I personally feel awful about this.  As neat as OpenID is, one of its weaknesses is that a user cannot be confident that an arbitrary RP he/she’s about to log into is a secure implementation of OpenID, and thus bugs like this can greatly reduce public trust in using OpenID to secure their identities.  But that’s why we do OSIS OpenID testing… to find and correct bugs like these.  I just wish we never found anything serious.

Thursday, April 16, 2009

DotNetOpenAuth 3.0 released

Download it now.

Previously named DotNetOpenId in its v1.x and 2.x releases, the v3.0 release is rechristened DotNetOpenAuth to reflect its support for multiple authentication and authorization protocols.  Sporting OpenID, OAuth and InfoCard support in its initial incarnation, it has been re-architected and largely rewritten to make adding more protocols fast and less error-prone.

Even if you’re already using DotNetOpenId 2.x and have no interest in InfoCard or OAuth, this is a worthy upgrade.  It’s faster, more stable, and better tested.  This new version is already being used as the standard for OSIS I5 OpenID interop testing, adding assurance that sites that use this library are secure and interoperate with many other sites and OpenID libraries.

In the making since August 30th, DotNetOpenAuth took 229 days to write.  Valued at nearly $1.9 million by Ohloh.net, it is truly the culmination of a lot of work by many developers and cryptography experts.  Although I wrote the library, I included some code from the Mono project for the Diffie-Hellman algorithm that OpenID requires.

  • New OAuth support! Both for Service Provider and Consumer roles.
  • RP+OP: discovery results cached for faster repeat logins (Issue 198).
  • RP+OP: Exceptions are now much more predictable: the host need only catch ProtocolException to handle all unexpected error cases.
  • RP+OP: OpenID extensions without simultaneous authentication (not that any such extensions exist).
  • RP+OP: Better interop with some remote servers that omit certain common HTTP headers.
  • RP: New InfoCard Selector ASP.NET control
  • RP: Classic ASP officially supported via our new COM server, including support for the Simple Registration extension.
  • RP: Signed callback arguments so relying parties can be confident their data was not tampered with during authentication.
  • RP: OpenIdAjaxTextBox now batches authentication attempts to several OPs specified in the user's XRDS document simultaneously in search of one that will authenticate without further user interaction.
  • RP: Smaller authentication request messages (shorter URLs).
  • RP: All callback arguments on return_to URL are now signed to protect against tampering (Issue 147).
  • RP: More reliable logins due to nonce checking that is per-provider endpoint instead of global (Issue 175).
  • RP: Added support for using ASP.NET State Server and other serialization-based session stores (Issue 185).
  • RP: More efficient reuse of allocated objects by ASP.NET controls.
  • OP: Ability to customize the lifetimes of each shared association type for added security.
  • OP: Even OpenID 1.x RPs are now protected from replay attacks on positive assertions (Issue 176).
  • OP: New ASP.NET MVC OpenID Provider sample.
  • 430+ unit tests (180+ more than DotNetOpenId 2.x).

Notes to web sites upgrading from DotNetOpenId 2.x:

The public API, while very similar, has changed its namespace.  Hosting sites will need to adapt to the changes!

Monday, March 30, 2009

How to pretty much guarantee that you might get an email address with OpenID

OpenID itself is just an authentication protocol.  It takes OpenID extensions to get more information about the user like their name or email address.  In fact there are two popular extensions that can provide this kind of information: Simple Registration (sreg) and Attribute Exchange (AX).  A web site that wants to accept OpenID logins (this site is called a relying party, or “RP”) and also gather the user’s email address at the same time may do so, but unfortunately it is quite complicated to get the best user experience.

OpenID Providers (aka “OP”) can support either or both of these extensions.  And while the sreg extension is straightforward and consistently implemented, AX is divided.  Let me explain.  If you want an email of a user and you’re using the sreg extension, just ask for the value for “email”.  Simple.  But if you’re using AX, you have to ask for these three attributes:

  1. http://axschema.org/contact/email
  2. http://schema.openid.net/contact/email
  3. http://openid.net/schema/contact/email

Why on earth?  Well, AX is extensible, so any attribute URI can be used to refer to some value that you want.  Unfortunately, before AX was a finalized spec, several popular OPs picked up support for it and made up different ways of describing something as simple as the user’s email attribute.  The very unfortunate thing is that once AX standardized on one Type URI form for the common attributes (#1 in my list above), many of these OPs didn’t bother to update their code to support the official attribute Type URI.

What that means for RPs that can request authentication against arbitrary OPs is that they have to request all three of these attributes and then check for any of these three attributes to have values in the AX response.  But that’s not all, of course…

Some OPs don’t support AX at all, so you also have to send an sreg extension request to fetch the email address, and an RP then has a total of four places in the response to check for an email address.  Why not just use the unified sreg, you ask?  Because Google doesn’t support sreg – only AX. 

Oh, and Google will only give you an email address if the RP indicates that it is an AX “required” attribute.  Google completely ignores attribute requests marked as “requested”.

And Yahoo! doesn’t support either sreg or AX extensions at all.  They plan to, but as yet they don’t give out any user information to RPs. 

So if you request email addresses via sreg and AX, and for AX you ask for the email in all three forms, and if you mark them as required, you have a pretty good chance of maybe getting a user’s email address.
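To make that concrete, here is a sketch (in Python, at the raw protocol-parameter level rather than any particular library's API; the parameter names follow the sreg 1.1 and AX 1.0 specs, and the function names are invented for this example) of a request that asks for the email every way at once, and of the four places to look in the response:

```python
# The three Type URIs under which OPs know the "email" attribute.
AX_EMAIL_URIS = [
    "http://axschema.org/contact/email",       # the official AX schema URI
    "http://schema.openid.net/contact/email",  # pre-final-spec variant
    "http://openid.net/schema/contact/email",  # pre-final-spec variant
]

def email_request_args():
    """Build the extension arguments for an authentication request that
    asks for the user's email via both sreg and AX, marked required."""
    args = {
        # sreg: for OPs that only support Simple Registration
        "openid.ns.sreg": "http://openid.net/extensions/sreg/1.1",
        "openid.sreg.required": "email",
        # AX: for OPs (like Google) that only support Attribute Exchange
        "openid.ns.ax": "http://openid.net/srv/ax/1.0",
        "openid.ax.mode": "fetch_request",
    }
    aliases = []
    for i, uri in enumerate(AX_EMAIL_URIS):
        alias = "email%d" % i
        args["openid.ax.type." + alias] = uri
        aliases.append(alias)
    # Mark them "required", not "if_available", or Google won't answer.
    args["openid.ax.required"] = ",".join(aliases)
    return args

def extract_email(response_args):
    """Check all four places an email address may come back."""
    email = response_args.get("openid.sreg.email")
    if email:
        return email
    for i in range(len(AX_EMAIL_URIS)):
        email = response_args.get("openid.ax.value.email%d" % i)
        if email:
            return email
    return None
```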

OpenID is really cool.  But retrieving attributes about a user is not.  AX is a great spec, but very, very poorly adopted.

Monday, March 16, 2009

Need access to that internal? Don’t touch that dial!

The blessing and curse of open source is that the source can be easily changed. 

Internal types and members don’t need to be backward compatible with previous versions.  This makes fixing bugs and enhancing features much easier in future versions of the software.  Making types and members internal cuts down on documentation that must be written, read and maintained.  It protects callers from misusing a type or member and unknowingly introducing a bug, which could be a security problem.  It also makes consuming the public API easier, since reading over a small public API is quicker than reading over an enormous one.

Coding Horror: I’ve read a suggestion that since DotNetOpenId is open source, all types and members within it might as well be made public rather than internal so that it’s easier to use.  Just because something is open source does not mean that we throw design virtues out the window.  The internal scope exists in virtually every statically typed language because it is so useful – not because closed source software somehow needs it to hide implementation details. 

Coding Horror: I’ve also read emails that casually mention that the writer wanted access to a method in DotNetOpenId that looked useful but was internal.  So they made it public and started using it.  This person did not understand that the method was not designed for use outside of a very specific scenario, and they introduced a significant security hole in their application by misusing it.  It turned out that there was already a public member that did exactly what they wanted.  Had they focused their efforts on the public API, looking for the right way to get the feature they needed (as they would have been forced to do with a closed source library), they would have found it and never had the security hole.

There is only one valid scenario I’ve seen or heard of where DotNetOpenAuth had its types legitimately made public when they were designed to be internal.  It is for the upcoming OSIS interop tests, and the tests specifically need to do the wrong thing, in order to test that the remote party reacts by rejecting the request.  A good library is designed to make doing the right and secure thing easy to discover and easy to do.  Doing the insecure thing should be difficult or impossible.

In DotNetOpenAuth’s case, doing the insecure thing was impossible in many cases, so yes, types had to be elevated from internal to public in order for these interop tests to work.  But since this is a corner case and not something that any production web site should be doing, these types are only made public in a dedicated branch in source control that will never be merged back into main development, although updates from main can certainly be merged into the osis branch. 

So my plea to you, dear reader: if you see a useful-looking internal type or member in DotNetOpenAuth or DotNetOpenId, please ask yourself whether a public member with the functionality you’re looking for already exists.  And if you don’t think there is, ask the dotnetopenid@googlegroups.com mailing list before you make the change.  Do yourself a favor: don’t touch that [access] dial.

Wednesday, March 11, 2009

DotNetOpenAuth 3.0 Beta 2 released

DotNetOpenAuth, previously named DotNetOpenId, is getting nearer to its major 3.0 release.   With Beta 2, we have a security-reviewed, feature-complete library for .NET use of the OAuth and OpenID protocols. 

Although Beta 1 was very rough and was not recommended for use in production, Beta 2 has passed enough security, interop and stability tests to warrant live web sites using this version.  It’s not “release” quality yet, but mostly that’s due to needed stabilization time and to gathering feedback from early Beta 2 adopters, so that any final interop fixes can make it into the final version.  DotNetOpenAuth v3.0 Beta 2 has some very significant features that are new since the last release of DotNetOpenId v2.x.

Download beta 2 from Google Code or Ohloh.  (The project sites are still called DotNetOpenId, but you’re at the right place).

Major enhancements since beta 1:

  1. Much more stable
  2. Classic ASP support
  3. Tamper protection of callback arguments
  4. ASP.NET State Server and other serialized session stores support

Check out the VersionChanges wiki page for a more complete rundown of the changes since v3.0 beta 1 and earlier versions of DotNetOpenId.

Please leave feedback on the new version here as a comment or at the dotnetopenid@googlegroups.com mailing list.  Questions?  Send them to the same mailing list, or post them at StackOverflow.com and tag them with “dotnetopenid”.

Saturday, March 07, 2009

Replay protection for OpenID 1.x relying parties

If you’re writing an OpenID Provider, you should have a strong appreciation for the security of your customers’ identities that you will be protecting.  One aspect of that protection is against replay attacks, where a man-in-the-middle sniffs the identity assertion from a Provider and replays it against the same relying party and manages to log in as the victim.  OpenID 2.0 provides built-in protection against replay attacks, but that leaves OpenID 1.x users vulnerable.

The recently launched OpenID Providers hosted by Yahoo!, Google and Microsoft have mitigated the security problems in OpenID 1.x by refusing to log their users into OpenID 1.x relying parties.  This is secure for their users, but not very helpful in getting them logged into those sites.  There are still quite a few OpenID 1.x relying party web sites out there.  It would be great if we could allow their users to log in and yet still offer them protection.

It turns out that replay protection for OpenID 1.x is not new.  On the relying party side, replay protection can be added with custom arguments on the return_to URL of the authentication request.  The Janrain OpenID and DotNetOpenId libraries do this already.  But since not all OpenID 1.x relying parties can be relied on to have implemented their own replay protection, an OpenID Provider cannot assume any particular relying party is safe unless it is an OpenID 2.0 site.

New with DotNetOpenAuth (a.k.a. DotNetOpenId v3.0) there is a way to have your cake and eat it too.  OpenID Providers that use DotNetOpenAuth as their OpenID library will provide replay protection for all their users, regardless of the OpenID version supported by any arbitrary relying party web site.  It does this by refusing to use a shared association during authentication if the authentication request comes from a 1.x RP.  Instead, it generates its own private association and changes the assoc_handle parameter in the response.  The RP is then forced to verify the assertion by calling back to the Provider.  This is where the Provider can apply its own replay protection, which is exactly what it does.
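A minimal sketch of that OP-side technique (a simplified illustration only, not DotNetOpenAuth's actual implementation; the function names and handle bookkeeping are invented for this example):

```python
import secrets

issued_handles = set()      # private handles this OP has generated
consumed_handles = set()    # private handles already verified once

def downgrade_to_private_association(openid_version, assoc_handle):
    """If the requesting RP speaks OpenID 1.x, ignore its shared-association
    handle and substitute a fresh private one, forcing the RP back to the OP
    to verify the assertion (check_authentication)."""
    if openid_version < 2:
        private = secrets.token_hex(16)
        issued_handles.add(private)
        return private
    return assoc_handle

def check_authentication(assoc_handle):
    """Honor each private handle at most once: the replay protection."""
    if assoc_handle in issued_handles and assoc_handle not in consumed_handles:
        consumed_handles.add(assoc_handle)
        return True
    return False
```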

With replay protection now extended to all versions of OpenID relying parties, it seems that the only security hole left in OpenID 1.x that would justify a Provider’s refusing to work with 1.x RPs is RP site verification (a.k.a. RP discovery).  But to date every Provider, even the big ones previously named, works with OpenID 2.0 RPs even when those RPs don’t happen to support RP discovery, so this isn’t really an issue.

Will we see Yahoo! and others start working with 1.x OpenID RPs?  I doubt it.  But I think they could (securely) if they wanted to.

OpenID association poisoning

As part of the OpenID protocol a relying party often establishes shared secrets (called ‘associations’) with identity providers that are used to verify identity assertions.  It occurred to me that an OpenID relying party might easily introduce a major security hole in the process of establishing an association that could allow identity spoofing.

Each association is assigned a handle, which is the name by which the relying party and the Provider will refer to the shared secret in later transactions.  The potential security hole exists because the Provider alone determines the association handle.  If the relying party is not careful in saving the associations it creates, a rogue Provider could hijack another Provider’s association with the relying party and thereby gain the ability to assert the identity of any user from the other Provider.  Here’s a scenario:

  1. Victim hosts his identity with GoodOP, and has logged into a vulnerable RP and saved some private data.
  2. Hacker hosts EvilOP, which is a carefully contrived Provider rigged to hack into RPs.
  3. Hacker attempts to log into RP as any account hosted by GoodOP, and thereby discovers the handle CompromisedHandle of the shared association between RP and GoodOP (the handle travels in the openid.assoc_handle parameter of the browser redirect, so it is visible to the user).
  4. Hacker instructs EvilOP to assign CompromisedHandle as the handle for the next association it creates with an RP.
  5. Hacker starts a login at RP with a Claimed Identifier that points at EvilOP.  The RP then establishes an association with EvilOP as a preliminary step to the login process. 
  6. EvilOP tells the RP of the new association and says the handle for it is CompromisedHandle.
  7. RP is vulnerable and overwrites the shared secret it has with GoodOP with the new one it established with EvilOP.  Yet CompromisedHandle is still associated with GoodOP in the RP’s associations table.
  8. Denial of Service: The RP can no longer log in users from GoodOP, because the shared secret between them is wrong and the RP will reject identity assertions from GoodOP due to invalid signatures.
  9. Identity Spoofing: EvilOP now can write identity assertions on behalf of GoodOP such that RP thinks they are from GoodOP.  Hacker can use EvilOP to write assertions and log in as anyone who has an account with GoodOP.
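The fix is for the RP to key its association store by the Provider endpoint as well as the handle, so one Provider can never overwrite another's secret. A sketch (illustrative only; the class and method names are invented here, not taken from any particular library):

```python
class AssociationStore:
    """Associations keyed by (provider endpoint, handle), not handle alone,
    so EvilOP reusing GoodOP's handle cannot clobber GoodOP's secret."""

    def __init__(self):
        self._assocs = {}

    def store(self, provider_endpoint, handle, secret):
        self._assocs[(provider_endpoint, handle)] = secret

    def lookup(self, provider_endpoint, handle):
        return self._assocs.get((provider_endpoint, handle))
```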

The good news is that having come up with this possible security hole, I did a check of DotNetOpenId and Janrain’s OpenID Ruby library.  Neither one was vulnerable to this.  Since all of Janrain’s libraries are similar to each other, I ended my investigation because it was likely that all the other Janrain libraries were also secure in this regard. 

Still, this is another argument for web sites to use standard libraries for their OpenID support rather than trying to implement OpenID themselves.  There are just too many potential security holes for a webmaster to avoid them all unless authentication is truly his focus and passion.

Saturday, February 28, 2009

Fixing the OpenID login user experience

The user experience of OpenID at Relying Party web sites is so important to get right.  OpenID is right for your web site's visitors – no doubt in my mind about that.  But we need to make sure it's very easy for your visitors to use, so you don’t lose those who have been pre-wired for the password anti-pattern. 

Several big companies like Yahoo and Google have invested a lot of effort into figuring out how to present OpenID in a way that a user unfamiliar with OpenID can quickly learn, or simply use without learning at all.  The irony is that perhaps the best way to get people using OpenID is to not even tell them that’s what they’re using!  The sad truth is that users have been trained to trust web sites and passwords – both bad things.  We can’t simultaneously undo that damage and convince them that learning OpenID is worth their time.  So instead, webmasters can focus on fixing web sites to avoid the password and individual user account problems using OpenID – without telling the user. 

Some years down the road, users may have figured out the underlying protocol, or they might not.  But who cares, really?  How many users put http:// in front of their web addresses but have no idea what it means?  There is a lot of power in OpenID that can (currently) only be harnessed if users understand OpenID and how to leverage it.  But making this power easier and safer to use will take time.  In the meantime, let’s put to good use the parts of it that have been solved from a UX point of view, so web surfers get used to the right way to do things.

To that end, I’ve been learning jQuery so I can write an optimally easy OpenID login UX (User eXperience).  My goal is to add the new UI to the DotNetOpenId/DotNetOpenAuth library in v3.0 or v3.1 so that using this super-easy UI is as simple for relying parties as dropping in an ASP.NET control.  Here’s a screenshot:


It’s absolutely not done yet.  And I don’t claim that many elements of this UI are original; I’ve applied ideas that many other people have been coming up with and sharing with the community.  My goal for the shipping version is to make it simple HTML, with CSS controlling all the customization, so that theming can be applied based on the web site that’s hosting it. 

You can download and try out the interactive static HTML preview of this by downloading this zip file and opening up default.html: http://groups.google.com/group/dotnetopenid/web/openidlogin.zip

I really want to hear your feedback on this.  If this does get included in DotNetOpenId/DotNetOpenAuth 3.x, I want to make sure it fits your needs. If you think you’d be interested in an easy way to get this login UX on your web site please try this one out and let me know what you think of it, both good and bad.  Send your thoughts to dotnetopenid@googlegroups.com.  And drop a penny in the bucket.

Wednesday, February 04, 2009

DotNetOpenId v3.0 Beta 1 released

Tonight DotNetOpenId, soon to be renamed DotNetOpenAuth, released Beta 1 of the major v3.0 release.  You can download the bits from Ohloh.  Although downloaders should remember that, as a beta, this version should not be used in production, there are several new features that should be worth investigating and building a web application around while the final release is still in development:

  • OAuth support (Both for Service Provider and Consumer roles.)
  • RP+OP: Exceptions are now much more predictable: the host need only catch ProtocolException to handle all unexpected error cases.
  • RP+OP: OpenID extensions without simultaneous authentication.
  • RP: Signed callback arguments so relying parties can be confident their data was not tampered with during authentication.
  • RP: Smaller authentication request messages (shorter URLs).
  • OP: Ability to customize the lifetimes of each shared association type for added security.
  • Over 400 unit tests (150+ more than previous version).
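The "signed callback arguments" item can be illustrated with a short sketch: an HMAC over the canonicalized arguments appended to the return_to URL. This shows the general idea only, not DotNetOpenAuth's actual wire format, and the function names are invented for this example:

```python
import hashlib
import hmac

def sign_args(secret, args):
    """Compute an HMAC over the RP's callback arguments using a secret
    only the RP knows; the RP appends this to its return_to URL."""
    canonical = "&".join("%s=%s" % kv for kv in sorted(args.items()))
    return hmac.new(secret, canonical.encode(), hashlib.sha256).hexdigest()

def verify_args(secret, args, signature):
    """When the user agent returns, recompute and compare the HMAC to
    detect any tampering with the callback arguments."""
    return hmac.compare_digest(sign_args(secret, args), signature)
```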

The biggest addition is obviously OAuth support, which is an entirely new protocol that actually has little-to-nothing to do with OpenID, except that they work great together.  To do this the entire library was rewritten on a new reusable messaging stack that both the OpenID and OAuth protocols share. 

Also keep in mind that with the product rename, the namespace has changed, and a little bit of the public API as well.  This means that this version is not simply a drop-in replacement for DotNetOpenId v2.0, and host sites will have to adjust their code accordingly.

But as always, your feedback and donations for this free, open source software are appreciated!

Monday, February 02, 2009

DotNetOpenId is looking for a new home

DotNetOpenId started on Google Code.  But we’re outgrowing it.  We’d like to move to a shared hosting server where we can have automated tests, nightly builds, and a better web site and online documentation.  This new web site will cost real money though, which is beyond my FOSS-hobby budget.

Please if you like DotNetOpenId, consider making a donation toward making this happen!

Click here to lend your support to dotnetopenid and make a donation at www.pledgie.com!

Sunday, February 01, 2009

DotNetOpenId v3.0 to feature built-in OAuth support

The next major release of DotNetOpenId, slated for a release in or around March 2009, will add OAuth support to the mix.  If you don’t know what OAuth is, it basically provides a way for your site’s visitors to authorize your site to download their email address book without giving you their email address and password.  It’s of course much bigger than that, but that’s the easiest way to start thinking about it.

A little history of the last several months

The DotNetOAuth library that I started a few months ago has come a long way. Its purpose was to be a sandbox to author a new user agent redirect-based messaging framework that would serve OAuth, OpenID, and any similar framework for the future. The framework's charter included being easy, maintainable and discoverable, and very unit testable.

The framework's first application was an implementation of the OAuth protocol (thus the DotNetOAuth library name). Then I added OpenID support to DotNetOAuth, porting a few files from DotNetOpenId but mostly rewriting it all from scratch in order to take advantage of the new messaging framework. Today DotNetOAuth can do OAuth (both Service Provider and Consumer roles) and OpenID (both Provider and Relying Party roles) and I've ported the samples from DotNetOpenId over to DotNetOAuth and they work great.

This marks an important milestone for both DotNetOpenId and DotNetOAuth.  DotNetOpenId has a lot of experience out in the wild, and little changes here and there to cooperate with various systems have already been made.  DotNetOAuth has the new messaging framework and added support for OAuth.  It makes sense (and in fact was part of the initial vision of what this might evolve into) to merge the two libraries so we can capture the benefits of both.

A new name for DotNetOpenId

With this merge comes a new name for the library: DotNet(OpenId) + DotNet(OAuth) = DotNetOpenAuth.  The name reflects the new capabilities of the library, and the technologies it may incorporate in the future.  Future directions may include Shibboleth, InfoCard, and the Next Big Thing for authentication and/or authorization, whatever that may be.  DotNetOpenAuth is the library to build up protocol implementations for .NET.  It is licensed under the Microsoft Public License (Ms-PL), which is a very liberal open source license that allows the software’s royalty-free use in both open source and closed source applications.

Why the new name?  That was a hard decision.  DotNetOpenId has quite a following already, and there was some incentive to keep the name and leverage the momentum.  On the other hand, we want people looking for OAuth support to not pass by DotNetOpenId without considering it because of its name.  In the future we want to incorporate new protocols as well, which will continue to make OpenID just one of many features the library offers.  So in the end, and after talking to Scott Hanselman and Jason Alexander about it (and getting agreement on the problem, with differing suggestions as to the solution), I decided to go ahead and change the name for this major release.

In its first release(s), DotNetOpenAuth will offer an easy upgrade path for users of the DotNetOpenId v2.x releases by including the old namespace and classes in the assembly so that the new library can be dropped in and used immediately, giving webmasters the choice of when to update their code to call the new APIs.  Let me know what you think of this idea as I haven’t written these shims yet and so feedback on whether you need this would be very helpful!

What about DotNetOpenId 2.x?

Thousands of you are already using DotNetOpenId 2.x.  What does this mean for you in terms of the work necessary to upgrade to the new version?  More than usual, but not a whole lot.  First, rest assured that due to its maturity and popularity, support for DotNetOpenId v2.5 (and perhaps even a v2.6) will not be cut off immediately upon the release of v3.0.  I expect to respond to bugs filed against the v2.5 versions for several months.

And when you decide to make the upgrade to v3.0, I’m working to ensure the transition will be as smooth as possible.  Although v3.0 has an all-new “DotNetOpenAuth” namespace, the public API for OpenID in that namespace is very similar to the old one in the DotNetOpenId namespace, so it may be as easy as updating your “using” line in C# or “Imports” line in VB.NET.  Depending on the features you were using, you may have a little more work to do to get your code to compile.  Watch for a follow-up post describing the new object model and why I think you’ll like it much better than the old one, even though at first glance it looks so similar. 

If there is sufficient demand for it, I’ll add a set of shims in the assembly with the old namespace so you can use all your existing code with the new library and have it Just Work.  Let me know whether you feel this feature is worth the effort.  Leave a comment, or email dotnetopenid@googlegroups.com

Where can one download the new version?

The v3.0 release is slated for a March 2009 release.  But watch this blog or the dotnetopenid mailing list for an announcement about the Beta 1 release in the next few days.  Of course it’s all open source, so you could download the source code to it now, but I recommend you wait a few more days so you can see it in digital shrink wrap.

Saturday, January 17, 2009

Why using RPXNow is a bad idea

Janrain has been a great influence in the OpenID community and I thank them for all their efforts.  They are, nevertheless, a company that must generate profits, and their recent invention of RPXNow is one attempt at doing that.  It generates revenue for them: check.  But is it really in your web site’s best interest to use RPXNow instead of using OpenID directly?  I don’t think so.

I have not used RPXNow myself, for reasons I will describe.  So what I will discuss is from my reading the documentation about their service and seeing how it works from an end user’s point of view. 

Confusing to users

When users log in, their OpenID Provider will usually tell them that they’re logging into yoursite.RPXNow.com instead of www.yoursite.com.  Even if your visitors are willing to go ahead and log in in spite of this oddity, you are teaching your users to get phished by training them to disregard what ought to be a warning message.

RPXNow allows you to customize this “realm” URL that is displayed, but doing so requires more work on your part, and costs more per month for the service.  And RPXNow was all about making it easy, so doing the extra work to get the right realm displayed to your users is something of a step backward for their primary selling point, I’d say.

Less stability

RPXNow.com introduces an intermediary between your site and the OpenID Provider.  One of the criticisms of OpenID already is that your users won’t be able to log into your site if their Provider is down or cancels their account.  Adding RPXNow between your web site and the Provider adds yet another possible point of failure for your users.  It is worse, in fact, because users may have multiple Providers so they can still log in if one goes down; but if you use RPXNow and RPXNow goes down, they are helpless and you won’t see any logins at all. 

OpenID has also been criticized because logins can take a bit longer due to the 1-2 hops between your server and the OpenID Provider to complete authentication.  This may be a moot point, but it can only get worse when you add yet another third party in the authentication protocol.

Less security flexibility

Your site may need to make its own decisions about which Providers it is willing to accept OpenID logins from.  Or it may need to control the policies that should apply when dealing with those Providers.  Since RPXNow completes these steps for you, you lose out in your ability to customize the process to the same extent you could if you were using OpenID directly at your site.

Proprietary protocol

OpenID was designed exactly to allow your sites to accept logins from many other OpenID Providers.  Doing this right and securely is in fact a big job, but there are numerous free and open source libraries that have done this heavy-lifting for you and you just need to hook up to it. 

RPXNow’s sales pitch is that they handle the complexities of the OpenID protocol for you.  But to get that, you have to talk to them in their proprietary protocol.  They give you a library for that so that it’s “easy”, but what have you gained?  Without RPXNow, you need an OpenID library and to hook it up to your site.  With RPXNow, you need their RPXNow library and to hook it up to your site.  The frontends to either library that you will have to interface with turn out to be very similar.  So it’s not significantly easier to hook up RPXNow than it is OpenID itself (assuming you picked a decent OpenID library). 

There is one advantage to RPXNow abstracting OpenID away from you that I can think of, however: it becomes their job, not yours, to stay on top of patching their implementation of the OpenID protocol and keeping you secure.  For a big company, that is actually more of a risk than a benefit.  But for many smaller web sites this is an advantage, because otherwise they may get OpenID ‘working’, consider it done, and ignore all the security updates and new OpenID versions that may come out and put their users at risk.

Vendor lock-in

Janrain claims to make it easy for you to stop using their RPXNow service if your needs change.  But they do not volunteer the complications that may be involved in parting ways. 

Remember how the realm is often ‘yoursite.rpxnow.com’?  Much worse than simply confusing users, it can end up locking users out of their accounts.  Google, and others likely to follow, use a feature of OpenID that protects their users from collaborating relying parties, so that their usage across the Internet cannot be tied together.  But this is done using that same realm URL.  If your realm URL ever changes, all the users who log in with their Google OpenIDs will have to create new accounts at your site and will no longer be able to log into their old ones!  This would be very upsetting to your customers, and you would get emails/calls about it from every one of your loyal Google customers.

Let’s say you start with yoursite.rpxnow.com and decide to switch to the more costly option of sticking with RPXNow but using your own domain name for the realm, or just switching off of RPXNow altogether.  Either way, you’re screwing over your Google customers (and potentially other OpenID Providers that choose to follow Google’s example).  The only way to use RPXNow without this eventual end result is to pay the money to RPXNow to use just your own domain name at the start.  But again, that’s more work and more money.  And if you just accept OpenID yourself directly you can avoid the more money, and avoid the other issues we’ve discussed at the same time.
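To see why the realm matters so much here, consider a toy model of a directed-identity Provider. This is only an illustration of the principle (pairwise identifiers derived per realm); it is not Google's actual derivation, and the function name and URL are invented:

```python
import hashlib

def directed_identifier(user_account, realm):
    """Toy derivation: the identifier asserted to an RP depends on both
    the user and the RP's realm, so two realms see unlinkable identifiers
    for the same user."""
    digest = hashlib.sha256(("%s|%s" % (user_account, realm)).encode()).hexdigest()
    return "https://op.example/id?id=" + digest[:32]
```

The same user shows up under a different identifier at yoursite.rpxnow.com than at www.yoursite.com, so any account keyed to the old identifier is orphaned the moment the realm changes.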

Alternatives to the benefits they offer

RPXNow does make logging in easier for your users than the typical OpenID text box.  With some work on your end, you can achieve a similarly easy UI for your users on your own.  Or better, the OpenID libraries that you can use directly can have this functionality built-in so that it’s still just as easy, and yet you’re hosting it all yourself.

RPXNow also offers user login and account creation statistics.  But this too can be achieved relatively easily using other means like Google Analytics, which is a free service.


Since RPXNow introduces several problems, web developers should avoid it for now in favor of an OpenID library.  Janrain would do well to repackage RPXNow as a product that can be purchased instead of a service in order to avoid most/all of the issues I list above.

Thursday, January 08, 2009

Remotely enable RDP

Have you ever been away from your work PC, tried to Remote Desktop (RDP/mstsc) into it, only to realize that you’ve forgotten to enable RDP before you left work?  Ever shake your head at the irony that if you could only remote in, you could enable RDP?

Well now you can:

Method 1

The simplest way is to run a free tool:


Method 2

If you’d prefer to not run an unknown tool and give it admin access to your remote machine, you can do it by hand:

  1. Fire up regedit.exe on your local machine.
  2. File -> Connect Network Registry -> your remote machine name for which you have admin access.
  3. File -> Import… -> and import the following file:

    Windows Registry Editor Version 5.00

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Terminal Server]
    "fDenyTSConnections"=dword:00000000

  4. Reboot your remote machine:
    shutdown \\yourremotemachine /f /r /t 0
Method 3

If you can use WMI, the Win32_TerminalServiceSetting class in the root\cimv2\TerminalServices namespace will do it.  Its SetAllowTSConnections method enables Terminal Services connections; set both the AllowTSConnections and ModifyFirewallException parameters to 1.


I’m not sure how to use WMI myself.  If someone knows how to, please comment.
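For what it's worth, here is an untested guess at invoking that method from PowerShell (the parameter order and the PacketPrivacy authentication level are assumptions; REMOTEPC is a placeholder; corrections welcome):

```shell
# Untested sketch: call SetAllowTSConnections(1, 1) on the remote machine
# via WMI, i.e. AllowTSConnections=1 and ModifyFirewallException=1.
# Assumes admin rights on the remote machine.
powershell -Command "$ts = Get-WmiObject -Class Win32_TerminalServiceSetting -Namespace root\cimv2\TerminalServices -ComputerName REMOTEPC -Authentication PacketPrivacy; $ts.SetAllowTSConnections(1, 1)"
```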