Sunday, October 02, 2011

How to get meld working with git on Windows

Inspired by these instructions, I followed these steps:

  1. Install Python 2.6
  2. Install PyGTK All-in-one installer
  3. Install meld

Then you need to configure git to be able to find and invoke meld.  If you’re using the git bash shell, this can be done with these commands:

git config --global merge.tool meld
git config --global mergetool.meld.path /c/Users/andarno/Downloads/meld-1.5.2/bin/meld

Of course, you may want to set your PATH to include Python permanently rather than per shell session, since a session-level setting only lasts for your active window.  The two git config commands have a persistent effect already, however.
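For example, a per-session setting from the git bash shell might look like this (the Python install path below is an assumption; adjust it to wherever Python 2.6 landed on your machine):

```shell
# Prepend Python to the PATH for the current shell session only
export PATH=/c/Python26:$PATH
```

To make it stick, add the same line to your ~/.bashrc or set it in the Windows environment variables dialog.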

Finally, you’re now ready to use meld to resolve merge conflicts.  I haven’t found a way to get “git gui” to invoke the merge tool correctly, but this command from the git bash shell works great:

git mergetool 

Monday, August 29, 2011

Immutable collections with mutable performance

In my last post, I detailed the differences among read/write, read only, frozen and immutable collection types.  I described how immutable collections come with a hit to the garbage collector due to the garbage they generate during mutations.  I have a very positive update on that topic.

See the follow-up post on my Microsoft blog for the details.

Sunday, August 21, 2011

Read only, frozen, and immutable collections

[Update: a more recent post with new data on attainable performance of immutable collections]

The topics of immutability and functional programming have fascinated me lately, mostly because of my work on the Visual Studio Common Project System (CPS): a large, highly multi-threaded code base that remains sane only because of its reliance on immutable types in many areas.

In my research and conversations on this topic, I’ve learned to appreciate the differences between several adjectives that programmers often use interchangeably, and I’d like to share them with you, detailing the differences along with some pros and cons of each.  At the bottom of the post I include a summary table and my personal verdict on which kinds of collections (mutable and immutable) I prefer.

Although the principles here apply to any type in any language and platform, I’ll use the collection types in the .NET base class library for references and examples.

First a little terminology as I will use it in this post:

  1. Mutable (a.k.a. read/write): a collection or type that allows in-place updates which anyone with a reference to that object may observe.
  2. Immutable: a collection or type that cannot be changed in place at all; a “mutation” instead efficiently allocates a new collection that shares much of the same memory with the original, plus new memory describing the change.
  3. Freezable: a collection or type that is mutable until some point in time when it is frozen, after which it cannot be changed.
  4. Read only: a reference whose type does not permit mutation of the underlying data, which is usually mutable by another reference.

The de facto read only standards in .NET

There are not yet any freezable or immutable collections included in the .NET base class library.  But we have a couple of read-only views of data, as outlined below.

De facto standard #1: IEnumerable<T>

Often considered a convenient way to express data in a read-only way, IEnumerable<T> actually provides very few, if any, guarantees and almost no protections.

  • NO thread-safety to issuer or receiver. If the underlying collection changes while being enumerated from another thread, data corruption may result or exceptions may be thrown.
  • NO immutability guarantee to the receiver. The underlying collection may be changed.
  • NO immutability guarantee to the issuer. The receiver may cast the enumerable to ICollection or some other read/write type and may mutate the data through that interface if the underlying object allows it.
  • NO performance guarantee to the receiver. The enumerable may represent a deferred execution query (LINQ for example) that may incur significant cost including network overhead with each enumeration.

Avoiding these shortcomings usually requires enumerating exactly once into a cloned collection, which comes with a perf and memory hit, and doesn’t entirely protect you from thread-safety issues that may occur during that one enumeration.
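The deferred-execution hazard isn’t unique to .NET.  Here is a rough Python analogy of my own, with a generator standing in for a deferred IEnumerable<T>:

```python
calls = []

def expensive_query():
    # Stands in for a deferred-execution query: the work happens
    # only when (and each time) the sequence is actually enumerated.
    for i in range(3):
        calls.append(i)   # track how often the "query" really runs
        yield i * i

lazy = expensive_query()
snapshot = list(lazy)       # enumerate exactly once into a cloned collection
assert snapshot == [0, 1, 4]
assert list(lazy) == []     # a generator is also exhausted after one pass
assert len(calls) == 3      # the work ran only once thanks to the snapshot
```

The `list(lazy)` clone is exactly the "enumerate exactly once into a cloned collection" workaround described above, with its attendant perf and memory cost.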

De facto standard #2: ReadOnlyCollection<T>

Somewhat misleadingly named, this collection is neither immutable nor a general collection. It is merely a wrapper around a mutable list.  Perhaps a more accurate name would be ReadOnlyListFacade<T>. 

This collection type is better than IEnumerable<T> in several ways, but still deficient in others:

  • YES, immutability guarantee to the issuer.  The collection owner never releases a reference to the mutable collection, as the ReadOnlyCollection<T> is a genuine wrapper around the mutable data, making it impossible for a receiver of the collection to mutate the data.
  • YES, thread safety to the issuer.  Since the receiver cannot mutate the data, the underlying collection may be mutated without fear that the issuer will throw an exception or corrupt its own data.
  • YES, reasonable performance guarantee to the receiver.  The IList<T> that the ReadOnlyCollection<T> wraps is virtually never populated via deferred execution.  Therefore enumerating the ReadOnlyCollection<T> repeatedly can generally be done with high performance and without side effects.
  • NO thread-safety to the receiver.  If the receiver is reading the ReadOnlyCollection<T> while the issuer is mutating the underlying collection, the client may get an exception or perhaps even incorrectly perceive the contents of the collection.
  • NO immutability guarantee to the receiver. The underlying collection may be changed.
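The wrapper idea can be sketched in a few lines of Python (a hand-rolled facade of my own, since Python has no ReadOnlyCollection<T>):

```python
from collections.abc import Sequence

class ReadOnlyListFacade(Sequence):
    """A read-only view over a mutable list; exposes no mutating methods."""
    def __init__(self, backing):
        self._backing = backing
    def __getitem__(self, index):
        return self._backing[index]
    def __len__(self):
        return len(self._backing)

data = [1, 2]
view = ReadOnlyListFacade(data)
assert list(view) == [1, 2]
data.append(3)                      # the issuer mutates the underlying list...
assert list(view) == [1, 2, 3]      # ...and the receiver observes the change
assert not hasattr(view, "append")  # but the view itself cannot mutate
```

This captures both sides of the table above: the issuer keeps a free hand to mutate, while the receiver gets no mutation surface but also no immutability guarantee.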

Considerations among collections


Thread safety

Collections that are not thread-safe cannot be made thread-safe in a shareable way.  For example, although you can put lock statements around all uses of a collection to make your own access thread-safe, you cannot share references to your collection (or any part of it) outside your class, because outside holders of that reference will not synchronize access with the same lock object you do (and ICollection.SyncRoot seems increasingly unpopular, if it was ever widely known at all).  This suggests the importance of building thread-safety into the collection classes themselves.
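A sketch of that problem in Python (threading.Lock standing in for C#'s lock statement): the class synchronizes its own access, but once a reference to the raw collection escapes, outside code bypasses the lock entirely.

```python
import threading

class Inventory:
    def __init__(self):
        self._lock = threading.Lock()
        self._items = []

    def add(self, item):
        with self._lock:       # all internal access is synchronized...
            self._items.append(item)

    def items(self):
        return self._items     # ...but this leaks the raw mutable list

inv = Inventory()
inv.add("widget")
leaked = inv.items()
leaked.append("gadget")        # outside code mutates without taking the lock
assert inv.items() == ["widget", "gadget"]
```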

Freezable collections

There are no freezable collection types in .NET (yet, anyway), so a discussion of them is somewhat speculative.  If the traditional List<T> class were to pick up a Freeze() method and an IsFrozen property, there would still be no type-safe way of communicating that a method expects or returns a frozen collection; you would have to perform a runtime check to see what is allowed or provided.  Collection types that guarantee immutability are preferable for this purpose: if a method accepts an immutable collection parameter or returns an immutable collection, you have 100% confidence that that collection cannot ever change, period.  If a List<T>.Freeze() method returned a FrozenList<T> object, however, with both the original List<T> and the FrozenList<T> pointing to the same data structure that could no longer be changed, that would provide a type-safe way of requiring or guaranteeing immutable data.
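Since no such type exists, here is a purely hypothetical sketch in Python of what a freezable list might look like.  Note that the freeze check happens only at runtime, which is exactly the weakness described above:

```python
class FreezableList:
    """Mutable until freeze() is called; read-only forever after."""
    def __init__(self):
        self._items = []
        self.is_frozen = False

    def append(self, item):
        if self.is_frozen:
            # Only a runtime check; nothing in the type system stops this call.
            raise RuntimeError("collection is frozen")
        self._items.append(item)

    def freeze(self):
        self.is_frozen = True
        return self

    def __iter__(self):
        return iter(self._items)

fl = FreezableList()
fl.append(1)
fl.freeze()
try:
    fl.append(2)            # rejected, but only at runtime
except RuntimeError:
    pass
assert list(fl) == [1]
```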

While one might consider that guarantee more theoretically aesthetic than practically useful, I can speak from some experience now: I avoid a lot of collection cloning code (and the perf problems that showed up as a result) because I know that a collection I just received cannot possibly be changed. It’s gotten to the point that when I see a method accepting an IList<T> I ask myself “Why? Is that method expecting the caller to further mutate that list during or after its invocation, such that the method will need to see that mutation?” Call me a functional programming geek, but if a method does not expect the caller to tamper with a collection that it passes to the method, I now greatly prefer to see IImmutableList<T> as the method parameter.

Before freezing, freezable collections offer the high construction performance and compact memory utilization of traditional mutable collections.  After freezing, any further mutation requires shallow cloning of the entire collection, with the perf and memory hit that comes with it. 

One possible danger of using a collection that is freezable (but unfrozen) is that someone you share a reference to the collection with might want an immutable copy of the collection and may freeze the collection behind your back, not realizing that your class still “owns” the collection and thus expects it to not be frozen.


Performance

The performance of mutable collections, particularly when they are initially sized adequately for their contents, can’t be beat by immutable collections.  However the performance of immutable collections can be remarkably good.  Much better than one might expect, and certainly good enough for due consideration in your design.

Immutable collections’ primary shortcoming is that raw add performance tends to be 2-3X slower than the amortized cost of their mutable counterparts.  This is chiefly due to the mandatory “spine rewrite” of the binary tree and the memory allocations that go along with it.  I compared against the amortized cost of mutable collections because mutable collections have the shortcoming of occasionally outgrowing their pre-allocated memory, which usually means allocating twice as much and then copying all the data from the old location to the new.

Before you write off immutable collections due to their perf impact on initialization, keep in mind that this may rarely be where the performance problems in apps come from.  In my experience the perf problems never come from creating a new collection, but rather from cloning a collection within a lock to provide immutability and thread-safety guarantees, which completely vanish when you switch to genuinely immutable collections.  Also, the spine rewrite that leads to the slower fill performance can be avoided by using an immutable collection that optimizes around that scenario, as described in the discussion on garbage collection below.

Memory efficiency

When dealing with collections, there’s no tighter data structure than an array, which is the underlying data structure of choice for most flat collections.  It has almost no overhead beyond the data stored, and that’s hard to beat.  On the other hand, since arrays have fixed size, there is usually memory wasted in the slack between the last element in the collection and the last slot in the preallocated array.  And when you have more elements than you have slots in the array, the growth algorithm tends to be to allocate a new array of double the length and move all the data over, which means even more slack space wasting your memory.

Immutable collections can’t afford (in memory or performance) to use arrays because changing the collection would require cloning the array every time.  Instead, performant immutable collections can use binary trees that allow for incremental, non-mutating updates to the collection.  Without going deep into the theory of it, it allows (for example) an immutable collection of 500 elements to be mutated into a new collection with one element added or removed while sharing almost all the memory between the two collections.  But because binary tree data structures require a heap-allocated “node” to represent each element in the array, with references to two other nodes in the tree, there is an overhead of at least 12 bytes per element in immutable collections.
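The structural sharing can be illustrated with the simplest persistent structure, a singly linked (cons) list.  Real immutable collections use balanced binary trees, as described above, but the sharing principle is the same.  A Python sketch of my own:

```python
class Node:
    """One immutable node of a persistent singly linked list."""
    __slots__ = ("value", "rest")
    def __init__(self, value, rest=None):
        self.value = value
        self.rest = rest

def to_list(node):
    out = []
    while node is not None:
        out.append(node.value)
        node = node.rest
    return out

shared_tail = Node(2, Node(3))   # ... -> 2 -> 3
v1 = Node(1, shared_tail)        # version 1: 1 -> 2 -> 3
v2 = Node(0, shared_tail)        # version 2: 0 -> 2 -> 3
assert to_list(v1) == [1, 2, 3]
assert to_list(v2) == [0, 2, 3]
assert v1.rest is v2.rest        # both versions share the very same tail nodes
```

Each "mutation" allocated only one new node; everything else is shared, which is how a 500-element collection can be updated while reusing almost all of its memory.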

Garbage generation and collection

Garbage generation is also an important concern when considering the memory impact of immutable collections.  Every mutation of an immutable collection necessarily allocates more memory.  And while most memory is shared across the different versions of the collection, when a particular version of a collection is no longer referenced, it automatically gets garbage collected.  This is a convenient feature of immutable collections in that they optimally share and automatically free memory.  However, since each mutation of the collection makes the last version of the collection uninteresting and thus available for garbage collection, it generally means that more garbage is produced than for the mutable counterpart. 

What’s the problem with allocating objects if they are freed as soon as they are not used any more?  The problem comes at garbage collection time. The more memory you allocate and release, the more frequently the garbage collector has to run, which usually suspends all other threads in your .NET application, potentially causing perf problems during or sometime after all that memory had been allocated and freed. 

This garbage collection pain is usually most poignantly felt when initializing very large immutable collections.  Without appropriate optimizations for this scenario, a large immutable collection will produce many times as much garbage during construction as its own final size in memory at completion.  So if you built a collection that requires 1MB of RAM just for the collection itself, you might have just blown through 4MB of RAM during its construction (freeing 3MB immediately after).  A well-written immutable collection class could optimize for this initialization case by allowing itself to reuse (i.e. mutate) tree nodes during initialization only, when such reuse could never be observed from the outside as a “mutation”.  That would solve the garbage generation problem for immutable collections, unless your collections tend to mutate very rapidly even after initial construction.
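That initialization optimization amounts to a builder pattern: mutate freely while no one else can observe, then publish a frozen result.  A simplified Python sketch (a tuple standing in for the tree-backed collection):

```python
class ImmutableListBuilder:
    """Accumulates into a private mutable list, then publishes a tuple.

    Stands in for the node-reuse trick described above: mutation is allowed
    only while nothing outside can observe it, so building N elements does
    not produce N intermediate garbage versions of the collection.
    """
    def __init__(self):
        self._pending = []

    def add(self, item):
        self._pending.append(item)     # cheap in-place growth, no garbage versions
        return self

    def to_immutable(self):
        result = tuple(self._pending)  # publish a frozen snapshot
        self._pending = list(result)   # builder may keep mutating independently
        return result

b = ImmutableListBuilder()
for i in range(5):
    b.add(i)
frozen = b.to_immutable()
assert frozen == (0, 1, 2, 3, 4)
b.add(5)
assert frozen == (0, 1, 2, 3, 4)  # the published snapshot is unaffected
```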


  Property                        | Read only        | Frozen           | Immutable
  Thread-safe                     | No               | Yes              | Yes
  Type-safe declaration           | Yes              | TBD              | Yes
  Immutability guarantee          | No               | Yes              | Yes
  Allows mutation                 | Yes, shared      | No               | Yes, isolated
  Isolated mutation performance   | Poor             | Poor             | Great
  Shared mutation performance     | Great, except when the underlying data structure is outgrown and must be cloned | N/A (no mutations allowed) | N/A (all mutations are isolated)
  Storage data structure          | Array            | Array            | Binary tree
  Memory across isolated versions | Full duplication | Full duplication | Efficient sharing
  Garbage generation              | Low              | Low              | Potentially high [update]

If you enjoyed this post, you may enjoy Eric Lippert’s blog posts tagged Immutability, including a good post on different kinds of immutability from back in 2007.

And please let me know if you found this post useful or interesting.

Saturday, July 16, 2011

C# await for MSBuild

The async CTP that adds the C# await keyword doesn’t include an awaitable MSBuild.  It’s easy to add yourself.  Just copy and paste the BuildSubmissionAwaitExtensions class from the code below somewhere in your project and you’ll be able to await on MSBuild.  Also included below is a sample of what it might look like.

Friday, July 15, 2011

C# await for WaitHandle

The async CTP that adds the C# await keyword doesn’t include an awaitable WaitHandle.  It’s easy to add yourself.  Just copy and paste the following code somewhere in your project and you’ll have an awaitable WaitHandle.

Warning: don't use this on AutoResetEvents, or the behavior may not be what you expect.

Monday, June 13, 2011

What is 2-legged OAuth?

Although there is an official spec for OAuth 1.0, the spec only outlines what the community refers to as "3-legged OAuth".  An alternative form of OAuth is loosely referred to as "2-legged OAuth", and there are far too many variants of this and not a single finalized spec to conform to.  As a result, there are various ways and forms to achieve what people, correctly or incorrectly, refer to as 2-legged OAuth.  In this post, I will attempt to clarify what (at least in my mind) 2-legged OAuth really means.

I will not delve into the gritty details of the spec, but I will outline the flows and explain a bit.

3-legged OAuth

First, let's start with the spec's description of the OAuth flow (3-legged).  Here is an illustration:

You can clearly see the 3 "legs" of this flow:

  1. The consumer (the application that wants to access data) creates an OAuth session by obtaining an unauthorized request token from the Service Provider (the host of the data).
  2. The consumer directs the user to the service provider, with the unauthorized request token in hand, so the service provider can ask the user to release the protected data to the consumer.  The user logs in as necessary, and grants the requested permission.  The service provider redirects the user back to the consumer.
  3. The same request token the consumer held before is now authorized.  The consumer submits it to the service provider in a direct web request to exchange it for an access token.  
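The three legs above can be sketched as function calls.  The helper names and URLs here are placeholders of my own, since each service provider documents its own endpoints:

```python
# Placeholder sketch of the three OAuth legs. `signed_post` stands in for an
# OAuth-signing HTTP helper and the URLs are hypothetical.

def three_legged_flow(signed_post, redirect_user):
    # Leg 1: obtain an unauthorized request token from the service provider.
    request_token = signed_post("https://provider.example/oauth/request_token")

    # Leg 2: send the user to authorize the token; the service provider
    # asks the user for permission, then redirects back to the consumer.
    redirect_user("https://provider.example/oauth/authorize",
                  token=request_token)

    # Leg 3: exchange the now-authorized request token for an access token.
    access_token = signed_post("https://provider.example/oauth/access_token",
                               token=request_token)
    return access_token
```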

The access token may now be used by the consumer to submit requests to the service provider that will access the user's private data.

By the way: notably absent from this 3-legged flow is the consumer obtaining its own "consumer key" and "consumer secret".  The process of obtaining these is explicitly outside the scope of the OAuth spec altogether, and is therefore defined by each service provider.  This process is only completed once per application, usually by the application developer him/herself, and is not considered one of the legs of OAuth.

2-legged OAuth

In 2-legged OAuth, the consumer tends to be installed on the user's machine, or is perhaps a widget embedded in a web page.  The key scenario difference as you step into 2-legged OAuth is that the consumer is not requesting access to any user data.  Instead, it is merely establishing an account with the service provider with no previous data in it at all, which it can subsequently use to store and later retrieve data.

However, since this particular installation or widget has its own "space" on the service provider to store data, the consumer can obtain user data directly from the user him/herself, and send that to the service provider, and later retrieve it. So we see that although 2-legged OAuth doesn't start with any user data, it may end up with user data that the consumer itself puts there.

Because no pre-existing user data is ever shared with the consumer, there is no need for the service provider to obtain authorization from the user.  Therefore the second of the three legs can be entirely skipped.  The modified flow is illustrated here:

Note that the "6.2" section is absent, and the User role has absolutely no interaction, leaving only two legs.  Section 6.1 is also slightly different: instead of the service provider issuing the consumer an unauthorized request token, the request token is pre-authorized.  The consumer can then immediately exchange it for an access token and discard the request token.

At this point you may be saying to yourself: "That's silly -- why doesn't the service provider just issue the access token instead of a request token, and save another leg?"  Well, it could, but then that wouldn't be OAuth any more.  You could make the case that the above 2-legged flow is still OAuth because all the existing APIs and flows work as spec'd... the skipped leg doesn't alter the remaining legs, so it requires little or no change in a standard OAuth implementation to support 2-legged as well.

What's not 2-legged OAuth

I've recently seen a list of links that all claim to describe "2 legged OAuth", but are nothing like what I just described.  Instead, what they describe is what I would refer to as 0 legged OAuth.  That's right: none of the 3 legs are preserved in the flow they describe.  Here is what they describe, in a nutshell:

  1. The consumer's developer obtains a consumer key and secret.
  2. (skips the entire OAuth 3-legged flow here)
  3. Consumer accesses protected resources by submitting OAuth signed requests for resources using its consumer key, an empty access token, and signs the request with the consumer secret and an empty access token secret.
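For illustration, here is a rough Python sketch of my own of signing such a request with HMAC-SHA1 per the OAuth 1.0 spec, using an empty access token and token secret.  I've omitted the nonce, timestamp, and other oauth_* parameters a real request would also carry:

```python
import base64
import hashlib
import hmac
import urllib.parse

def percent_encode(s):
    # RFC 3986 encoding, as OAuth 1.0 requires
    return urllib.parse.quote(s, safe="")

def sign_request(method, url, params, consumer_secret, token_secret=""):
    """HMAC-SHA1 OAuth 1.0 signature. An empty token_secret gives the
    "0-legged" signing key: "<consumer_secret>&"."""
    normalized = "&".join(
        "%s=%s" % (percent_encode(k), percent_encode(v))
        for k, v in sorted(params.items()))
    base_string = "&".join([method.upper(),
                            percent_encode(url),
                            percent_encode(normalized)])
    key = percent_encode(consumer_secret) + "&" + percent_encode(token_secret)
    digest = hmac.new(key.encode(), base_string.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

sig = sign_request("GET", "https://provider.example/resource",
                   {"oauth_consumer_key": "key", "oauth_token": ""},
                   consumer_secret="secret")  # note: empty access token
assert len(sig) == 28  # base64 of a 20-byte SHA-1 digest
```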

Remember from the 3-legged discussion above that #1 in this list is not considered one of the three legs: it's a one-time step taken by the app developer.  Nor is #3 in this list considered one of the 3 legs, as it is simply the actual request for the protected resource, repeated with each request.

Also consider that whereas genuine 2-legged OAuth can end up with the consumer storing user data in a private consumer-user data store at the service provider, this 0-legged OAuth cannot end up with user data in its service provider account, because all instances of the consumer share exactly the same account, and that would mix all the users' data together.

This 0-legged OAuth should look remarkably similar to username/password authentication, and in flow that's exactly what it is, except that the consumer app rather than the user owns the username/password.  Why would someone go to the trouble of using/inventing 0-legged OAuth instead of just using username/password then?  Two reasons: 1) it may be easier to leverage existing OAuth-supporting code than to add support for username/password, and 2) OAuth provides signatures that mitigate the replay attacks to which HTTP basic authentication can be vulnerable.

What else is not 2-legged OAuth

Twitter has an interesting special case of OAuth as well, which also entirely skips the 3-legged flow, yet it is still per user.  The user navigates his/her browser to Twitter and obtains an access token directly, with no preliminary request token step.  The user copies and pastes (manually!) this access token into the consumer app, which then simply accesses the user's private data without ever going through any of the 3-legged flow.  This also has been (incorrectly) referred to by some as "2 legged OAuth", but I hope this discussion can help you see this is a misconception.


Let's summarize terms and use cases then:

  • 3-legged OAuth: includes user authorization through a web browser for the consumer to access the end user's previously established private user data with a service provider.
  • 2-legged OAuth: skips user authorization, but still has non-empty request and access tokens.  Consumers gain access to an empty account that they may then fill with user data the consumer obtains directly from the user.
  • 0-legged OAuth: skips all three legs by having consumers simply access their own protected resources at the service provider using OAuth signatures based on a private consumer key and secret.  Analogous to username/password for the consumer app.
  • Twitter access token option: also skips all three legs, but still provides access to a user's pre-existing data via a non-empty, manually obtained access token.

Please comment with your thoughts, and indicate whether you agree or disagree.  With so many application-specific half-specs out there claiming to describe "2-legged OAuth", I expect some comments to this post telling me I'm wrong.  That's fine.  I'd be interested in hearing you make a logical case for how you reduce OAuth's 3 legs to 2.  And to reiterate: I'm not invalidating the use cases these 0-legged flows address; I'm merely suggesting we call it like it is: "0-legged OAuth".