Thursday, October 14, 2010

CliSecure .NET Obfuscator Product Review

Foreword

.NET assemblies are (in general) remarkably easy to decompile and obtain reasonably intelligible source code.  A .NET obfuscator can be run as a post-build step to make decompiling the assembly much less useful to thieves who are trying to steal your intellectual property.  There are many .NET obfuscators out there. 

In this post, I review the SecureTeam CliSecure tool, version 5.2.0.7. 

Introducing the test case

My “test case” DLL is not a simple “Hello, World!” app.  It’s DotNetOpenAuth.dll, which is just over 1MB in size and pushes the limits of the .NET Framework enough that a few bugs in the CLR, compiler and build tools were discovered while this assembly was being written.  It defines its own .NET configuration sections, compiles with both Code Contracts’ and ILMerge’s IL rewriters, and is strong-name (delay) signed.  It also very carefully uses log4net.dll when present, but doesn’t whine when log4net is not present (defying most people’s assumptions about .NET’s runtime dependency rules; a sketch of the general pattern appears at the end of this section).  This DLL is enough to make most obfuscators fail in one way or another.

That said, the disclaimer is that this is just one test case, and is not intended to warrant or otherwise guarantee results on your own assemblies.  Your mileage may vary. 
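As an aside, since the log4net behavior mentioned above surprises a lot of people, here is a minimal sketch of the general soft-dependency pattern.  This is not DotNetOpenAuth’s actual source; all the type names are invented for illustration.

// A sketch of a soft dependency on log4net (invented names, not the library's
// actual code).  The log4net types only get resolved when the wrapper method
// that touches them is JIT-compiled, so if log4net.dll is missing the failure
// surfaces as a catchable exception and a null logger is used instead.
using System;
using System.Runtime.CompilerServices;

public interface ILog
{
    void Info(string message);
}

internal class NullLog : ILog
{
    public void Info(string message) { }  // silently swallow log messages
}

internal class Log4NetLog : ILog
{
    private readonly log4net.ILog log = log4net.LogManager.GetLogger("Demo");
    public void Info(string message) { this.log.Info(message); }
}

public static class Logger
{
    public static readonly ILog Instance = Create();

    private static ILog Create()
    {
        try
        {
            return CreateLog4NetLog();
        }
        catch (Exception)  // typically FileNotFoundException when log4net.dll is absent
        {
            return new NullLog();
        }
    }

    // Keep the log4net reference out of Create() itself so that resolving
    // log4net.dll is deferred until this method is actually JIT-compiled.
    [MethodImpl(MethodImplOptions.NoInlining)]
    private static ILog CreateLog4NetLog()
    {
        return new Log4NetLog();
    }
}

Callers just use Logger.Instance and never touch log4net types directly, so the assembly runs fine whether or not log4net.dll is deployed alongside it.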

Usability

Since during the course of this review I was a relatively new user of this software, and usability most impacts new users, I’ll start with usability.  If you care more about functionality than usability, just skip this section.  (Does anyone buy an obfuscator because it’s user-friendly?)

I find CliSecure easier than most obfuscators that I’ve tried, but it still has its oddities.  Launching CliSecure opens a non-intimidating GUI application with few enough controls that it looks like something a newbie can get a handle on, and it sports the new Ribbon UI that’s been so hot lately. 

[Screenshot: the CliSecure main window with its Ribbon UI]

But the first two buttons on the toolbar, New and Open, don’t apply to a first-timer.  The app opens with a “New” project, so the New button is a no-op, and “Open” opens CliSecure projects, not DLLs to obfuscate.  The next buttons, Save and Options, didn’t seem to apply either.  Finally, a small button called “Add” on the Ribbon suggested that maybe it was the one I was looking for.  Bingo!

There’s a field at the bottom that displays a derived sub-path where the obfuscated DLL goes.  Cool.  But when I changed the DLL I was obfuscating, the sub-path didn’t change with it automatically. 

The Options button led to a dialog box with a grid and an explanatory paragraph for each configurable setting.  Most of the explanations were adequate, although the one most interesting to me (Control Flow Obfuscation: Basic or Advanced) didn’t explain why you’d pick one over the other.  The help documentation did cover this, including the pros and cons of each option.

I found the Help documentation quite decent, complete with screenshots and explanations of the high-level concepts needed to make informed obfuscation decisions. 

Noteworthy features

  1. Support for strong-name and delay signing.

Whirlwind obfuscation run

CliSecure has a whole array of powerful obfuscation options.  But beware: they’re all off by default.  If you obfuscate with the default settings, the selected assemblies are not obfuscated at all, yet CliSecure still reports success. 

After setting some reasonable obfuscation options, clicking “Build” (the button isn’t called “Obfuscate”) resulted in a few seconds of obfuscation with an accurate progress bar.  When it was done, there was no output window, no warnings or errors; the “building” window just disappeared.  Fortunately, near the bottom of the CliSecure window is a field that says where the “secured” assembly will be saved, and it’s conveniently a read-only text field so I could copy the path into Windows Explorer to open that folder (a button to open the folder directly would have been a nice bonus, though).

While the CliSecure UI was up after obfuscating my DLL, I was pleased to see it didn’t have any open file handles to the input or output DLLs, so I could run builds in another process without having to close CliSecure first.

CliSecure obfuscation settings

There is a plethora of options offered in the GUI app (all off by default):

  1. Input.  You can obfuscate several assemblies at once.  Besides batching for convenience, this should allow you to obfuscate internal members without breaking other assemblies that have been granted InternalsVisibleTo access to call those internal members (see the sketch just after this list).
  2. Code Protection. This is probably the most distinctive obfuscation feature in CliSecure, and truly worth your careful consideration.  It actually encrypts your IL and makes it completely invisible to .NET Reflector, only decrypting the IL at execution time, one method at a time.  This decryption requires that your app run with Full Trust, which means you can’t use it on assemblies you send up to a shared-host web server, among other things; but it should be fine for an assembly that you ship to a customer.  It comes with a performance hit to the app, however.
  3. Renaming. This performs the most basic obfuscation function: renaming types and members.  CliSecure will even rename your “public” API and fix up any assemblies that call into that assembly.  A great way to protect your own “internal” class libraries used by your multi-assembly application.  It helps you get away from InternalsVisibleTo, which some argue encourages bad design.
  4. Control Flow. Changes the idioms that compilers use for looping constructs, breaking a decompiler’s ability to reconstruct for and while loops.
  5. Method Call Obfuscation. This feature protects the public APIs that your assembly calls in other assemblies.  Depending on what you’re trying to obfuscate, this feature can be very important.  Without it, every call to a method in .NET’s base class library will be apparent when people decompile your code, making it easier to figure out what you’re doing.  CliSecure will replace these discoverable calls with dynamically generated delegates so a static analyzer or decompiler cannot discover them.  This comes with a performance hit to the app, however.
  6. Merging. CliSecure comes with ILMerge-like behavior built-in, allowing you to conceal even more code by merging your helper library .dll with your application .exe and thus removing all the public surface area your library would have otherwise left exposed. 
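To make the InternalsVisibleTo point in item 1 concrete, here is roughly the arrangement it refers to; the assembly names below are invented for illustration.

// AssemblyInfo.cs of a hypothetical MyApp.Core.dll (names invented).
// This grants MyApp.Web.dll access to MyApp.Core's internal members.  If an
// obfuscator renames those internals in MyApp.Core alone, MyApp.Web breaks;
// feeding both assemblies to the obfuscator in one batch lets it rename the
// members consistently on both sides of the call.
using System.Runtime.CompilerServices;

[assembly: InternalsVisibleTo("MyApp.Web")]

// For strong-named assemblies the friend must be identified by its full public
// key, e.g. [assembly: InternalsVisibleTo("MyApp.Web, PublicKey=0024000004...")]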

For my next test, I turned on everything except Code Protection and Method Call Obfuscation, so I could still use .NET Reflector to see what obfuscation had taken place (and because DotNetOpenAuth.dll runs on shared-hosting web servers, where requiring Full Trust isn’t an option).  I also didn’t use the Merging feature.  Although DotNetOpenAuth uses ILMerge and could therefore possibly benefit from this feature, not every SKU of DotNetOpenAuth would be obfuscated, so making every SKU take a dependency on running CliSecure doesn’t make sense.

Obfuscation quality

With so many protections turned on, obfuscation took a few seconds to complete this time.  In .NET Reflector I saw just the public namespaces, and within those just the public types and members.  All the obfuscated stuff was under the default (empty) namespace in a flat structure.  Not only is this good for obfuscation, it also keeps the obfuscated members from cluttering up the API you do want to be discoverable.  Extra points for clean discoverability of the public surface.

[Screenshot: .NET Reflector showing only the public namespaces, types and members]

The internal types were mostly obfuscated, with the necessary exception of the members that implemented public interfaces, and some internal types that are used to implement .NET .config file sections.  I think these could have been obfuscated and were not, but configuration section types aren’t typically where people store their sensitive intellectual property anyway.  Some of the types and members had such weird names that .NET Reflector crashed a few times as I was looking through the assembly.  That’s just a bonus (since it discourages those snoopers).

Some methods were so obfuscated that .NET Reflector gave up on their implementations:

[Screenshot: a method whose implementation .NET Reflector refused to decompile]

Others were decompiled, but quite uselessly so:

[Screenshot: a decompiled method body that is effectively unreadable]

While most of this is unintelligible, I did manage to read that an HtmlLink object is instantiated here, which is puzzling since I had turned on Method Call Obfuscation and thought this kind of call would be hidden.  I also found some internal strings that didn’t appear to have the obfuscated/encrypted values I expected, since I had turned on “String Obfuscation”.  That said, .NET Reflector was unable to analyze where these strings were used, so that’s a half-win.

The bizarrely named methods were no more intelligible when using .NET Reflector’s click-through to see the method implementation: the method “implementation” is merely a static field that holds a delegate, initialized somewhere else that isn’t evident. 
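To give a feel for what that looks like, here is a rough approximation of the shape of the decompiled code; every name below is invented, and the real machine-generated output is far messier.

// An approximation of the pattern described above (all names invented).
// The call site just invokes a static delegate field; nothing at the call
// site reveals what the delegate actually points to or where it is assigned.
internal static class a
{
    // Populated by initialization code that lives somewhere else entirely.
    internal static System.Func<string, object> b;
}

internal class c
{
    private object d(string e)
    {
        return a.b(e);  // the entire "implementation"
    }
}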

I’d give the obfuscator effectiveness a rating of 9.5/10.  Yes, there were a couple of words I could read here and there, plus the unobfuscated configuration section types, but it seems impossible to reverse engineer, and it obfuscated the names of properties, which other obfuscators say cannot be done.  And remember, I haven’t even turned on Code Protection, which presumably would give it, what, a score of 15/10? :)

Verifiable Code

This is the tripwire for almost every obfuscator.  If “peverify.exe” reports no errors on the original assembly, an obfuscator must produce a verifiable obfuscated assembly.  Also, if the obfuscator injects code into the assembly that generates and executes IL at runtime, that code must be verifiable too (although peverify.exe unfortunately cannot validate that).

CliSecure passed this test by generating verifiable code when Renaming, Control Flow and String Obfuscation were turned on.  But it generated 32 peverify errors when Method Call Obfuscation was turned on, and 2 more when Code Protection was added.  Since these two protections also impact performance and require the protected app to run with Full Trust, leaving them off makes sense for me anyway.  But for apps that would like to use these protection mechanisms, the peverify errors could mean verification/test failures at best and runtime failures at worst.
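If you want to wire this verification check into your own automated build, a minimal sketch follows.  It assumes peverify.exe is on the PATH (its location varies by SDK installation), and the exit-code convention noted in the comment is an assumption, so adjust for your environment.

// Runs peverify.exe against an assembly and surfaces its output and exit code.
using System;
using System.Diagnostics;

class VerifyAssembly
{
    static int Main(string[] args)
    {
        string assemblyPath = args.Length > 0 ? args[0] : "DotNetOpenAuth.dll";
        var startInfo = new ProcessStartInfo("peverify.exe", "\"" + assemblyPath + "\"")
        {
            UseShellExecute = false,
            RedirectStandardOutput = true,
        };
        using (Process process = Process.Start(startInfo))
        {
            Console.WriteLine(process.StandardOutput.ReadToEnd());
            process.WaitForExit();
            return process.ExitCode;  // assumed non-zero when verification errors are reported
        }
    }
}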

Does it run?

So although CliSecure passed the peverify.exe test, would the assembly actually run when dropped in place of the non-obfuscated DLL?  I had to find out.

I built the entire solution of DotNetOpenAuth and all its samples.  Since CliSecure didn’t rename public types and members, a post-build, in-place obfuscation should theoretically keep the sample sites working.  Some of these sites are configured for only ASP.NET Medium Trust, so this would also help validate that the obfuscated assembly continues to run under that permission level.

Verdict: the obfuscated assembly appears to work.  I qualify with “appears” because it’s a very complex assembly with many code paths and I haven’t finished testing all of them.  But the core scenario works.

Debugging support

Any time you obfuscate an assembly you make debugging more difficult.  End users may report callstacks that are totally useless both to them and to you!  A good obfuscator will provide a way for you, as the code owner, to de-obfuscate stack traces, or even to interactively debug your assembly with an experience reasonably similar to that of an unobfuscated assembly (provided you have the source code).

Amazingly enough, the obfuscated DLL was still debuggable using the original .pdb symbols file.  The callstacks were obfuscated, but stepping through the code still worked when the source code was available.  Not all the breakpoints seemed to fire, which may be a VS2010 bug or a bug in the obfuscation; I don’t know.  But stepping over and stepping into worked great once the debugger had stopped. 

The callstack de-obfuscator failed, in that the callstack it generated was still illegible.  That’s a real bummer when customer reports come in with a callstack that you then need to decipher.

Command line obfuscation support

CliSecure comes with a command-line version of the tool that either takes all its parameters on the command line or takes a single convenient parameter: the CliSecure project file that contains all the details.  While there is no MSBuild task included, the command-line syntax seems simple enough that an MSBuild “Exec” task could easily kick off the process as part of an automated build.

Summary

This is by far the most feature-rich .NET obfuscation tool I’ve seen.  Its obfuscation protections are superb, even with the couple of semi-faulty options turned off.  Debugging support is pretty good, although de-obfuscating callstacks needs some work.  During this review the SecureTeam folks behind the product were very responsive and rapidly fixed many of the bugs I reported, so I fully expect a new version coming soon will fix the remaining issues.

Friday, July 02, 2010

Review on the Dell Studio 15 laptop

So I bought a new laptop from Dell two weeks ago.  Here are the highlights:

The good

  1. Very speedy (of course I paid for that).
  2. Sleep/wake finally works well.  Waking is very fast, and actually reliable!
  3. The fold-out legs on the back of the laptop are conveniently placed, and the way heat is distributed on the underside also lends itself to holding the laptop on your lap without getting hot spots on your legs.
  4. The face recognition auto-login feature is cool, but I suspect a photo of me would log me in, which makes it more of a toy than a tool.
  5. I like the backlit keyboard.  And the touchpad has two-finger zoom (which can be disabled) and a scroll circle feature which is pretty cool (although I haven’t used it).
  6. It’s remarkably lightweight, especially considering how souped up the hardware is.

The Bad

  1. When left on for long periods, the mouse touch pad grows to be burning hot, making it unusable.
  2. It didn’t come with a TPM chip.  These things were invented and shipping in laptops years ago.  What’s up with that?
  3. No room for a smart card slot; that, combined with no TPM chip, means I have to use a flimsy USB card reader to remote into work. 
  4. It doesn’t come with a Windows DVD, or software to burn a recovery DVD based on the backup partition on the hard drive.
  5. No hardware lights for disk access, caps lock, or anything else.  The lack of classic feedback for whether the computer is busy is a little unnerving.
  6. When the caps lock key is pressed, a small display appears that permanently steals focus from the active window (at least when you’re in Remote Desktop), so you’re typing away, press caps lock, and suddenly your typing isn’t going anywhere.  Whoops.
  7. The face recognition auto-login feature (and webcam) remains active while the screensaver is on.  This keeps the CPU hotter than it needs to be, and it means that if you approach your laptop (perhaps to see your loved ones in your photo screensaver), the screen saver exits and you get logged in.  The real problem, though, is that if the screen saver had reduced your screen resolution and the face recognition auto-login exits the screen saver to log you in automatically, the resolution isn’t restored to normal and the desktop is whacked.

The Ugly

  1. The WiFi is very unreliable.  It frequently drops the connection entirely and can’t find any hotspots.  I have to disable/re-enable the Wireless Connection to get back online.
  2. Bluetooth is even more unreliable.  Pairing with my Bluetooth mouse was an exercise in patience.  And it keeps losing the mouse, requiring a restart.  In the meantime, most Bluetooth dialogs/windows hang, making troubleshooting the problem virtually impossible.
  3. YouTube HD videos hang for minutes with a black screen mid-movie, and the video driver can otherwise randomly crash the machine for no apparent reason at all.

Verdict

I really like this laptop, but for me the Ugly bits are deal-breakers.  I rely on the Internet for most of my work, and my Bluetooth mouse is much more usable than a touch pad.  When neither works well or reliably, the laptop is of very limited use.  Tech support suggested I upgrade the drivers on my brand-new laptop to resolve the problems (which makes little sense, given they should have shipped it with the latest drivers), and that didn’t help anyway.

Tuesday, April 13, 2010

DotNetOpenAuth v3.4.3 released

DotNetOpenAuth has just seen a minor release to v3.4.3.  Fixes center around corner case interoperability issues that cause a very small percentage (<0.5%) of OpenID users to be unable to log into your relying party web sites.  A few other random fixes as well. 

Go download it now.

The OpenID “dot bug”

The most noteworthy fix was a very difficult one to pull off: the bug where OpenIDs with trailing dots were unsupported.  Back in the 1990s, classic ASP had the infamous “dot bug” where a trailing dot appended to a URL path would reveal the source code of the server-side script, a fatal security hole that was (of course) patched.  I think this might have inspired the .NET Framework’s Uri class design to automatically remove trailing dots from each path segment in a Uri instance.  Since FAT and NTFS file systems don’t support trailing dots in filenames, this stripping causes no issues when the web is served from Windows file systems. 

But when these URLs are actually OpenIDs, and those OpenIDs contain base64-encoded path segments where one of the two assignable characters is a period (à la Yahoo’s pseudonymous OpenIDs), then approximately 1.5% of those base64-encoded OpenIDs have trailing periods.  So what’s the problem?  When an OpenID positive assertion comes into a .NET-based OpenID relying party web site with a claimed_id that ends with a period, .NET will quietly strip the period from the claimed_id, causing the login to fail or (arguably worse) to succeed but with OpenID discovery misdirected to the wrong URL (one where the trailing dot is stripped). 
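You can see the stripping for yourself with a tiny console program; the identifier below is made up, and the behavior shown is that of the .NET Framework versions current as of this writing.

using System;

class UriDotDemo
{
    static void Main()
    {
        // The .NET Framework's Uri class trims trailing dots from each path
        // segment, so the claimed_id below no longer round-trips intact.
        var claimedId = new Uri("http://example.com/id/abc123.");
        Console.WriteLine(claimedId.AbsoluteUri);
        // Prints http://example.com/id/abc123 -- the trailing dot is gone.
    }
}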

The .NET Framework provides no (supported) way to turn off this dot-stripping behavior.  If your relying party web site is running with Full Trust you can set some internal flags using reflection to suppress the behavior, but doing so has some nasty side effects.  If you’re on Medium Trust, you’re sunk.

But I’m pleased to say that DotNetOpenAuth has a solution, working under both Medium and Full Trust, that is as good as the .NET Framework will allow until a fix is made in the platform itself.  I won’t bore you with all the gory details in this post, but suffice it to say that if you just download and use the new version, OpenIDs with trailing dots will work.  Phew.

Tuesday, March 09, 2010

How to upgrade your Blogger OpenID to a decent one

If you host your blog on Google’s Blogger service you may have discovered that your blog is an OpenID you can use to log into various web sites that act as OpenID relying parties.  But Blogger’s support for OpenID is limited to OpenID 1.1, which is very old and not supported by many relying parties nowadays.

You can upgrade your Blogger hosted OpenID to the new OpenID 2.0 version and log into many more web sites, all while still using your Google account to log in, thanks to Google Profiles.

Here’s how to upgrade your Blogger OpenID to OpenID 2.0:

  1. Create a Google Profiles profile if you haven’t already done so.
  2. Visit http://www.blogger.com/, logging in if necessary.
  3. On the blog you use for an OpenID click Layout.
  4. Click Edit HTML.
  5. In the Edit Template area, add the following HTML within the <HEAD> tag of your template:
    <link rel='openid2.provider' href='https://www.google.com/accounts/o8/ud?source=profiles' /> 
    <link rel='openid2.local_id' href='http://www.google.com/profiles/YOURGOOGLEPROFILE' />
    <link rel='openid.server' href='http://www.blogger.com/openid-server.g' />
  6. Click Save Template.

Once you add OpenID endpoints to your blog, Blogger automatically deactivates its own OpenID 1.1 support.  Since Google Profiles only supports OpenID 2.0 RPs, the above instructions also re-assert Blogger as the OpenID 1.1 Provider so that 1.1 RPs still work.  So we now have the best of both worlds.

Thanks to Breno de Medeiros of Google for the tip on how to keep OpenID 1.1 RPs working.

Saturday, January 16, 2010

DotNetOpenAuth’s “call home” reporting

A few months ago I asked how people would feel if DotNetOpenAuth collected feature statistics and sent them back to the library's authors so we could get a better feel for which features are used and which errors are common.  The feedback I got was positive, so v3.4 has reporting turned on by default, but you can opt out entirely, or omit certain details from the report, by adding some simple tags to your web.config file. 

This turns it completely off:

<dotNetOpenAuth>
  <reporting enabled="false" />
</dotNetOpenAuth>

This makes the reporting pseudonymous: no URLs from your own web site are sent back in the report, but a random GUID is still included so that we can tell that multiple reports come from the same origin:

<dotNetOpenAuth>
  <reporting includeLocalRequestUris="false" />
</dotNetOpenAuth>

How is the report sent?

There’s an adjustable frequency that defaults to once daily.  When certain features in DotNetOpenAuth are accessed, a quick time check is made, and if it’s time to send a report, a thread pool thread is queued to generate the report and HTTP POST it to https://reports.dotnetopenauth.net/.  We use a thread pool thread to minimize the performance impact this reporting has on your web site.  In fact, the entire reporting feature is finely tuned to have virtually no impact on your site’s performance.
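For the curious, here is a conceptual sketch of the queue-and-POST pattern described above.  This is not DotNetOpenAuth’s actual implementation, just an illustration of handing a best-effort upload to the thread pool so it never blocks the request that triggered it.

// A conceptual sketch only (not the library's real code): queue the upload to
// the thread pool and swallow any failure so the host site is never disturbed.
using System;
using System.IO;
using System.Net;
using System.Text;
using System.Threading;

static class ReportSender
{
    public static void QueueReport(string reportText)
    {
        ThreadPool.QueueUserWorkItem(_ =>
        {
            try
            {
                byte[] body = Encoding.UTF8.GetBytes(reportText);
                var request = (HttpWebRequest)WebRequest.Create("https://reports.dotnetopenauth.net/");
                request.Method = "POST";
                request.ContentLength = body.Length;
                using (Stream stream = request.GetRequestStream())
                {
                    stream.Write(body, 0, body.Length);
                }
                using (request.GetResponse()) { }
            }
            catch (Exception)
            {
                // Reporting is best-effort; never let it affect the web site.
            }
        });
    }
}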

If your site isn’t running for at least one reporting interval, a report will not be sent.  So most of your self-hosted “localhost” sites run by Personal Web Server in Visual Studio will not generate these reports.

What’s in the report?

So what all information is actually in this report anyway? Well, here's a sample:

{be791692-2573-41b4-bd6f-6f4760cf186c}
DotNetOpenAuth, Version=3.3.4.10015, Culture=neutral, PublicKeyToken=2780ccd10d57b246 (official)
.NET Framework 2.0.50727.4927
====================================
requests.txt
http://localhost:4856/login.aspx
http://localhost:54347/User/Authenticate
====================================
cultures.txt
en-US
====================================
features.txt
OpenIdLogin
OpenIdButton
AXFetchAsSregTransform
OpenIdRelyingParty StandardRelyingPartyApplicationStore StandardRelyingPartyApplicationStore
ClaimsRequest
FetchRequest
ClaimsResponse
XrdsPublisher
====================================
event-Yadis.txt
4	XRDS referenced in HTTP header
8	XRDS referenced in HTML
1	XRDS in initial response
====================================
event-PositiveAuthenticationResponse.txt
2	https://www.myopenid.com/server
2	http://localhost:45235/provider
====================================
event-NegativeAuthenticationResponse.txt

At the top of the file is a self-identifying GUID.  It means nothing, except that the GUID is the same for all reports that come from your web site, allowing the reporting database to update the records for your web site (whether we even know the URL of your web site or not) with the latest report. 

Then we have the DotNetOpenAuth version and CLR you’re using.  It turns out that we not only use this for statistical purposes, but it also allows the reporting server that receives the report to check whether the version you’re using is on a list of versions with known exploitable security holes.  Since for many web sites authentication is a feature that’s completed and never considered again, someone using an old version of an authentication library may be exposed to security holes that a newer version would correct.  So what happens if the version of DotNetOpenAuth that’s reporting in is one with known security holes?  Not much.  But if you have DotNetOpenAuth error logging (usually via log4net) enabled, an ERROR is logged with a message describing the problem and warning the web developer to upgrade.  In the meantime the library on the web site continues functioning.  I actually debated with myself whether to install a kill-switch, since perhaps disabling authentication on a web site altogether is better than leaving the site operational with a nasty security hole.  But I decided against that… and I suspect most web site owners would agree that it’s not my decision to make whether to shut down your web site. :)  So I leave it at emitting a warning in your ERROR log.

The requests.txt section gives just the first few URLs on the site that either host DotNetOpenAuth’s ASP.NET controls or programmatically process OpenID/OAuth/InfoCard messages.  Currently the longest this list of URLs can get is 3, since some sites like blogs may have login controls on every one of their blog post pages and I mostly just want to get an idea of who’s using the library.  Note that these URLs have their query strings stripped off before being included in the report to try to avoid accidental private information disclosure.  You can omit this section in your reports by setting the <reporting> element’s includeLocalRequestUris="false" attribute.

The cultures.txt section just reports what the browser that’s accessing your web site says is its primary culture.  This will help me know which languages to offer localized error messages for.  Each report is limited to only reporting 20 cultures.  You can omit this section in your reports by setting the <reporting> element’s includeCultures="false" attribute.

The features.txt section lists which parts of the DotNetOpenAuth library have been used by your web site.  Pretty straightforward.  You can omit this section in your reports by setting the <reporting> element’s includeFeatureUsage="false" attribute.

The event-Yadis.txt section is just some random statistics on how OpenID identifier pages are constructed.  You can omit this section in your reports by setting the <reporting> element’s includeEventStatistics="false" attribute.

The event-*AuthenticationResponse.txt sections tell me which remote parties DotNetOpenAuth is able to successfully interop with and which ones have issues.  Note that no user information is collected or reported.  You can omit this section in your reports by setting the <reporting> element’s includeEventStatistics="false" attribute.

Please comment with your thoughts, including any questions or concerns you may have. 

Friday, January 15, 2010

DotNetOpenAuth v3.4 now available

You can go download DotNetOpenAuth v3.4 today.  Highlights of the new version include:

  1. Support for OpenIDs issued by Google Apps for Domains.  This required special work, since Google has its own flavor of OpenID discovery that had to be supported until something like Google’s scenario gets standardized.
  2. Identifier discovery extensibility (this is how Google Apps support was enabled; the extensibility point is exposed for others as well, but use it with caution!)
  3. A new ASP.NET MVC OpenID web project template.
  4. Twitter image POST via OAuth fixed.
  5. New SSO web-ring samples added, so organizations looking to use OpenID for their SSO solution can see how it might be done.
  6. Minor bug fixes.

Please note that this is the first version to have statistical reporting enabled by default, which reports feature usage statistics and the URL of the site hosting the library back to the library authors.  To opt out of this feature, add this to your web.config file:

<dotNetOpenAuth>
  <reporting enabled="false" />
</dotNetOpenAuth>

The details included in the reports may be selectively turned on or off as well, if you are willing to contribute statistics but don't want the URL to your web site exposed, for example.  More information can be found in my follow-up post: DotNetOpenAuth’s “call home” reporting.

Don’t forget to donate to the cause if you like the library.