Friday, November 23, 2007

Verizon DSL redirects bad URL requests

I use Verizon DSL.  Verizon just activated a service that gets their DNS server to lie in response to requests for non-existent domain names.  Instead of returning a "domain not found" error, they redirect you to their own search engine, complete with their own ads. 

No, thank you, Verizon, I'll take the error message and fix my own typos.  Verizon doesn't advertise it, but customers can opt out of this service by visiting this URL:

http://netservices.verizon.net/portal/link/help/item?case=c33208

Monday, November 19, 2007

Visual Studio 2008 RTM today

Microsoft Visual Studio 2008 is done, and can be downloaded today by MSDN Professional subscribers.

See a previous post for some of the most exciting features of this new release.

Monday, November 05, 2007

Visual Studio 2008 and .NET 3.5 to ship this month

In a press release today, Microsoft announced it will ship Visual Studio 2008 and .NET Framework 3.5 this month.

This is a truly remarkable landmark release.  My favorite new features include:

  1. LINQ
  2. WPF designer support built into the IDE
  3. Target .NET 2.0, 3.0, or 3.5, all from the same IDE

Monday, October 01, 2007

WPF Gmail clone: end of the road

Due to patent and copyright concerns, and to avoid any trouble from Google, I cannot release the source code to the WPF Gmail clone sample.  I have decided instead to break the sample apart into little pieces, change the way they look and behave enough not to infringe on Google's interests, and then post the sample bits with source.  I'll include any tips, tricks, and bug workarounds that I ran across in the development of each piece.

I hope to begin a stream of posts on this subject this week.

Thursday, September 27, 2007

Advertising bubble to burst or a new tax: take your pick

The dot-com bubble burst

I fully expected the dot-com bubble to burst.  The entire premise driving all the enormous bank loans, the promise of huge returns from the newly discovered worldwide market, was mistaken.  Obviously.  It's old news now, so it won't impress you for me to recite it as I understood it before it happened, but indulge me while I walk through the reasoning, because I predict it will happen again.

The premise behind the dot-com boom was that a small upstart company could reach the ends of the earth with its marketing and sell an unprecedented amount of its products and services.  What they failed to account for was the finite quantity of disposable income in each household and business.  Just because everyone can suddenly sell their wares to virtually everyone on earth, that does not mean that everyone on earth can suddenly afford to buy everything that is suddenly available to them.  Consider: these web sites were built to make money by collecting that money from people.  For that to be successful, these new customers would have to reallocate their money to this web site from something else they would have spent it on. 

It's so basic, but somehow people missed the repercussion of this: overall, the web couldn't generate the enormous amount of money it was supposedly destined to.  Take Frank, who used to buy his diapers at the grocery store.  Now Diapers.com wants Frank to buy diapers from them instead.  Frank decides to go with the online diapers, and spends less at the grocery store.  The grocery store now makes less money.  Money wasn't created out of thin air here; it was just spent in a new place.  Now consider that many web sites were selling diapers.  There is a fixed number of people in the world even interested in diapers, and grocery stores were happy to supply them.  Add a few dozen web sites that want to sell them, and a few people might switch, especially with online discounts, but in the end no new money was generated.  And each web site would only capture a small share of those customers. 

Now instead of diapers (hey, I just had a baby a few months ago), substitute in each product or service that was available.  With a finite number of people, each with a finite budget, you don't get the orders-of-magnitude growth people were predicting.  When this realization spread around, we had the enormous implosion we know as the dot-com bubble burst.

This didn't come out as clearly as I had hoped, and as clearly as I think I have it in my head.  I hope it makes sense to you.

An advertising bubble that's getting thin

Now consider the overall feeling people have for the web: if it's not free, it's worthless.  Ironic, but very true.  No one wants to pay for online email, storage, search or backup.  There are plenty of free services on the Internet right now offering these, sponsored by advertising.  Why pay when you can get it free, right?

Well, why do those advertisers sponsor your free storage and email?  Because they hope you'll click on their ads and spend your money on their wares.  But wait... we just agreed that people don't like paying for stuff on the Internet.  If it's a digital product or service (highly profitable because it can be reproduced for virtually nothing), people will expect it to be free.  How do you make a business out of that?  We just covered that as well: advertising.  You can make anything free by sponsoring it with advertising. 

So what we have is a self-supporting circle.  A perpetual motion machine.  An outlet that provides power from a plug that plugs into... itself.  If nearly everyone gets onto the advertising bandwagon in order to provide free services, no one can make money and the system will collapse.

Except for one market: tangible, retail goods.  People will expect to pay for their diapers for a long time yet.  If a diaper manufacturer chooses to advertise on a web site and a person clicks their ad, that person is likely to be willing to pay money for the product.  This is where the new Internet tax will come into play.

Those products that actually can make money in and of themselves (like diapers) will effectively be sponsoring the entire free Internet.  Since we've seen how advertising cannot pay for itself in an endless circle, those who actually make money by selling material goods will end up supporting the whole infrastructure.  Where will they get the money to pay for all these free services on the Internet?  Why, from all of us, of course, through the diapers they sell.  With every product you buy, whether on the street or on the Internet, a huge chunk of what you pay will go toward advertising expenses.  And thus everyone who buys real goods (which is everyone) will be paying for the "free" services on the Internet. 

When everyone is paying for everything that is supposedly free, doesn't that sound a bit like communism?  Capitalism suggests that if people pay for the services they use (as opposed to paying for all services, regardless of use, through taxes), competition will bring the best innovation and products to market and the invisible hand gets to work.

The middle ground

But people won't pay that much for diapers.  Competition will come in: those who don't advertise as much will offer significantly lower prices on diapers than those who do, people will figure that out, and they'll buy the cheaper ones.  The advertising revolution that makes everything free will not have the funding it requires.  Advertisers won't make the money they thought they would, and will cut back on their advertising budgets.  Web sites that were once free will have to charge something, either for all their services or for some premium offering.  And the whole advertising industry that was supposed to boom on the Internet in the coming years will be much smaller than the experts seem to anticipate.

I believe we're at least halfway to the end result now.  I don't actually expect us to reach the extremes I described here in the process of reaching the end.  Both of the factors that went into my dramatic extremes will work to keep us somewhat in balance along the way to realizing that, just like the dot-com burst, the advertising arena isn't a magic bullet that leads to an enormously larger economy.

Monday, September 10, 2007

Back to Blogger

I left Blogger back in October 2005 to switch to Community Server as my blogging platform.  I had the fortune of working at an institution that was happy to lend me some server space where I could host it myself.  Now I need to move off of their server, which left me wondering which blog platform to choose.

The search for a blog system

I considered GoDaddy, which offers a red-carpet Community Server setup for just a few dollars a month, but they limit database size to 200MB and mine was over 400MB.  Also, there was no convenient story (that I could figure out, anyway) for transporting all of my existing posts and comments to that server, since they do not let you restore databases that were backed up from a non-GoDaddy server, and their interface for executing database commands directly against their server is a joke (exactly one operation at a time -- no 1MB SQL file that will regenerate all my data at once).

I was about to switch to BlogEngine.NET and stick with GoDaddy as my hosting provider.  BlogEngine.NET had the advantage of not depending on a database at all, using XML files on the web server instead (whether that's actually an advantage is arguable).  It might have worked, but their themes don't look as nice, and I found many features lacking.  I actually spent an afternoon adding OpenID support to it, but OpenID on .NET 2.0 (which is what GoDaddy is limited to) requires unsafe C# code, which GoDaddy of course will not allow.  So I realized I was spending a lot of extra time building up an open-source blogging engine in order to use features that wouldn't work with my hosting provider.

After considering one other hosting provider and finding them unreliable, I looked back to my roots: Blogger.  I left it a couple years back because I found it limiting in what I could do to the template.  Looking at it closely lately, I see that I can do almost everything to it that I could with any other blogging engine.  I can't make it support OpenID or InfoCard (as I would have liked on my blog) but I couldn't do it with any of those other blog platforms either so I haven't really lost anything.

Moving my posts to the new blog

But I still had the problem of moving all my posts from Community Server to Blogger.  Microsoft's Windows Live Writer, a smart client app, could download posts from my Community Server blog and then post them to Blogger, but the process was one post at a time and tedious.  To make matters worse, the publishing date for every post would show up as the same day (today).  I really didn't want that.  Blogger also doesn't let you pre-date posts using their web UI, so I couldn't manually fix each one either.

Well, Blogger offers a .NET API for manipulating your posts programmatically.  In about an hour I whipped up a C# app that would read the atom feed from Community Server and upload every post to Blogger, preserving the content and publishing date of each post (anyone want the program?).  I did run into the problem that Blogger only allows 50 posts per day.  Of course I didn't know about this policy, so I banged my head against the wall for a couple of hours trying to figure out why posts were supposedly uploaded successfully (the API returned no errors) and yet they didn't show up in Blogger.  I'll finish uploading my posts in 24 hours when my counter is reset.
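For the curious, here's the gist of the approach in a minimal sketch -- not the actual app.  The feed URL, blog ID, and auth token are placeholders, and real code would XML-escape the values and watch out for that 50-post quota:

    using System;
    using System.Net;
    using System.Text;
    using System.Xml;

    class BlogMigrator {
        static void Main() {
            // Read the old blog's atom feed (hypothetical URL).
            XmlDocument feed = new XmlDocument();
            feed.Load("http://old.example.com/blogs/main/atom.aspx");

            XmlNamespaceManager ns = new XmlNamespaceManager(feed.NameTable);
            ns.AddNamespace("atom", "http://www.w3.org/2005/Atom");

            foreach (XmlElement entry in feed.SelectNodes("//atom:entry", ns)) {
                Post(entry.SelectSingleNode("atom:title", ns).InnerText,
                     entry.SelectSingleNode("atom:published", ns).InnerText,
                     entry.SelectSingleNode("atom:content", ns).InnerText);
            }
        }

        static void Post(string title, string published, string content) {
            // Build a bare atom entry; <published> is what preserves the original date.
            string entry = String.Format(
                "<entry xmlns='http://www.w3.org/2005/Atom'>" +
                "<title type='text'>{0}</title>" +
                "<published>{1}</published>" +
                "<content type='html'>{2}</content>" +
                "</entry>", title, published, content);

            WebClient client = new WebClient();
            client.Headers.Add("Authorization", "GoogleLogin auth=...");  // token elided
            client.Headers.Add("Content-Type", "application/atom+xml");
            client.UploadData("http://www.blogger.com/feeds/BLOGID/posts/default",
                "POST", Encoding.UTF8.GetBytes(entry));
        }
    }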

I was not able to transfer the comments.  They weren't included in the atom feed from Community Server that I was reading, and Blogger's API doesn't appear to provide an interface to add comments programmatically.

Forwarding users to the new location

It's simple enough to forward visitors who used my home page and newsfeed URLs to the new locations.  But each of my 64 posts will need redirectors as well.  That's a lot of tedious redirect pages to write.  I'm not sure how best to handle that yet, but I think I'll write a script to automate it. 

And about those inter-post links that are now bad

I have lots of intra-blog post links in my blog.  All those links now have invalid URLs.  Grrr.... another C# app to whip up.  After uploading the posts to the new location, I'll download atom feeds from the old and the new blog systems, find the correlating posts and build up a URL substitution dictionary, and then programmatically go through each post on Blogger and do a search-and-replace for each link. 
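In sketch form, assuming hypothetical helpers (LoadFeedLinks, GetPosts, and UpdatePost stand in for the atom feed and Blogger API plumbing):

    // title -> URL maps from each feed; titles assumed unique enough to correlate on.
    Dictionary<string, string> oldLinks = LoadFeedLinks(oldFeedUrl);
    Dictionary<string, string> newLinks = LoadFeedLinks(newFeedUrl);
    Dictionary<string, string> urlMap = new Dictionary<string, string>();
    foreach (KeyValuePair<string, string> pair in oldLinks) {
        if (newLinks.ContainsKey(pair.Key)) {
            urlMap[pair.Value] = newLinks[pair.Key];  // old URL -> new URL
        }
    }

    // Search-and-replace each old link in every post on Blogger.
    foreach (BlogPost post in GetPosts()) {
        string fixedContent = post.Content;
        foreach (KeyValuePair<string, string> pair in urlMap) {
            fixedContent = fixedContent.Replace(pair.Key, pair.Value);
        }
        if (fixedContent != post.Content) {
            UpdatePost(post, fixedContent);  // PUT the edited entry back
        }
    }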

In fact that same app should be able to create the redirect pages I need as well.

Tuesday, September 04, 2007

An update on the cloned Gmail interface using WPF

I've spent a couple dozen hours now working through the details of the Gmail clone I began about a month ago.  I've been learning more of the ins and outs of WPF's RoutedCommand model and how to process keystroke input.  While WPF has a very powerful routed event model, it's so very different from WinForms that it's taken me several hours to get the hang of it -- I'm still working on it, too.
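As a taste of the model, here's a generic sketch (not code from the clone; the command and the Ctrl+E gesture are invented for illustration):

    using System.Windows;
    using System.Windows.Input;

    public class MailWindow : Window {
        // A custom routed command, standing in for something like "archive message".
        public static readonly RoutedCommand Archive = new RoutedCommand();

        public MailWindow() {
            // Route the command to handlers when it bubbles up to this window.
            CommandBindings.Add(new CommandBinding(Archive, OnArchive, CanArchive));
            // Map a keystroke to the command.
            InputBindings.Add(new KeyBinding(Archive, Key.E, ModifierKeys.Control));
        }

        void OnArchive(object sender, ExecutedRoutedEventArgs e) {
            // archive the selected message here
        }

        void CanArchive(object sender, CanExecuteRoutedEventArgs e) {
            e.CanExecute = true;  // e.g., whenever a message is selected
        }
    }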

Anyway, although I am not yet free to post the source code or to publish a working binary of the app for you to try out, here are a couple of recent screenshots from the current version:

Wednesday, August 01, 2007

New WPF developer clones Gmail interface in 4 hours

[Update: A newer screenshot available on a more recent post]

Although Visual Studio 2005 and the appropriate extension can enable WPF development, with the release of Visual Studio 2008 beta 2 we have seamless integration of Windows Presentation Foundation (WPF) application development into the IDE. 

I am disappointed that we haven't seen more WPF apps out there.  Having read a book on the new presentation framework that comes with every copy of Vista and can be downloaded for Windows XP, I wanted to see how quickly I could clone the Gmail interface.  I like the Gmail interface, and it really demonstrates the power of AJAX.  Nevertheless I suspect that Gmail is the product of many thousands of hours of work to make the interface what it is.  How fast could a WPF application replicate that?

How about 4 hours?


Click for full-size screenshot.

In those four hours I got data-binding for the Inbox working along with the right layouts so that it resizes appropriately.  The look isn't perfect yet, and there's only that opening screen that I have working so far.  Four hours isn't enough to replicate the whole Gmail interface of course, but it's pretty amazing what can be done in such a short time!
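To give a flavor of what that involved, here's a minimal sketch of that kind of data-binding (not the sample's code; the Message class and its members are invented):

    using System.Collections.ObjectModel;
    using System.Windows.Controls;
    using System.Windows.Data;

    public class Message {
        private string sender, subject;
        public string Sender { get { return sender; } set { sender = value; } }
        public string Subject { get { return subject; } set { subject = value; } }
    }

    public static class InboxFactory {
        public static ListView CreateInbox(ObservableCollection<Message> messages) {
            // Each column pulls its cell text from a Message property via a binding.
            GridViewColumn senderColumn = new GridViewColumn();
            senderColumn.Header = "Sender";
            senderColumn.DisplayMemberBinding = new Binding("Sender");

            GridViewColumn subjectColumn = new GridViewColumn();
            subjectColumn.Header = "Subject";
            subjectColumn.DisplayMemberBinding = new Binding("Subject");

            GridView grid = new GridView();
            grid.Columns.Add(senderColumn);
            grid.Columns.Add(subjectColumn);

            // ObservableCollection notifies the ListView as messages come and go.
            ListView listView = new ListView();
            listView.View = grid;
            listView.ItemsSource = messages;
            return listView;
        }
    }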

When I've got more to show off, I'll try to publish the .xbap so you can try it out, and perhaps even the whole source code as a working sample.

Anthony discusses the difference between Visual Studio 2005 with the WPF extensions CTP and Visual Studio 2008.

Sunday, July 29, 2007

SPSS.NET wrapper library now hosted on CodePlex

The C# library I wrote that wraps the SPSS I/O library (spssio32.dll) for managed applications is now hosted on CodePlex.  It hasn't been developed much in the last few years, but what is already there is quite useful and exposes (nearly) all the functions of spssio32.dll.  Many developers have already started using it.  I recommend you try it out if you're interested in managed applications interfacing with SPSS data files.

CodePlex project: http://www.codeplex.com/spss

Thursday, July 19, 2007

Finally, an OpenID provider that takes Information Cards as authentication

I don't know why this took so long to surface.  Maybe it just took me a while to discover it.  But honestly, I don't know why there is only one service (that I can find) that offers this.  It's the perfect combination of phishing protection and single sign-on usefulness, and it will make the web significantly safer.

Without going into a whole history of OpenID and InfoCard (aka Windows CardSpace, part of .NET 3.0), let me just sketch the problem and solution for you.  OpenID is open to a variety of phishing attacks that are especially dangerous because "one login to rule them all", once stolen, can become as useful to the phisher (and as dangerous for you) as the One Ring in the wrong hands.  All your sites immediately open up to the phisher of just that one login.  What makes this especially precarious is that OpenID relies on the site taking your OpenID to redirect you to your own OpenID provider -- something that could be spoofed pretty easily so the site itself can steal your credentials.

Some OpenID providers (such as www.myopenid.com) have mitigated this threat by starting to place browser cookies on your computer so that if you don't see a picture you chose on your login screen then you have reason to be suspicious.  In my opinion, not good enough. 

Enter InfoCard: Microsoft's completely open and decentralized authentication solution that is completely phishing-proof because there are no credentials to steal.  If someone lured you into using your InfoCard on their phishing site, all they would get is a random series of characters from your InfoCard that they would find completely unhelpful in masquerading as you on other sites.

The problem with InfoCard is that there are (to date) almost no sites out there that accept InfoCard logins.  OpenID has a lead of a few hundred sites on InfoCard.  So by combining these two technologies, you get the protection of InfoCard with the wider adoption of OpenID. 

All that has to be done is to use an OpenID provider that accepts an InfoCard as your login credentials.  Instead of logging in at your OpenID provider with the one username/password pair that could be stolen, you just submit your InfoCard and you're in.  If someone who wasn't your OpenID provider was pretending to be, they wouldn't be any closer to masquerading as you.

So which OpenID providers have offered this elegant solution?  Just one that I can find: www.signon.com.  Hurrah for leading the way!  I'm switching my OpenID from MyOpenID.com to signon.com just for the InfoCard.  (Besides, it's faster to sign on with a couple of InfoCard clicks than to type out a username and password.)

Kim Cameron maintains an identity blog and discusses the theory behind combining these technologies if you want a more in-depth read.  I suggest adding Kim to your RSS feed.

For the record, I'd personally prefer to see all sites take InfoCard directly.  It would speed things up a bit.  But what I really want are Information Cards for my credit cards so I can transact business online without revealing my credit card numbers to every merchant.

Monday, July 02, 2007

How to (not) write an especially precarious app on .NET (Compact Framework)

As the .NET Compact Framework developers work to add features, fix bugs, and refactor code, we often have to determine whether a given change could break existing customer code.  The ideal is that NetCF 3.5 will run all apps that ran on NetCF 2.0 and 1.0.  We run hundreds of apps and many, many tests before shipping each product to check backward compatibility.  The .NET Framework (both desktop and CF) makes heavy use of internal classes to allow us the freedom to change the internals of the framework without breaking customer code.  But there are still ways that customers can write apps that may break on future versions.

Compare on exception text

Some exception types are very general and don't tell you much about the error.  One of these is InvalidOperationException, which can be thrown for a wide variety of reasons by many classes in our BCLs.  Developers usually look at the Exception.Message property to get an idea of what went wrong.  This is by design.  What is not by design is for developers to write code in their apps that looks at the Message property and makes code path decisions based on it.  Here is an (admittedly highly contrived) example:

    using System;
    using System.Net.Mail;
    using System.Xml.Serialization;

    class Program {
        static void Main(string[] args) {
            try {
                XmlSerializer serializer = new XmlSerializer(typeof(MailAddress));
            } catch (InvalidOperationException ex) {
                // BAD: branching on the localized, version-specific Message text.
                if (ex.Message == "System.Net.Mail.MailAddress cannot be serialized because it does not have a parameterless constructor.") {
                    // Print message to user saying he chose a bad type to serialize
                }
            }
            // Success!
        }
    }

This can break in at least two instances:

  1. The text in Exception.Message is localized to the computer running your app, so this code will break if it runs on, say, a Spanish computer.
  2. A subsequent version of the .NET Framework may change the Exception.Message text to be more descriptive, more accurate, or whatever. 

Either one of these likely cases may cause your code path to take a wrong turn (in this case, to assume success).  Instead, you should write code that analyzes exceptions based primarily on the exception type and/or an error code (an enum or integer) where available.
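For instance, here's a sketch of branching on a numeric error code (socket and remoteEndPoint are assumed to be already defined; 10060 is the standard Winsock timeout code):

    try {
        socket.Connect(remoteEndPoint);
    } catch (SocketException ex) {
        // Branch on the stable numeric code, never on the localized Message text.
        if (ex.ErrorCode == 10060) {  // WSAETIMEDOUT
            // report the timeout and offer a retry
        }
    }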

Note: It is generally OK to display the Exception.Message to a user of an app in the form of a MessageBox or a log file (there are security considerations in doing this, however) and let the user choose how to proceed.

Compare on Exception type

Another bad way of doing exception handling is to do absolute equality checking on exception types.  Here's another bad (and contrived) example:

    using System;

    class Program {
        static void ThrowBadArgument(int positiveValue) {
            if (positiveValue <= 0)
                throw new ArgumentException("Must be positive", "positiveValue");
        }

        static void Main(string[] args) {
            try {
                ThrowBadArgument(-5);
            } catch (Exception ex) {
                // BAD: an exact type match misses derived exception types.
                if (ex.GetType() == typeof(ArgumentException)) {
                    // Oops, the user provided a non-positive number!
                }
            }
        }
    }

Now suppose that the developer (or vendor) supplying you with your ThrowBadArgument method decided it was more appropriate to throw an ArgumentOutOfRangeException.  This would break your code and again cause your program to inappropriately assume success.

Here is a corrected example:

    using System;

    class Program {
        static void ThrowBadArgument(int positiveValue) {
            if (positiveValue <= 0)
                // Note ArgumentOutOfRangeException's (paramName, message) argument order.
                throw new ArgumentOutOfRangeException("positiveValue", "Must be positive");
        }

        static void Main(string[] args) {
            try {
                ThrowBadArgument(-5);
            } catch (ArgumentException) {
                // Oops, the user provided a non-positive number!
                // Catching the base class catches ArgumentOutOfRangeException too.
            }
        }
    }

Note in this latter example how the throwing method uses the derived class.  Catching the base class in this way allows us to catch either ArgumentException or ArgumentOutOfRangeException equally well.

And if you need to check the exception type using an if statement (to check the type of Exception.InnerException, for example), be sure to use the is keyword rather than Exception.GetType() == typeof(...).  For example:

    // Good example
    if (ex.InnerException is ArgumentException) {
        // do stuff based on inner exception
    }

    // Another good example (albeit harder to read)
    if (typeof(ArgumentException).IsInstanceOfType(ex.InnerException)) {
        // do stuff based on inner exception
    }

    // BAD example: an exact match misses derived types like ArgumentOutOfRangeException
    if (ex.InnerException.GetType() == typeof(ArgumentException)) {
        // do stuff based on inner exception
    }

Finding public methods using reflection and parameter names

There's no good way to do this.  You should only find methods based on their parameter types, not their parameter names.  The reflection API makes it easy to do it right, and harder to do it wrong.  Here is an example of how to do it right, and wrong:

    using System;
    using System.Globalization;
    using System.Reflection;

    class Program {
        static void Main(string[] args) {
            // Good example: find the overload by its parameter type.
            // (int.ToString's single-argument overload takes an IFormatProvider.)
            MethodInfo m = typeof(int).GetMethod("ToString", new Type[] { typeof(IFormatProvider) });
            Console.WriteLine(m.Invoke(3, new object[] { CultureInfo.CurrentCulture }));

            // Bad example: find the overload by its parameter name.
            foreach (MethodInfo method in typeof(int).GetMethods()) {
                ParameterInfo[] parameters = method.GetParameters();
                if (parameters.Length == 1 && parameters[0].Name == "provider") {
                    m = method;
                    break;
                }
            }
            Console.WriteLine(m.Invoke(3, new object[] { CultureInfo.CurrentCulture }));
        }
    }

You should not take a dependency on the parameter names of methods you call.  The .NET compilers take care of this for you, but if you go around them by using reflection, and use parameter names rather than parameter types to find the method overload you want, you're asking to be broken if those parameter names ever change (and they can!).

Finding internal-only methods or types using reflection

Again, there is no right way to do this.  Getting into the internals of a library by using reflection requires full trust and means you're just asking for your app to break in the next version of the library when those internals get changed.

In conclusion...

If you follow these tips, your apps will be more likely to perform well on current and future versions of the .NET Framework, on your own as well as your customers' locales.

Friday, March 30, 2007

How to get sound working on Virtual PC 2007 with Vista guest OS

Virtual PC 2007 added a new sound system specifically for using Vista as a guest and host OS.  But when you install Vista as a guest OS, there is no sound!  A search on Google and Live Search didn't turn up anything about how to use it. 

Eventually I found that after you install the VM Additions, the sound driver is silently copied into the guest OS's "C:\Program Files\Virtual Machine Additions" folder.  All you need to do to get sound working is "update" your audio controller driver within your Vista guest OS, tell it you Have Disk..., point it at that folder, and voila!  Beautiful sound (without any restarts, either).

Thursday, March 08, 2007

Microsoft releases .NET Compact Framework 2.0 SP2

This morning Microsoft released the second service pack for the .NET Compact Framework 2.0.  Lots of work and bug fixes went into this service pack, with the priority on improving quality without introducing new bugs.  You can read about the changes and get the SDK and device downloads at Microsoft's Download site.

Wednesday, February 28, 2007

A C# programmer's first experience wading in Boo

Boo is a statically typed .NET language with a Python scripting feel.  It's neat because you have less to type (and maintain) and get much of the same functionality.  SharpDevelop also has lots of built-in support for it, which made trying it out easy.  I decided to try it out when writing a program that would do all the mind-work for me in playing the Clue board game.

Here I summarize my experience in bullet form.

Pros:

  1. Easy to pick up and learn as you go (mostly).
  2. Type inference means less code.
  3. Very responsive user base on its Google Group.
  4. A great example of a prototyping language that can grow into large projects.
  5. Decent IL is emitted by the compiler.

Cons:

  1. No support for member shadowing (a child class redefining a parent's member with a different type or different attributes).
  2. I miss some of the compile-time errors I get from common mistakes that the C# compiler gives me.
  3. The compile-time errors the Boo compiler does give are difficult to read when trying to figure out the root cause.
  4. Arrays are confusing (for me, but I'm not a python developer).
  5. The debugger in SharpDevelop (at least for Boo projects, I don't know if it's for all languages) does not support watches, or setting the instruction pointer, or attaching to running processes.  It's amazing what the open-source community has done so far, but a lot of work remains to be done.

Summary

Try Boo out and give feedback.  I have to switch to another language for now to get my project finished, but it's a good start for the language.

Tuesday, February 27, 2007

A new .NET OpenID implementation written in C#

Quite a stir has been raised about Janrain's popular .NET implementation of OpenID being written in Boo rather than C#.  Personally, I take my hat off to the folks at Janrain, who worked diligently to provide this free, open-source OpenID implementation.  To me the language of the library is inconsequential, because I reference it as a compiled assembly and the language never affects me.  The spirit of .NET is all about using the best language for the job.  Others, it seems, are affected by it more.

A band of .NET developers has begun to port the library to C#.  I might have dismissed the effort, as I wished to support the original author's work in Boo, but I recently learned that the Janrain implementation is being retired in favor of this upstart C# effort.  With that news, I'm throwing my hat in the ring and joining the C# effort.  The project site is on Google Code.  We also have a discussion board on Google Groups.

My immediate plans are to help port the library to C# as needed and build my ASP.NET controls into their library.  We're excited to be working together on this project, and look forward to making a great OpenID library.  We'd like for this library to make it drop-in easy to become an OpenID server (the way my existing controls make it drop-in easy to be an OpenID consumer).

There has been plenty of talk about InfoCard and OpenID working together to prevent phishing attacks.  I am investigating how to make a library that brings an OpenID server and an InfoCard consumer together for any web site in a drop-in fashion as well.  This InfoCard addition will probably come as a separate library, as it won't be Mono-compatible (until those folks implement .NET 3.0).  I'll keep you informed on this blog.

Thursday, February 01, 2007

Why your NetCF apps fail to call some web services

Here's the scenario: you are writing a NetCF app and trying to call a web service from it.  You generated the code for the client proxy class using Visual Studio's "Add Web Reference" command.  Code is generated, you call into it, and you run your app.  The call fails with a cryptic error from the web service saying something about a malformed message.  If you try the same thing from a desktop app, it works perfectly.  Sound familiar?  Read on for the reason and solution.

I'll address the common cause here: xml element mis-ordering.

Background

Web services and serializers

Calling web services involves serializing objects and sending the resulting xml to the web service, and deserializing the xml response back into objects.  This is usually totally transparent to the developer, who just invokes a method and the magic happens behind the scenes.  The serialized objects come from classes that are usually generated for you by Visual Studio or command-line tools like wsdl.exe or svcutil.exe.  They are constructed based on the service WSDL in such a way that their serialized format matches what the service is expecting, and such that they can be deserialized from the xml the service will respond with.  These classes are called "client proxy classes."

Both the desktop .NET Framework and the .NET Compact Framework use the XmlSerializer class for serializing and deserializing these objects.  When using the WCF stack, the desktop framework will use its recently added DataContractSerializer instead of the XmlSerializer.  Both of these serializers rely on reflection to query the generated client proxy classes and to generate the required xml. 

Reflection

The .NET runtime does not ever care about the order in which a given type's members are declared.  For example, the class:

class Fruit {
    public int seeds;
    public string color;
}

Is equivalent to

class Fruit {
    public string color;
    public int seeds;
}

This makes sense.  Unfortunately, because of this, when you use reflection to query the members of a type, the order of those members is not guaranteed to match declaration order, or even to be consistent as you switch between versions of the .NET Framework.  Indeed, the order in which reflection returns members changed between versions 1.1 and 2.0 of the framework. 
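A quick way to see this for yourself (the output order may differ between framework versions, and nothing promises it will match the source file):

    using System;
    using System.Reflection;

    class ReflectionOrderDemo {
        static void Main() {
            // Nothing guarantees these print in declaration order.
            foreach (FieldInfo field in typeof(Fruit).GetFields()) {
                Console.WriteLine(field.Name);  // maybe seeds then color -- maybe not
            }
        }
    }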

How non-deterministic ordering affects web services

Because the serializers in .NET rely on reflection, they are affected by the non-deterministic ordering of members that reflection provides.  The order of elements serialized will change with whatever order reflection provides members in.

The .NET designers anticipated this problem, and provided a way to force a specific ordering of serialized elements.  The xml serializer attributes that you can decorate your serializable classes with support an Order property that you can use to guarantee the desired ordering of elements.  For example, you could force the Fruit class used earlier to put seeds before color, no matter what order reflection provides, by changing the class to read like this:

class Fruit {
    [XmlElementAttribute(Order = 1)] public int seeds;
    [XmlElementAttribute(Order = 2)] public string color;
}

The attributes will retain the Order property's value regardless of reflection order, and the serializer uses that information to keep the elements in the order the app developer intended.

The problem: why it "works" on desktop's framework and not on NetCF

Although neither desktop nor NetCF's framework guarantees ordering, it so happens that desktop's serializer preserves the order better than NetCF's.  Desktop's serializer isn't perfect either, but it's predictable enough that developers take its order for granted and then wonder why NetCF's serializer doesn't behave the same way. 

Unfortunately, the wsdl.exe and Visual Studio IDE developers were among those who seem to have forgotten that ordering is not guaranteed unless explicitly defined, and so neither generates the code in the client proxy classes to set the Order properties necessary to guarantee the correct ordering.  It seems they assumed that declaration order is the default, since the desktop framework (mostly) works that way. 

The wsdl.exe tool does offer an "/order" switch that will set the order explicitly, but unfortunately this command-line tool generates code that won't compile in NetCF projects because of the limited API exposed by NetCF. 

The workaround

So until the code is fixed in Visual Studio and/or wsdl.exe, you have a couple of options to get your NetCF projects to call these web services that require specific order:

  1. Use wsdl.exe /order to generate your client proxy class, and then remove all the code not supported by NetCF until your project compiles. 
  2. Use Visual Studio to generate your client proxy class, then run wsdl.exe /order in some other directory, and copy just the lines of source code that give the explicit ordering from the resulting class into your project source file, as in the sketch below.
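For reference, explicitly ordered members in a generated proxy class look roughly like this (the type and member names here are invented, not from any real WSDL):

    public class SendMessageRequest {
        // wsdl.exe /order stamps each serialized member with an explicit index,
        // so the XmlSerializer no longer depends on reflection order.
        [System.Xml.Serialization.XmlElementAttribute(Order = 0)]
        public string Recipient;

        [System.Xml.Serialization.XmlElementAttribute(Order = 1)]
        public string Body;
    }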

The ultimate fix

There are a few fixes I'm personally working on to help alleviate this inconvenience for app developers.

  1. I'm trying to get Visual Studio Orcas to include explicit ordering in all generated client proxy classes.
  2. I have changed NetCF's XmlSerializer to preserve declaration order in serialization as closely as possible (still not guaranteed).  You'll have to be running NetCF 2.0 SP2 or later to get this fix. 
  3. The svcutil.exe tool (which deprecates wsdl.exe) automatically generates explicit ordering code where required, with no extra steps for the app developer.

Summary

The XmlSerializer cannot guarantee element order on either the desktop .NET Framework or the .NET Compact Framework unless the developer gives the order explicitly; in some cases the "default" ordering will behave as you expect, and in some cases it won't. 

The bottom line: where element order is important, use XmlElementAttribute.Order, XmlArrayAttribute.Order, and the other ordering attributes as necessary.

Tuesday, January 09, 2007

Getting OpenID user profile information using JanRain's .NET assembly

I previously posted regarding the ASP.NET controls I wrote to wrap JanRain's .NET implementation of OpenID.  I have updated those controls to automatically request user profile information from your visitors' OpenID providers as needed.  This post discusses how you can do that.

The JanRain library provides the AuthRequest.ExtraArgs NameValueCollection for adding the profile variables you wish to get from the OpenID provider.  By filling in that collection, and then reading the response from HttpContext.Request.QueryString, you can readily retrieve the desired user data (including name, gender, zip, email, etc.).  It does take reading the spec (which is simple) to learn what those variable names and values are.
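Here's a minimal sketch of that manual approach.  The parameter names come from the OpenID Simple Registration extension spec; authRequest stands for your JanRain AuthRequest instance:

    // Ask the provider for an email address (required) and full name (optional).
    authRequest.ExtraArgs.Add("openid.sreg.required", "email");
    authRequest.ExtraArgs.Add("openid.sreg.optional", "fullname");
    authRequest.ExtraArgs.Add("openid.sreg.policy_url", "http://example.com/privacy");

    // ...then, after the provider redirects back to your return URL:
    string email = HttpContext.Current.Request.QueryString["openid.sreg.email"];
    string fullName = HttpContext.Current.Request.QueryString["openid.sreg.fullname"];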

Or you can use my updated ASP.NET OpenID controls.  Just download and use them as outlined in my previous post, and then set any combination of these properties on the controls to get the profile information you want:

  • RequestNickname
  • RequestFullName
  • RequestEmail
  • RequestBirthdate
  • RequestGender
  • RequestPostalCode
  • RequestLanguage
  • RequestCountry
  • RequestTimeZone

Setting any of these properties to true should cause the user's provider to ask for the user's consent to release the information you ask for (as the spec dictates), and provided they agree, the details will be delivered along with the authentication response.

You can optionally set the PolicyUrl property on the controls to let the user know where your privacy policy can be found before he releases the information.

Saturday, January 06, 2007

ASP.NET drop-in control to enable OpenID logins for your site

[9/28/07: Update: this control is now being hosted as part of the dotnetopenid project on Google Code]

OpenID is gaining ground, and with good reason.  A cross-platform, cross-browser single Internet sign-on using a distributed network is very appealing.  I'll assume though that you already know what OpenID is and why it is a good choice for your web site.  This post is about how to add support for OpenID to your web site very easily.

Most of the ease is attributable to Grant Monroe (I believe), due to his work on the .NET implementation of OpenID.  While his library seems to be functional, it leaves something to be desired when it comes to actually using it on your site.  The setup it requires includes adding several lines to your Web.config file, adding a special .ashx handler class to your web project, and grabbing any request ending in login.aspx, regardless of whether and which login page it is on your site.

I was able to leverage his library (written in Boo) with a C# library of my own to put a nice custom web control frontend on his OpenID implementation.  Now, aside from adding the library to your web site's Bin directory, all you have to do is add these two lines to your login page:

<%@ Register Assembly="NerdBank.Tools" Namespace="NerdBank.Tools.WebControls" TagPrefix="nb" %> 
<nb:OpenIdLogin ID="openIdLogin" runat="server" />

Not bad, eh?  All you need before you add these lines is to drop in a compiled version of NerdBank.Tools.dll and its dependencies.  You can download a drop, or download the source using Subversion.  Both Janrain.dll and NerdBank.Tools.dll are licensed under the LGPL.

If you try this and find you want more control over the appearance than what the OpenIdLogin control offers, use the OpenIdTextBox control: a barebones but fully functional control that does the same thing while giving you more control over the UI.

Being that Janrain.dll (where OpenID is implemented) is written in Boo, with which I am not too familiar, I added my custom web control to my own existing C# tools library.  Ideally Janrain's author can take my code and rewrite it in Boo so that it can be all in one assembly for convenience, but I don't see two assemblies as a big deal in the meantime.  In order for my strong-named NerdBank.Tools assembly to reference Janrain.dll, I had to recompile Janrain.dll with a strong name.  Other than that it's the original assembly as I downloaded it from its original site.  

The Janrain implementation does not provide (as far as I can tell) for requesting user details during the login process, as the OpenID specification allows.  I also use Janrain's Ruby implementation of OpenID, which does provide this behavior, so I'm going to investigate this more and either contribute to his project or figure out how to use the feature and build it into my OpenIdLogin control, so you can just set properties and have the details delivered to you in the OpenIdLogin.OnLoggedIn event.  I'll report on my progress via another post on this blog.