Wednesday, October 17, 2007

Going (shudder) laptop for the first time...

Those who know me well know that I've always looked sideways at laptops as overpriced, underpowered, and just plain finicky pieces of hardware. Compared to a desktop, they're tougher to upgrade, tougher to repair, and tougher to install (with all the wacky custom hardware). I've usually had access to a work laptop when necessary, but I've consciously never had one as my primary machine.

Those who know me well, then, may be surprised that I've finally decided to take the plunge. My boss initially offered me a top-of-the-line Dell mobile workstation- surely the best temptation for someone who usually has a screaming desktop that gets traded out about every year. Intriguing, for sure, but after looking at both the cost (~$4k for what I'd want- yow!) and the downsides, I decided to go a different way. Sure, the mobile workstation has a docking station included, somewhat faster processors available, and a larger screen, but...

I decided to go a completely different direction. Dell's got a sexy new 13.3" laptop out in the XPS m1330. Design-wise, I'd say this thing's right up there with some of the great stuff Apple turns out, and at half the price. LED-backlit screen, super-slim profile, brushed aluminum, ooh... The fastest proc I could get was a Core2 Duo 2.2GHz, but with 4G of RAM and a 160G 7200RPM drive, that'll be plenty. "OK, fine," you say, "but a 13.3 inch screen for a developer!?" It's not ideal, but I actually think I'll be OK on this one. I rarely use the laptop screen, except when I'm traveling (it's got dual video outs, so I'll keep my pair of 19" monitors at the office). Have you ever tried using a 15" or 17" laptop on a plane when the guy in front of you has the seat back? Won't be a problem for me with this one.

I'm also bucking the trend around here and leaving Vista on it. I've been running it at home for quite a while on a few machines without too much trouble, and even did my Optimus development in Vista, so I'm not expecting too many surprises. I'm also looking forward to trying out WAS in IIS7 for some of my dev work instead of hosting my net.tcp WCF services in a separate process.

Anyway, I'm going to pull the trigger in a few days and make it my primary box, as soon as I'm comfortable that it's stable and well burned-in. I'll post my joys and/or frustrations...

Monday, October 15, 2007

Make it easy for me to give you money!

Seems like more Business School 101- the easier you make it for people to give you money, the better off your business will be. Unfortunately, with my recent experiences, it seems like a number of the SSL certificate hawks haven't learned this lesson yet.

I was recently trying to order a wildcard SSL cert for our domain at work. I've been a die-hard Thawte customer for years, so I figured I'd go with what I knew. Apparently, to get a wildcard cert (eg, one cert for * instead of one cert per unique host), Thawte requires you to use their online chat app for some reason, instead of just using their normal online ordering. OK, fine. They have a form to fill out to get a chat session (name, email, repeat email)- I filled it out, only to be presented with "Sorry, no agents available- try again later". I had to refill the form every time I wanted to see if an agent was available. Oy. After about four tries, I decided to go to a Thawte reseller.

Turns out the resellers are nice and cheap, too (~half price)- the problem is finding one that will take my money! The first two I hit don't have online ordering (for any certs!). The next one has a broken form validator that won't accept any phone number I type in. Does anybody test this stuff?

I ended up going to RapidSSL- they resell Equifax certs. Less well-known, but as long as it's included by default in our supported browsers, I don't much care.

Moral of the story: Make it easy for me to pay you! We've always got more work we could do in this department as well, but we have spent significant effort on streamlining our signup process...

Monday, September 17, 2007

Column defaults in LINQ to SQL

There's some weird behavior in LINQ to SQL Beta2 around column defaults. It seems you have to choose between using a column default and having an editable value- you can't have both. For instance, a number of our tables have a CreateDate field with a column default of GETUTCDATE(). The default O/R designer behavior for this is to set IsDBGenerated = true and ReadOnly = true. This results in a read-only property that uses the column default value of the current date/time when we insert a new row. Most of our inserts don't supply a value for CreateDate, so we get the current date/time, which is usually what we want. However, sometimes we're using the same DBML definition to migrate old data (where we'd specify a CreateDate in the past). This is where it gets problematic. The designer lets me flip that ReadOnly bit to false, and will generate a setter for CreateDate. Unfortunately, it's not usable- any value I set is ignored. Seems like it'd work fine for generated types where full change tracking is available- "not set" is distinguishable from "set to null" or "set to a value". I'm hoping this one's fixed for RTM- I've been too busy to file the bug on it.
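For reference, here's roughly what the designer generates for such a column (storage and property names here are illustrative, not from our actual DBML):

```csharp
// Sketch of the O/R designer output for a column defaulted to GETUTCDATE();
// with ReadOnly = true, no setter is generated and the DB supplies the value.
private System.DateTime _CreateDate;

[Column(Storage = "_CreateDate", DbType = "DateTime NOT NULL",
        IsDbGenerated = true)]
public System.DateTime CreateDate
{
    get { return this._CreateDate; }
    // Flipping ReadOnly to false adds a setter here, but in Beta2 any
    // value assigned through it is ignored on insert.
}
```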

Saturday, September 15, 2007

DataLoadOptions and non-default behavior

I guess this is a stupid developer trick, but I've made the mistake myself and have seen several others do the same. What's wrong with this code?

using (MyDataContext dc = new MyDataContext())
{
    DataLoadOptions dlo = new DataLoadOptions();
    dlo.LoadWith<MyFoo>(f => f.SomeChild); // hypothetical child association

    var query = dc.MyFoos.Where(m => m.SomeValue == 42);

    return query.ToList();
}

It's the missing "dc.LoadOptions = dlo" call. Surprising to me that the "LoadOptions" property on the DataContext doesn't just have an already-created DataLoadOptions. I'm sure there's a good reason for this (something to do with the immutability of that object and all), but it seems to be a common mistake around here anyway, and non-obvious, especially if the DataLoadOptions setter is stuck back in a DataContext factory somewhere. It's happened enough that "did you set the LoadOptions property?" has become the first question I ask when someone complains of a particular child object value being null.
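For completeness, the fixed version is just one more assignment (the LoadWith association here is hypothetical):

```csharp
using (MyDataContext dc = new MyDataContext())
{
    DataLoadOptions dlo = new DataLoadOptions();
    dlo.LoadWith<MyFoo>(f => f.SomeChild); // hypothetical child association
    dc.LoadOptions = dlo;                  // the easily-forgotten line

    var query = dc.MyFoos.Where(m => m.SomeValue == 42);
    return query.ToList();
}
```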

Friday, August 17, 2007

"Insert if not exists" with LINQ to SQL extension methods

SQL Server is a bit limited compared to other DB platforms where common DB idioms like "Insert if not exists" and "Insert or update" are concerned. You can do it, but you have to jump through some hoops. LINQ surfaces many of the shortcomings of the underlying SQL platform... However, with all the goodies in C# 3.0, you can roll your own special purpose extension methods to paper over some of these shortcomings.

A recent forum discussion on MSDN was kicking around different ways to achieve the "Insert if not exists" behavior. I came up with a little extension method to do it. Granted, this isn't atomic- I'll probably bake in some behavior later to make it catch a duplicate key violation and refetch (eg, another process inserted the value before you), but it serves my needs well at the moment.

(sorry- code doesn't post so well on Blogger)

public static class FetchOrCreateExtension
{
    public static T FetchOrCreate<T>(this Table<T> table, Expression<Func<T, bool>> where, T newValue) where T : class
    {
        T existing = table.SingleOrDefault(where);
        if (existing != null)
            return existing;

        // clone the DataContext so the insert doesn't disturb the caller's change tracking
        Type dataContextType = table.Context.GetType();
        string ctxConStr = table.Context.Connection.ConnectionString;
        using (DataContext newDC = (DataContext)Activator.CreateInstance(dataContextType, ctxConStr))
        {
            Table<T> writableTable = newDC.GetTable<T>();
            writableTable.InsertOnSubmit(newValue); // Table<T>.Add in earlier betas
            newDC.SubmitChanges();
        }

        return table.Single(where); // fetch on the existing context so the caching behavior is consistent
    }
}

Make use of it by "using" the namespace you put it in- then it shows up on all your table objects. To fetch or create a Foo (matching a particular predicate), do:

Foo x = dataCtx.FooTable.FetchOrCreate(f => f.Name == "SomeName", new Foo { Name = "SomeName", OtherData = "OtherCreateData" });

I'm not trying to do any key inference or anything here- that's up to you. The newly-created object had better match the predicate you passed in (and nothing else) or you'll have a problem. Anyway, this thing does what I need- hopefully it can help someone else...

Thursday, August 9, 2007

Looks like I get my wish!

UPDATE: Hmph- now it's been marked Postponed. D'oh! I guess I'll just have to live with separate contracts and string coercion for the AJAX GET requests to deal with null values.

Cool- according to the comments on my connect requests (here and here), nullable support in WebGet/WebInvoke contracts is on DevDiv's list of "things we'd like to fix before RTM" (of Orcas). Being able to remove all those stupid conditional string->enum and string->int casts from my contracts will make all the time spent filing bugs and suggestions worth it!

Friday, August 3, 2007

Null semantics in LINQ to SQL

The fun with Orcas Beta2 continues! Other than some incompatibility problems with ComponentArt's AJAX controls (which we're expecting a true fix for any day now), all the big showstoppers have been worked around thus far.

Hit a new snag today- not a showstopper, but a change from B1 behavior for sure. A while back, Dinesh was talking about the null impedance mismatch between SQL and C# and how the LINQ to SQL team was proposing handling it. Basically, C# null semantics ruled, so LINQ to SQL was consistent with all the other LINQ types (in other words, null == null). That proposal stuck until Beta2, where it appears they've flip-flopped back to SQL null semantics (null != null, or anything else for that matter). I've started an MSDN forum discussion on it as well.

I can definitely understand both sides of this argument. That said, the change makes life a little harder. The following worked in Beta1 (when the "filter" arg was null, rows where "Name" is null came back):

public IList GetFoos(string filter)
{
    var query = from j in jdc.ATable where j.Name == filter select j;

    return query.ToList();
}

This query doesn't work in Beta2- since SQL null doesn't equal anything (including null), I get no rows back. The following hack restores the correct behavior in Beta2:

var query = from j in jdc.ATables where filter == null ? j.Name == null : j.Name == filter select j;

This works, but is a little verbose on the C# side (especially when I have lots of potentially null filters on a query), and generates SQL that is also verbose. Instead of generating an IS NULL statement when filter is null, it generates an inline CASE statement in the query. Works, but probably not ideal.

What I'd really like to do is wrap the comparison in a little expression generator that would result in an IS NULL getting generated on the SQL side. Something like:

var query = from j in jdc.ATable where CheckNullCompare(j.Name == filter) select j;

I might try to get to this over the weekend- I'm still not terribly clear on how I'd mix this, since it is sort of a mix of runtime values and expressions. More to come...
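As a first cut, the helper could do the null check at query-construction time and bake the result in as a typed constant- a constant null should get translated to an IS NULL by the provider (that last part is an assumption I haven't verified against Beta2). A sketch, with my own names:

```csharp
using System;
using System.Linq.Expressions;

public static class NullCompare
{
    // Builds "x => selector(x) == value". When value is null, the comparison
    // is against a typed null *constant* rather than a parameter, which the
    // SQL translator can turn into IS NULL instead of "= @p" (assumption).
    public static Expression<Func<T, bool>> EqualsOrIsNull<T>(
        Expression<Func<T, string>> selector, string value)
    {
        var body = Expression.Equal(
            selector.Body,
            Expression.Constant(value, typeof(string)));
        return Expression.Lambda<Func<T, bool>>(body, selector.Parameters);
    }
}
```

Usage would look something like `var query = jdc.ATables.Where(NullCompare.EqualsOrIsNull<ATable>(j => j.Name, filter));`- the catch being that the predicate is baked per-filter-value, so the query has to be rebuilt when the filter changes.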

Monday, July 30, 2007

Orcas Beta2 : the good!

I've spent a number of the last posts mostly griping about what's broken in Orcas Beta2. I figured I'd change the tone a bit and talk about some of the new goodies that I like!

- WebHttpBinding - This thing is going to rock hard when the bugs get squeezed out. To be fair, it hasn't had nearly the soak time of the other WCF bindings, either. In general, I'm really impressed with the combination of WCF + WebHttpBinding for POX and REST-style messaging. I love using the same service implementation to handle server-to-server and client-server communication, without having to compromise performance or code to a "least common denominator". Well, I'm almost there- I still end up building AJAX-friendly versions of most of my service contracts, at least the ones that accept HTTP GET. I don't think serializing a DataContract instance as JSON on the query string was what the REST-afarians had in mind, and I'm in the "explicit MessageContract" minority, so I don't have much choice.

- New IDE goodies, refactors, cleanups

  • "Organize Using"- nicely handles one of my little pet peeves: sloppy/dead "using" statements. I see this as a long-needed tool for cleaning up after the oh-so-useful-but-mess-making "Resolve Using" command.
  • Multi-targeting- so I can be sure at compile-time that I haven't inadvertently introduced 3.5 dependencies in our 2.0/3.0 only projects
  • Intellisense improvements- Ctrl-for-transparency, showing both member docs and intellisense at once, Javascript intellisense(!)

- Extension methods. This one's gonna be tough not to abuse, but lemme tell you: hanging .HasContent() on every string, null or not (wrapper over ! String.IsNullOrEmpty()) is one really nice bit o' syntactic sugar...

- LINQ- I'm in love. LINQ to SQL is coming along nicely on the march to RTM, and does just about everything I need right now. Sad to see DataShape get hidden back away under the LINQ to SQL umbrella as DataLoadOptions in B2 (could be useful for other stuff, which I'm sure was the initial intent before it was locked down to SQL Server only), but oh well. LINQ to Objects is another one that'll be tough not to abuse- I already have a couple of places where we're joining LINQ to SQL and LINQ to Objects-based queries together to present a unified log and history view across multiple service tiers. I'm excited to see where LINQ takes us in the next few years- both in MS-built query providers and third party stuff. I really hope to see LINQ-enabled providers over WMI and LDAP from Microsoft soon. The lack of the latter (and yes, I'm aware of Bart DeSmet's work in this area, but it's not ready for prime-time) was the final nail in the coffin for ADAM on our project. Implementing our user store from scratch in SQL Server was much more palatable since LINQ to SQL removed enough of the impedance mismatch- it was just going to be too much double-definition and code->directory sync with ADAM- we're in a bit of a hurry. If I had a lot more time on my hands, I'd jump in and pick up the ball on LINQ to LDAP- I think it's a very interesting project, and Bart's got a great start on it.
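That HasContent() extension mentioned above, for the curious, is only a few lines- a sketch rather than our exact code:

```csharp
using System;

public static class StringExtensions
{
    // Wrapper over !String.IsNullOrEmpty(). Since an extension method is
    // compiled to a plain static call, invoking it on a null string is safe.
    public static bool HasContent(this string s)
    {
        return !String.IsNullOrEmpty(s);
    }
}
```

So `((string)null).HasContent()` just returns false instead of throwing a NullReferenceException- that's the whole trick.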

Anyway, so far, so good- I haven't hit any issues I wasn't able to work around, which is good- we're planning on going live (or at least beta) with the Orcas Beta2 bits.

WebHttpBinding Streaming in B2

Yikes. The following statement from the .NET 3.5 B2 "known issues" made me a little nervous:

For WCF HTTP streaming, the stream that is returned to the client is not the same size as the stream size requested in some stress situations. The service can send extra bytes to the client side due to a known racing issue in the Visual Studio 2008 product code.
To resolve this issue:
Avoid using the HTTP streaming feature for WCF.

Um, I need HTTP streaming- my stress tests max out my server memory if I do things buffered, and the stress tests don't even simulate slow clients yet. I set up a test to try and reproduce the behavior (just to see how bad it is). I wasn't able to see that behavior, but I did see another nasty one. If a client disconnects from a webHttpBinding service while a response is being streamed, the HttpListener throws an unhandled exception on an I/O completion thread. In a self-hosted scenario (like mine), unhandled exceptions take the process down, which effectively means part of my app is down until someone brings it back up!

Luckily, the legacy .NET 1.1 unhandled exception backstop is still available via configuration, so that will keep my process alive until they get the problem fixed.
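For reference, that backstop goes in the service host's .exe.config:

```xml
<configuration>
  <runtime>
    <!-- Revert to .NET 1.1 behavior: an unhandled exception on a
         non-main thread no longer tears down the whole process -->
    <legacyUnhandledExceptionPolicy enabled="1" />
  </runtime>
</configuration>
```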

Ahh, the fun of developing on the bleeding edge.

Saturday, July 28, 2007

svcutil.exe busted in Beta2

This is mentioned in the "known issues", but in a backhanded way that I didn't notice 'til I'd already filed a bug ("Running some WCF-based project templates results in a crash of svcutil.exe crashing due to a signing issue"). If you need to use svcutil to generate client proxy code in VS2008 Beta2, you'll have to apply a little hack. Seems they built it for delay signing, but never actually signed it- if you run it, you get the error quoted at the bottom of the page. You can work around this by preventing strong name verification for the svcutil assembly. Run:

sn -Vr "C:\Program Files\Microsoft SDKs\Windows\v6.0A\Bin\svcutil.exe"

And the error message:

Unhandled Exception: System.IO.FileLoadException: Could not load file or assembly 'svcutil, Version=, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' or one of its dependencies. Strong name validation failed. (Exception from HRESULT: 0x8013141A)
File name: 'svcutil, Version=, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' ---> System.Security.SecurityException: Strong name validation failed. (Exception from HRESULT: 0x8013141A)
The Zone of the assembly that failed was:
MyComputer

Getting Orcas Beta1 solutions working with Beta2

Minor issue- .sln files created by Orcas Beta1 do nothing when double-clicked in Explorer. Hovering over the icon in Explorer shows "Version: (unrecognized version)". Everything works fine if you just open the solution file in Visual Studio.

Turns out the trick is in the comment on the third line of the solution file. For solutions created by Beta1 (and I'm assuming, anything older), the third line looks like:

# Visual Studio Codename Orcas

In Beta2+ solutions, it is:

# Visual Studio 2008

Change the line in notepad (or whatever), and all is well- you can open the solution by double-clicking in Explorer again.

LINQ to SQL enum mapping

UPDATE: Figured this one out! I was always using a fully-qualified type name on my enums- for this to work, you have to prepend the global:: prefix. Unqualified type names work fine, as long as the type is visible from the generated LINQ entity class namespace. I filed this as a bug...


Has anyone successfully managed to use the Orcas Beta2 (sorry, VS2008) bits to do first-class enum mapping onto an int column? My Beta1 code is doing the "define another property that does the mapping from int->enum and back" trick, but I'd like to let the mapper do it for me now that it's supposedly supported. I've got my enum defined, but when I set the 'Type' field on the column properties in the O/R designer to the FQ name of the enum, I get the following error:

DBML1005: Mapping between DbType 'Int' and Type 'AnApp.AnEnum' in Column 'EnumDBColumn' of Type 'ATable' is not supported.

I know it's finding the type- I get a different error if I use a bogus type name. Grr. I'm hoping I'm just doing something dumb- I really want this to work!
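Following the UPDATE above, the working designer setting corresponds to something like this in the .dbml (attributes abbreviated; column and table names taken from the error message):

```xml
<Column Name="EnumDBColumn" Member="EnumDBColumn"
        Type="global::AnApp.AnEnum" DbType="Int NOT NULL" CanBeNull="false" />
```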

Friday, July 27, 2007

WebGet and WebInvoke rock!

I'm loving the new WebGet and WebInvoke programming model in WCF- the URITemplate binding style is so natural. Just poke up a request URI with some placeholders in it, and WCF magically maps it to the right operation on my contract and plugs in the arg values... It'd be even cooler if it supported something akin to WPF binding paths (allowing you to bind querystring values to any property on any object), but I'm still thrilled with what we have, especially how much better it is than the Beta1 stuff!

This being beta software, however, there are a few rough edges. Unfortunately, WebGet operations don't support nullables. That means I have to botch up my existing contracts that have "optional" int typed args to take strings (so I can distinguish missing values from default values- I get this with my enums already because I always define a 0 "Undefined" value). If you try to use a nullable in a WebGet OperationContract, you'll end up with:

Operation 'Op' in contract 'Service' has a query variable named 'nv' of type 'System.Nullable`1[System.Int32]', but type 'System.Nullable`1[System.Int32]' is not convertible by 'QueryStringConverter'. Variables for UriTemplate query values must have types that can be converted by 'QueryStringConverter'.
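So for now, the AJAX-facing contracts take strings and coerce by hand- something along these lines (the contract shape here is hypothetical):

```csharp
[ServiceContract]
public interface IService
{
    [OperationContract]
    [WebGet(UriTemplate = "Op?nv={nv}")]
    int Op(string nv); // would be "int? nv" if QueryStringConverter allowed it
}

// In the implementation, distinguish "missing" from "default" by hand:
// int? value = String.IsNullOrEmpty(nv) ? (int?)null : Int32.Parse(nv);
```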

I've filed a suggestion to see if they can add support for nullables. Go vote for it! I guess I'm the Nullable b*tch- I went and counted: over the years, I've filed six different bugs on various .NET APIs' support for nullables, and blogged about at least one of them.


The situation is worse when using enableWebScript with the JSON bits. The JsonQueryStringConverter says it supports Nullables (eg, its CanConvert method returns true when a Nullable comes in), but it barfs if the value is actually null when you call ConvertStringToValue. This is even worse- my service starts up fine, but fails with a NullReferenceException in the JSON deserializer the first time I omit a nullable value from my query string. Ick. Bug filed.

WebServiceHost woes in Orcas Beta2

Been playing around (actually, trying to get real work done) with shiny new Orcas Beta2 bits. I've been working on Orcas since the day Beta1 came out, and I've been eagerly awaiting the integration of WebGet/WebInvoke and full-fledged JSON serialization that Steve Maine's been talking about for Beta2. There are a couple of weirdnesses:

Steve suggests the use of WebServiceHost when hosting a WCF service that uses the new JSON stuff and WebGet/WebInvoke, because it wires up the right behaviors for you and checks a few other constraints (like only supporting one contract per endpoint address)... Turns out it also creates some new problems.

The enableWebScript endpoint behavior is what turns on all the JSON goodies. By default, WebServiceHost wires up a POX-emitting WebHttpBehavior (of which WebScriptEnablingBehavior is a subclass) on every webHttpBinding endpoint. It doesn't check first to see if there's already a webHttp behavior there. Looks like this means that if you want JSON serialization in B2, you have to use a vanilla ServiceHost and wire up the enableWebScript behavior yourself. Otherwise, if you try to use a DataContract on a WebGet operation, you'll get the following InvalidOperationException on the .Open() call:

"Operation 'Bla' in contract 'Service' has a query variable named 'v' of type 'BlaDataContract', but type 'BlaDataContract' is not convertible by 'QueryStringConverter'. Variables for UriTemplate query values must have types that can be converted by 'QueryStringConverter'. Unfortunately, WebServiceHost hardwires webHttpBehavior on all webHttpBinding endpoints."

It looks like this won't be a problem forever: Reflector shows a WebScriptServiceHost that automatically plugs in enableWebScript (thanks, Lutz!), but it's marked internal. IIS users can probably use the WebScriptServiceHostFactory in a .svc file (since that's public), but I'll stick to the standard ServiceHost for now and just make sure I follow all the rules.
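The manual wiring amounts to just a few lines- roughly this (addresses and type names are placeholders):

```csharp
ServiceHost host = new ServiceHost(typeof(BlaService),
    new Uri("http://localhost:8081/"));

ServiceEndpoint ep = host.AddServiceEndpoint(
    typeof(IBlaService), new WebHttpBinding(), "");

// This is what enableWebScript does in config: JSON serialization plus
// ASP.NET AJAX compatibility, instead of the default POX WebHttpBehavior.
ep.Behaviors.Add(new WebScriptEnablingBehavior());

host.Open();
```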

Friday, July 6, 2007

HTTP.SYS and HTTP port sharing pain

Part of the application I'm developing at work right now will need to serve a metric buttload of images and metadata to a lot of clients concurrently. I'm using Orcas Beta1 + WCF to build everything (gotta love that bleeding edge), and I'm blundering into a whole host of different security issues. The big one is XmlHttpRequest just plain not supporting the "same host, different port" scenario, regardless of the zone security settings. We're currently doing our development on Windows XP, though the app will deploy on Server 2003. For performance reasons, the mailbox servers won't run in IIS with the rest of the app- they run in their own service processes. That's fine on Server 2003- HTTP.SYS lets us divvy up port 80 between IIS and our app.

The problem comes on our XP development machines. Neither the built-in VS webserver nor IIS 5.1 support HTTP.SYS- they hog their target ports all to themselves, which means the mailbox services have to run on different ports from the "main" webserver. One of those services handles JSON-encoded AJAX requests via XmlHttpRequest- this causes us problems since all the page code comes from localhost:80 (or whatever port VS chooses), but we're trying to fetch data from the JSON service at localhost:8081. XmlHttpRequest just plain doesn't support this (and is documented as such). Bummer.

So I'm stuck with one of the following solutions:

1. Set up Apache to listen as a reverse-proxy on yet another port and "merge" the URL space of the different services together (ie, Apache maps localhost:8082/Web to Cassini at localhost:8080, localhost:8082/MailData to my JSON service, and localhost:8082/MailImages to the image service- the latter two both listening on 8081, since WCF uses HTTP.SYS to share ports if it can). This takes care of the cross-port problems, but it's a hassle to show a mere mortal dev how to set up multiple instances of the app and actually have them work.

2. Upgrade all our dev boxes to Vista and develop against IIS7 instead of Cassini. Tempting, but I don't have time to repave right now, and doing an "upgrade" on the OS feels dirty.

3. Upgrade our dev boxes to Server 2003. Again, tempting, but not very cost-effective, and the same hurdles exist from #2.

4. Just host the services in IIS/Cassini for development (via WCF .svc files). I don't really like this option, especially since the production servers will likely be redirecting to a different host (same domain) for images and JSON mail data- too many things can go wrong when you develop in the same URL space and don't deploy that way (though we are contemplating hiding the different hosts behind a content load balancer, which would make this option a little more palatable).

5. Try and pull a switcheroo on the VS webserver to a version that supports HTTP.SYS (I swear that an HTTP.SYS-based WebDev.Webserver2 used to ship with VS2005- what happened to it?). This project looks like it allows exactly that, though I had to make a change to it, since it was always binding to '/', regardless of the vroot passed at startup- not very conducive to sharing ports... Oh well- at least he's kind enough to provide the source!

I'm currently leaning toward this last one. It's a one-time setup on every dev's box to "replace" the built-in VS webserver- then we just set up our project defaults to use the same port as our default configs. If someone wants to set up another instance, they can change that instance's ports in WCF config and VS debug config.

In all this, it's surprising to me that Orcas didn't ship with an HTTP.SYS-aware webserver... Oh well- I'll figure something out.

Friday, June 22, 2007

"Where did THAT term come from?"

My wife and I are all about trivia. We're faithful Jeopardy! watchers (thanks, TiVo!) and we've recently started playing PubQuiz a bit (our team got schooled last night on "board games" when asked to fill in a blank Monopoly board from memory- who still plays Monopoly???). We have a PC hooked up to our main TV- its primary purpose seems to be fulfilling our trivia jones.

"What other show have we seen that guy in?" - "What year did that come out?" - "'Bob's your uncle?' What the heck is that?"

In addition to the usual standbys (IMDB, Google, etc), I recently found a site that's dedicated to the origins of words and phrases. Between their official list and their easily-searched forums, I have yet to throw a phrase at them that they didn't have. Very satisfying indeed for a family of trivia nerds.

Monday, June 18, 2007

Nasty .NET bug

Asynchronous I/O is a scary thing. It requires a carefully choreographed dance between potentially many different threads, all the way from the top of the user-mode stack down to the kernel level. I have a lot of respect for the crew that implements this stuff. It's complicated enough just to use it correctly at the app level- I can't imagine how much worse it is down in the kernel guts.

That said, I think I found a nasty bug in .NET's asynchronous file handling support this weekend. I have a service that accepts streamed file uploads and writes them to disk. In order to scale better, I'm using WCF's AsyncPattern ServiceContracts- this allows me to yield the server's request thread when it's blocked on an external I/O operation, leaving it free to start servicing another request. This, of course, necessitates using asynchronous I/O for both reading the stream from the client as well as writing the received bits to disk. It appears that there's a bug in the Asynchronous FileStream's implementation of BeginWrite- when multiple request threads are doing a BeginWrite on a "cold" application (eg, one that's just started up, most code is un-JITted, etc), one or more of the BeginWrite calls will hang and never come back. Not a friendly thing for an asynchronous operation to do. If I make a single request, then pile a whole bunch of concurrent requests on right after, everything is fine. My current workaround is to make a single BeginWrite call to a dummy tempfile opened for async (FileOptions.Asynchronous) before I start the ServiceHost. Stupid, but it works like a charm.
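The warm-up itself is trivial- a sketch along the lines of what I'm doing (not the exact production code):

```csharp
using System;
using System.IO;

static class AsyncIoWarmup
{
    // Force the async FileStream write path to be JITted/initialized by a
    // single throwaway write, before concurrent requests can race on it.
    public static void Run()
    {
        string temp = Path.GetTempFileName();
        try
        {
            using (FileStream fs = new FileStream(temp, FileMode.Create,
                FileAccess.Write, FileShare.None, 4096, FileOptions.Asynchronous))
            {
                byte[] dummy = new byte[] { 0 };
                IAsyncResult ar = fs.BeginWrite(dummy, 0, dummy.Length, null, null);
                fs.EndWrite(ar);
            }
        }
        finally
        {
            File.Delete(temp);
        }
    }
}
```

Call `AsyncIoWarmup.Run()` once before `ServiceHost.Open()` and the hang goes away- for now, anyway.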

Here's a link to the bug I filed, for those interested in that kind of thing (includes sample repro code). Go vote for it, if you like reliable async I/O. Hopefully there's a "silly user" answer for this, but I'm not holding my breath.

Monday, May 14, 2007

Lessons learned from the vacation-from-h-e-double-hockey-sticks

Here are some lessons learned (or confirmed) by our recent trip:

- Don't carry every bit of plastic you have (already did this one- I'd recently "weeded" my wallet down to three credit cards from seven).

- Keep the set of unique cards between you and your spouse's wallets greater than zero (already had this one, it saved our bacon).

- Carry one or two cards on you, and keep one in your bag back at the hotel.

- Keep more cash on you (and back in your bag) than you normally do (this one also saved our bacon).

- Keep a photocopy of some kind of ID in your bag back at the hotel (I always have a couple photocopies of my passport stashed in various places when traveling internationally, but wasn't doing it domestic).

- Confirm with the operator the last 4 digits of the credit card you're canceling, before it actually gets canceled.

- Figure out the travel logistics of your trip ahead of time, and check into mass transit options for airport->hotel travel. In this case, there was a reasonable MetroRail ride from LAX to Anaheim that would've saved us some trouble and expense with returning our rental car.

- Don't rely on cabs unless you know how expensive they are in the city you're visiting.

- Don't rely on a hotel shuttle unless you can verify its schedule and frequency ahead of time.

- Even though it says to (why?), don't carry your long-term parking ticket with you.

The vacation-from-h-e-double-hockey-sticks (conclusion)

Continuing yesterday's post...

So we're kinda stuck, no ID for me, no working credit cards, but we have enough cash to enjoy Friday at Disneyland. We had purchased one-day park hopper passes online. I would've just bought main park passes, but they don't sell those online, and we didn't want to stand in line for tickets. Plus, there were a few things I wouldn't mind seeing at California Adventure... Fine, an extra $40 for two tickets. We figured we'd save a few bucks on parking by taking our hotel's free shuttle. On the shuttle ride, I got ahold of a real First Tech agent, who got us all fixed up (reactivated the dead card and informed me that all their debit cards have different numbers, even on the same account, thus Jenny's still works- woohoo!) Get off the hotel shuttle, and ah crap, we forgot the printed passes back in the room. We didn't realize it 'til the bus had left, not to return for 3 hours. Crap. Disney has a sophisticated online system with a unique ID on the pass (that's live-linked at the gate to prevent multiple usages)- guest services should be able to look us up and reprint one, right? Wrong. "Sorry sir, that's your ticket- we have no way to replace them". That's exceedingly lame, considering the mechanics of such a system would easily allow for that (and that it's probably a daily occurrence). OK, so a quick 10-mile, $60 roundtrip cab-ride later, we have our tickets.

No major mishaps at Disneyland- good times.

Remember how we'd save $15 on parking by taking the hotel shuttle? Yeah... The night clerk told me our return shuttle left Disney at 5:30- but apparently it was really 5:20. We showed up at the stop at 5:24, and the shuttle was gone- next one came at 9:20. We were all basically limping, so we took a cab back- another $25. That $15 we saved on parking sure was swell!

Saturday was relatively uneventful- we took MetroRail from Norwalk into Hollywood (a long ride with two train switches) and poked around- visited the Hollywood Museum and checked out a bunch of the other downtown Hollywood tourist haunts. The museum came highly recommended by Fodor's, but it seemed a little tired to me. There was a decent mix of old and new movie memorabilia (lots of costumes and props from various movies), but maybe I'm just not enough into old movies to really dig it. Another moment of panic at the front door when the little old lady working the admissions booth said our newly-reactivated credit card had come back as cancelled, but after watching her futz with the card reader and tell our friends the same thing about their card, I'm pretty sure she managed to hose it up somehow. Confirmed later when we bought dinner with that card. Whew. I was also sad that another museum listed in Fodor's we tried to see was gone- it was more of a "hands-on" museum that covered a lot of filmmaking equipment and techniques (lighting, foley, sound, etc).

Getting reservations in northern Orange County on Saturday night was tricky- after a number of tries we landed at J.T. Schmid's in Anaheim- looked close on the map, but was 12 miles from the hotel. Seared ahi appetizer was awesome, dinner was meh, dessert was great...

Took the car back to LAX, went through the inspection, all's well. As the guy was signing the paper, I made an offhand comment about the bondo and crappy repaint job, and he decided to fill out a damage report on it. Oy- shoulda kept my mouth shut! Luckily, it was already in the car's history report when he looked it up, but I'm half-expecting them to come back and try to bill me for it or something...

Flew back into PDX, went out to the car, and d'oh! The long-term parking ticket was in my wallet! When we left, I was going to leave it in the car, but my wife pointed out that it says "Take this ticket with you". I couldn't figure out why, but I just did it... Stupid! Leave it in the car next time. I figured they'd bill us some exorbitant minimum amount, but they only billed us for four days. Apparently they either inventory the lot every day or they OCR the license plates off a security cam at the entrance- somehow they looked up our license number by the day we got in and just billed us full price for four days. Not bad...

Needless to say, I was quite happy to get home. It wasn't exactly a relaxing vacation, but we did have some good times with our friends, and I got my rollercoaster fix in for awhile.

Sunday, May 13, 2007

The vacation from h-e-double-hockey-sticks (day 1 and 2)

OK, so it wasn't QUITE that bad. We actually had a lot of fun on our recent LA vacation. Visited with friends, rode lots of throwup rides at Magic Mountain and Disneyland, and bummed around LA a bit. Perhaps a better description would be "The vacation where lots of crap went wrong and we wasted a lot of money on dumb stuff..."

It all started with the United web check-in. I went to print boarding passes the night before- mine was fine, but my wife's said "See a United Ticket Agent" where the "Check in" button should be. So I called United customer service and got Patel in India... He couldn't help us and didn't know what the problem was- we'd have to see a United ticket agent at the airport. Flash forward to 5am at PDX, where we're praying the "self check-in" will work. Nope. Anyone who's flown United knows what their morning check-in lines look like (why is that?). We never check bags just to avoid those lines, but we still stood in line for a good 45 minutes. The ticket agent checked us in with no problems, but offered no explanation as to what had happened. Grr.

Next, the rental car. I'm a die-hard Enterprise customer- they've taken fantastic care of me for the last 12 years. Alas, LAX Enterprise was booked solid. No problem- I'd made a reservation at an off-airport Enterprise that was about 2 miles from the airport. Quick 'n cheap cab ride, right? Wrong. That's a $25 cab ride in LA- a $17.50 minimum applies to all fares from LAX, plus a $2.50 LAX surcharge on top, plus tip (no sense in stiffing the cabbie- it just makes them mean). Yow. Oh well- the off-airport rental was half the airport price anyway, and we'd only planned to rent for one day to drive to Magic Mountain and back. Oh, except the off-airport rental place is only open 9-6, which totally screwed up the next day's plans. They're also not open Sundays, so we couldn't return the car the morning we came back. Oy. So we just extended the reservation through the weekend and set up an LAX dropoff (an additional $40 fee). So that's $10 a day for hotel parking and $35 a day in rental fees for the car to just sit in the lot.

I upgraded to a V6 Nissan Altima- I wanted a bit more merging power for psycho LA freeway driving than the little Kia they were offering. Later, I realized the one we got had been in a wreck and repaired very poorly- the whole left side was bondo (more on that later). The OEM tires had also been replaced (likely after the wreck) with really bad Continental tires- it peeled out with the slightest throttle input.

To Magic Mountain. Ahh, Magic Mountain- the best birthday present west of the Mississippi for a rollercoaster fanatic... We hit it on a Thursday morning, so the lines were pretty short. My wife was nervous- she's not a big fan of heights but bravely promised she'd ride with me. Long story short, I ruined my riding partner by taking her on the freakiest ride first (The "X", starting off with a 215-foot facing-the-ground drop at ~80mph). She was none too thrilled. So I was riding the rest of the day alone- much less fun.

After lunch, I rode Tatsu. It rocked. Then, while loading up on DejaVu, I noticed my left pocket was unzipped (shorts with zipper pockets are a must for roller-coastering- if you remember to keep the pockets zipped). Momentary panic, furious pocket-patting, followed by the realization that my wallet was gone. Tatsu has you riding horizontally in a flying motion, unenclosed. It also flips you around a lot- very fun, but also great at sending projectiles hurtling from your pockets at high speed. I'm surprised all I lost was my wallet- I had my phone, cash clip and iPod in the same pocket. Rode it again, trying to look straight down and see my wallet, but Tatsu covers 1/3 of the park, weaves between other rides and over a number of grassy areas and walkways. Oh yeah, and half the time you're being twisted around the track, making it tough to look down. Filled out the lost article paperwork at guest services (behind 5 other people who lost stuff on the same ride- they say it's 10x worse for losses than the next closest) and continued with the day, a little less exuberant at the prospect of replacing my wallet's contents. Waited around for the end-of-day "trackwalk" where employees look for lost stuff, but they didn't find it. Oh yeah, did I mention all this happened on my birthday?

Later that evening, I started calling the credit card companies to shut off the cards, just in case they happened to be in someone's possession. Between my wife's wallet and my own, only one card was unique- the First Tech VISA she had when we got married (I later got a copy of it, but still under her account). So we figured we'd just cancel all the others and use that one... I called the First Tech lost card line (a service bureau after hours- all they can do is cancel cards). I gave them my name and address. Tappity tap tap... "I see a debit card and a VISA- you lost them both?"


Tappity tap...

"OK sir, I've canceled your VISA ending in 1234".

1234? "Wait, no, that's the one card that we didn't lose- that shouldn't be under my name"

"Sorry sir, once the card's been canceled, I can't reactivate it"


Now we have no working credit cards- luckily we had cash or we would've been scamming our friends all weekend. Still don't know what we're going to do about the hotel bill and car rental- we don't have THAT much cash. I also realize that I'll need an ID to get on the plane Sunday morning...

Will the Knott's Berry Farm hotel goons break the Davis family's legs when they can't pay the bill? Will Matt get on the plane? Will they make it home? Stay tuned for the next exciting installment.

Monday, April 30, 2007

Made it back...

Sorry to disappoint, Josh, but we made it back alive from our campout. The boys seemed to have a pretty good time with the actual camping, and other than a monster sunburn for me (left the sunscreen and hat in the tent, d'oh), we made it back unscathed. The camporee activities seemed fairly disorganized, though- there was a lot of "uh, what are we supposed to do?" at several of the stations. The whole thing was set up as an orienteering course- good practice for young scouts (each patrol got a different starting point and path through the stations). The big problem was that the fire-building station had been moved (without notice) after the maps and directions were printed. That was our first station, so we blundered around the park for nearly an hour (with the adults questioning our own map-reading skills after awhile) before we saw the smoke. By that point, everything was jammed up, there was a whole pile of patrols waiting (with no evident sign-in or ordering system, so lots of boys milling around making trouble), and everybody was pretty well frustrated right out of the gate.

I think we were actually pretty well prepared for the camping- mealtimes went quite well, and the patio firepit was a big hit for foil dinners and s'mores. Nice weather helped a lot... I'm glad I was able to borrow a friend's trailer to haul all the gear, but I'd like to work on getting the pile a little more compact next time.

Maybe I can do this... Might not matter- I think I'm going to have one scout in my 11-year-old patrol next year unless a bunch of 10-year-olds move in between now and then.

Friday, April 27, 2007

Getting ready to camp...

I've mentioned in previous posts that I wasn't a very outdoorsy kid. Someone must have an odd sense of humor- awhile back, I was asked to be an Assistant Scoutmaster to a patrol of 11-year-old Boy Scouts (and "Assistant" is a misnomer- I don't currently have any dedicated help). Anyway, we're going on our first campout this weekend, and I'm a little apprehensive. My family's idea of camping when I was a kid was "drag the camper out to Detroit Lake", and the last time we did that, I was probably 8. I got a one-night campout during the "outdoor leader skills" training back in November, but other than that, this is my first "real" campout... Luckily, a bunch of the kids' dads are coming along (some of whom are experienced scout leaders and outdoorsmen), it's a camporee (lots of other scouts around), and the weather's supposed to be good. Still, I've been running around like a headless chicken for the past few days trying to scrape all the equipment together and figure out what I'm doing. My dad and our committee chair have been very helpful and gracious about my ineptitude, though.

Anyway, if I never post again, it's probably because I didn't come back. ;^) Wish me luck. I'll probably post some choice pictures if I do make it back.

Sunday, April 22, 2007

Reality TV at work

I'm assuming this is OK to post, since it's already been blogged about by someone with readership much higher than mine. My employer has been selected as the possible subject of a documentary/reality TV show about day-to-day life at a startup. The film crew showed up on Friday at the warehouse to do some test filming, and if they can actually sell the show, they'd probably be filming us for ten weeks sometime this summer.

I've got rather mixed feelings about this.

First- I can't imagine that we're that interesting to watch. I guess maybe with enough time compression, our work could be made to look interesting, and of course it'd be interesting to have footage to look back on if we "made it big". The CEO's certainly a character, but as far as I've seen, most of the interesting personalities in our company are the folks getting paid ten bucks an hour to scan mail. There just isn't that much conflict on the engineering side, and I really hope it stays that way (though the thought of manufacturing a little Springer-esque conflict for the cameras is intriguing).

Second- having cameras around all day seems like it'd be a big distraction. Constantly gating your words, making sure you don't say something stupid, confidential, etc. Plus there's always the worry that others might really amp up small conflicts just for the sake of good TV.

I guess I'm hoping that if the project does happen, they'll find the engineering group too boring to follow around all day. Marketing people are much more exciting. Right?

More to come- maybe...

Thursday, April 19, 2007

That ADD streak strikes again

I've said before that I must have at least some ADD tendencies, if not a full-blown case. I'm a guy with pretty varying interests. If I'm not involved in several different "extracurricular" activities, I get bored. You'd think I would learn. My life seems to follow this cycle: I'm bored, so I sign up for something to do (an activity, group, help a friend with X, etc). Then I'll get another request and sign up for something else, and so on (hm, Thursdays are free- sure, I can do that)... Everything will be humming along smoothly for awhile- the calendar's packed, but minimal conflicts. Then all at once, there will be a period where all those different activities require something from me at the same time. The stress-o-meter goes through the roof, I curse myself for taking on too much, slog through 'til everything's done (often with spikes in my grump factor along the way), then take a much needed break. Lather, rinse, repeat.

I'm right in the middle of that crest where everything is beginning to happen at once. I really enjoy most of the things on the list individually, but when the "context switching" gets out of hand, I don't really enjoy any of them. It gets to the point where I just want to pull the covers over my head and sleep (wake me up when it's over!).

Here's my current list:
  • Being married (best keep this one first)
  • Working at a startup where there's enough work to keep a small army busy (and we don't have a small army).
  • Editing my dad's book on steelheading and the accompanying DVD (the publisher's been very patient, but the clock's ticking)
  • Trying to prep my Boy Scouts for their first campout next weekend and keep them on track for their advancements
  • Playing horn in the Oregon Symphonic Band (a regular gig- one of my favorites)
  • Hacking on the Optimus Mini Three SideShow driver
  • Helping a friend with some microcontroller programming
  • Moving my bathroom remodel along its asymptotic approach to completeness
  • Singing in the choir at church
  • Working on some family history research
  • Subbing in the Oregon Sinfonietta

That's just the "active" list- then there are all those little things that I'd really like to do (that will probably one day end up on the "things I'm cursing" list), like:

  • Finishing a bachelor's degree. I've been poking at finishing one of my three majors at OIT (a class here, two classes there), but I haven't touched it for three terms now. I'm hoping I don't get hosed by credits expiring- I'm only left with courses to challenge and a few GEs that didn't transfer right from Purdue. Won't make any difference professionally, but probably a good thing to finish.
  • Learning piano. I took piano lessons for a year in college and loved it, but never put in the practice time to make it worthwhile. Dropped at the end of one of the aforementioned cycles.
  • Learning to dance. A guilty pleasure my wife and I share is watching Dancing with the Stars. I know. I'm a tool. Ballroom and swing always fascinated me, but I've never taken the time to really get beyond shuffling and mumbling counts to myself.
  • Landscaping my yard. I bought my house over six years ago and the yard was pretty lame. Every year I've been "meaning" to do something more with it than go mow the weeds every few weeks. Apologies to my neighbors (though I'm not the worst offender here in that area- maybe I'd get to it faster if I was).

That's probably about a tenth of my "someday" list. New things show up all the time. Maybe if I'm smart, I'll read my old blog entries and remind myself not to sign up for so many things next time.

In the meantime, somebody wake me up when it's over!

Saturday, April 14, 2007

On the kindness of strangers...

I got my "break" into the software industry partially due to the kindness of a couple of strangers a number of years ago. It's difficult to say where I'd be now without that break. Maybe I'd be in exactly the same place, maybe I wouldn't. But lately it's gotten me to thinking of ways I could give someone else the same kind of break.

A little background:

Like many American kids, I was a bum in the summers. No school, just sat around watching TV all day, messing with a computer, consuming mass quantities of Vitamin J (aka "junk food"- it's a wonder I'm not diabetic). As soon as I hit eighth grade, though, my dad put a stop to that. We'd just moved to a new subdivision, and many of the neighbors didn't have landscaping yet. The ol' man wasted no time farming me out to do their bidding (I can't remember how many times I heard, "Hey, my kid will put in your yard and sprinkler system!"). Even though the pay was fantastic for a 14 year old kid, as a generally sedentary geek-type, I absolutely hated the work. My raging hay fever didn't help matters, either.

During one of those loathsome work sessions, my neighbor Jake was toiling alongside me in his yard. I think I was bellyaching about how much I hated working outside when he asked what kinds of things I did enjoy doing. Obviously, the computer stuff came up. "Hmm...", he said. "My brother-in-law owns a software company. Maybe you should talk to him- there might be something you two could work out."

A few phone calls later, I was sitting in a room talking with Bob Rasmussen of Rasmussen Software. I don't recall the exact content of that conversation, but it ended with him offering me a job. As is the case at many small companies, my actual role had a pretty fuzzy definition that was difficult to slap a label on. While I was there, I had varying responsibilities for bookkeeping, QA, production script maintenance, janitorial work, debugging, feature development, system administration, shipping, and so on.

At that first job, I learned a lot of great lessons about the software business by being invited to look over Bob's shoulder: the importance of customer service; developing a sense of "code smell" (he would often refer to hacky code as "not kosher," a phrase I continue to use today); formal design and debugging techniques; and so on. It was my first experience being around someone who truly loved their work- a feeling I've strived to replicate ever since (and mostly succeeded). I even got my first experience with being fired, after a stupid mishandling of a batch of customer inquiries. All valuable lessons indeed- they have contributed to my success in this industry on a daily basis, and I'm sure the experience was a contributing factor in my landing an internship with Intel soon after, which was in turn a jumping-off point for many other adventures. A heartfelt thanks, Bob.

The question I struggle with now is: how can I "pay it forward" and give someone else the same kind of opportunity? I've always enjoyed enabling my colleagues on difficult technologies, but that's not really the same kind of "break" that I got. Maybe I'll have a chat with computer instructors at some of the local high schools- it seems like there's always some star-in-the-making that outshines even the instructor and is always "bored" in those classes.

So, dear readers, how did you get your start? Can you pin early success in your field to the actions of a small group of people? And do you have any ideas for me on how I could give someone else the same chance?

Monday, April 2, 2007

Managed UMDF breathes!

It's alive! After a long night of stupid user errors, I have a do-nothing UMDF driver up and running that's completely managed. There's a tiny C shim that loads .NET and my driver assembly, and everything else is managed from that point. I was hoping to get the automatic COM activation support working (for a completely unmanaged-code-free experience), but alas, 'twas not so.

Having crashed the driver several hundred times getting to this point, I'm pretty impressed with the robustness of the UMDF reflector. There's a scary display flash sometimes when a user-mode driver fails, but the system stays up and everything is happy. I only BSOD'd once during the whole ordeal, and given the iffy hardware I'm running on, I can't necessarily attribute it to the UMDF reflector crashing.

Turned out the stack corruption I was experiencing was just a missing QueryInterface call right after I started the framework. I've been used to things just returning the right interface- casting that IUnknown* to an IAppDomainSetup* was, in retrospect, a bad idea. What's weird is that the DDK build environment won't build me a complete PDB for my stuff- I can't see a number of my locals while debugging (including the HRESULT I'm using all over the place). I'm sure it's one of the hundreds of makefile options, but I don't feel like tracking it down.

Anyway, the Optimus Sideshow driver is progressing well. If I get really bored, I might try and release the managed UMDF framework I'm building on its own. I don't know how useful building drivers in managed code really is, but we've found at least one case for it. I'm trying to keep this stuff generic enough that it could be reused without much hassle.

Oy- it's late. Gonna be a bleary-eyed day at the office tomorrow.

Saturday, March 31, 2007

DDK build environment

Maybe I'm just spoiled by years of writing user-mode code in Visual Studio, but I'm less than impressed with the DDK build environment.

It's not very friendly about reporting errors, although it does at least have color-coding support- errors are red, warnings yellow, etc. But in general, the whole thing is just a huge wad of chewing gum and duct tape around nmake and shell scripts. Welcome to 1993! Spaces in your directory names? Ha! You'd better have your DDK installed on the root of your disk, or nothing's gonna work. Same goes for your project path. Want to reference non-DDK headers from C:\Program Files\Microsoft SDKs? Too bad- no matter how you escape it, quote it, or anything else, the spaces just get collapsed. Short of sym-linking the crap out of my drive, using 8.3 filenames (not necessarily stable across machines) or making copies of the entire SDK include tree, it's just not going to fly. This probably wasn't much of an issue before UMDF- kernel drivers have little business referencing user-mode headers outside the DDK. I'm comfortable enough in a command-line environment, but it seems like PowerShell + MSBuild would be a killer combo for this kind of thing, with the side-effect of being much more pluggable into VS for those of us who have left life in vi and the command-line behind us.

DDKBUILD is a nice compromise- it "feels" like a better-integrated experience with Visual Studio (it'll do browse info, errors in the VS errors window, etc)... It's far from perfect, though- limited support for UMDF, and all the smarts still live in the makefile. Wanna add a few new files in VS? They're just going in the bit bucket until you hack 'em into the makefile. Compile or link flag changes? Makefile. Want to see the text of warnings that are causing your build to fail? Gotta dig them out of a build logfile by hand- only errors are reported in VS (even though the default DDK build treats warnings as errors, so warnings are exactly what's failing your build).

I'd probably be a lot less frustrated with this if the Optimus Sideshow driver project were going better. As it is, I'm having trouble getting a .NET UMDF driver up and running- strange stack corruption right after loading the framework. I'm hoping it's something dumb, or something related to the Martian build environment, but I just don't know yet...

Friday, March 30, 2007

Lamenting Nullable support in ADO.NET

I've been using .NET 2.0 heavily since it hit beta 1. However, in my old job, I rarely had need to interact with ADO.NET- that was usually someone else's job. In a startup, pretty much everything is "your job"- I've done more ADO.NET in the last 3 weeks than in the prior six years of using .NET... Gotta say, I'm super-bummed that the data provider model wasn't updated to better support Nullable<T>. Granted, in the days before nullable value types, DBNull was a decent compromise for detecting nulls. But inlining code for dealing with DBNull into otherwise simple expressions leaves me feeling dirty and ashamed.

I understand that dealing with nullable types is a function best handled by the individual providers (especially considering the difference in semantics between backends), and I'm sure support will come with time. However, I'm especially disappointed that the Convert class, normally the "hero to the rescue" around DBNull, was also not updated. There's no ConvertToNullableInt32(...) or anything of the sort- we're left with rolling our own conversion methods or inlining things like:

int? MyNullableInt = reader["anInt"] is DBNull ? (int?)null : (int)reader["anInt"];

Ick. I feel like I need a shower right now just having typed it. I'm hoping LINQ to SQL will support nullables in a more friendly manner (I seem to recall that it does, but I won't find out for sure until Orcas hits beta- don't have the bandwidth to mess with it now).
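For now, rolling your own conversion method at least quarantines the ugliness in one place. Here's a minimal sketch of what I mean- the DbConvert class and method name are my own invention, not anything that ships in the framework:

```csharp
using System;

// A stand-in for the Convert.ToNullableInt32 that doesn't exist.
// (DbConvert is a made-up helper class- nothing like it ships in .NET 2.0.)
static class DbConvert
{
    // Maps DBNull to null; everything else goes through Convert.ToInt32.
    public static int? ToNullableInt32(object value)
    {
        return value is DBNull ? (int?)null : Convert.ToInt32(value);
    }
}
```

With that, the reader code collapses to int? MyNullableInt = DbConvert.ToNullableInt32(reader["anInt"]); and the DBNull check lives in exactly one place.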

I found this recently, which looks similar to what I was considering writing... Still need to peek inside to see if I feel comfortable using it, but there's not really much to it. This covers half of what's missing. The other half is in the parameter type inference model used by individual providers. For instance, the following code breaks:

int? someIntValue = 5;

SqlCeParameter parm = new SqlCeParameter("@MYINT", someIntValue);

since the SqlCeParameter class has a hardcoded list of conversions to its internal datatypes, and Nullable<int> isn't in it. Again, this behavior is understandable- SQL CE has to work on 1.x CLRs, but that doesn't make me feel much better when I live in 2.0.
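The workaround is to unwrap the Nullable<T> yourself before the provider's type inference ever sees it- pass either the underlying value or DBNull.Value. A quick sketch (again, the helper is my own- nothing like it exists in ADO.NET):

```csharp
using System;

// Unwraps a Nullable<T> into something any provider's hardcoded type list
// understands: the underlying value, or DBNull.Value for null.
// (DbParamHelper is a made-up name, not an ADO.NET type.)
static class DbParamHelper
{
    public static object ToDbValue<T>(T? value) where T : struct
    {
        return value.HasValue ? (object)value.Value : DBNull.Value;
    }
}
```

With that, new SqlCeParameter("@MYINT", DbParamHelper.ToDbValue(someIntValue)) should work, since the provider only ever sees a plain int or a DBNull.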

Oh well. I'm off the soapbox for tonight.

Thursday, March 29, 2007

First foray into driver-land

I've been looking for an excuse to write a device driver for a long time now... When the Optimus stuff popped up, it looked like a perfect opportunity.

My first attempts at writing a driver might seem wussy to kernel-mode weenies like Randy (my old college roommate and current USB-head at Microsoft). I've chosen to roll my first driver in UMDF- theoretically meaning I can't BSOD my box when I screw something up, since my driver bits live isolated in a user-mode process instead of rubbing shoulders with busy and important kernel bits. The host process might die, but the kernel should remain stable.

The big goal with this driver is to implement as much as possible in managed code. UMDF is very COM-ish, so that makes life much easier for rolling a .NET wrapper around it. I'm told that the UMDF crew initially started to implement the whole user-mode driver framework in .NET, providing a COM interop layer for folks who wanted to write in unmanaged code. After the "no managed code in Longhorn" mandate, that all went out the window. I've also been told that it might not work (them's fightin' words!), so I'm taking a big leap of faith that I understand enough about user-mode Windows and hosting .NET to make it work.

More to come...

Friday, March 23, 2007

"Don't get strangled while wearing this bag on your head"

I don't know why these language-neutral pictograms of "things not to do" are so funny to me, but they are. My previous favorites are the three found on a can of compressed air showing that it's a bad idea to: spray into your mouth, spray upside-down into your mouth, and spray into your ear (apparently, spraying upside-down in your ear or other bodily orifices is OK- there is no indication to the contrary).

Anyway, found this on the bag around a new Dell laptop- I think it's my new favorite. Looks like they're discouraging being strangled while wearing this bag on your head. Doesn't the angle on the hand look like it belongs to someone else?

Thursday, March 22, 2007

Optimus Mini Three Fun

I can't explain the thrill of writing software to drive hardware- there's just some strange sense of power that comes from making things happen in meatspace by manipulating software. With that in mind, I've been following Scott Hanselman's trials trying to get .NET plugins working on the Optimus Mini Three keyboard with great interest. Back in February, he posted a request for someone to bridge the Optimus' C++ configurator plugin interface to a more friendly managed version. I'm not much of a dancer, but I whipped off a quick COM interop hack and sent it into the ether.

Fast-forward a few weeks- it's turned into a bit of an obsession. Scott liked my hack, kindly wrote it up, and asked for some enhancements. It got my mind spinning on how one could build a really nice managed bridge API (which it looks like others have also started). One three-hour-of-sleep night later, requested enhancements are working and out the door.

A quick aside- jeers to the dev at Art Lebedev Studios who thought changing a bunch of the plugin API constants and calls between release 1.29 and 1.30 of the configurator was a good idea. It's already hard for average folks to write plugins for your cool toy, and even worse when they break hard between minor releases of the product... I understand why they made the changes they did (it clears up some muddiness in the model- one "plugin" per key, period), but it still caused me an extra chunk of work I hadn't planned on when I upgraded at 3am.

I'd originally been told that this hardware was capable of 30fps on each key. Maybe the hardware INSIDE the box, but the USB interface on this thing is actually a Prolific USB->Serial bridge (i.e., the device inside the box speaks RS232, not native USB). Thus, the "keyboard" is not a standard HID device (it shows up as a virtual COM port), and has slow-as-molasses communication (relative to USB's capabilities). You're lucky to get 3fps, and that's on one key. If you're trying to animate all three keys, well, let's just say that "animated" is probably a strong word for what you'll end up with. Some back-of-the-napkin math shows that we're not coming close to maxing out the Prolific device's throughput, either, so it looks like the device itself is the bottleneck.

Anyway, griping aside, this thing is still a blast to play with, more so since Scott loaned me his hardware for a few days... We're currently exploring the possibilities of a Vista SideShow driver for this beast. My old college roommate was involved in a lot of the initial design for UMDF (the basis for SideShow drivers), and I've been looking for an excuse to write a driver for awhile now. This won't be the hardcore experience of writing a kernel-mode driver, but it's a fun and easy start, comparatively (and much less likely to BSOD my box).

More to come...


After a number of years lurking in the blogosphere and keeping my writings and doings inside a firewall, I figure it's high time to dip a toe into the bigger world out there. Here's the two-paragraph history...

I'm a geek's geek, and I started early- learned BASIC at age 5 on a TI-99/4A, started my first BBS at 9 on a pair of Commodore 64s (you'd think I would've started blogging earlier), and got my first job at a real software company my freshman year of high school. Somewhere along the way, I learned there was more to life than ones and zeroes. Despite an aversion to physical activity and the outdoors as a child, at some point I learned to enjoy the natural beauty of the great Northwest and all it has to offer. Perhaps it was the four years in the Midwest at Purdue that really made me appreciate what I'd left behind in Oregon. Since moving back in 2000, I've decided I never want to leave again.

Like many geeks, I've also discovered a bit of a creative side while away from the keyboard. I play horn in the Oregon Symphonic Band, and, after marrying a singer, discovered that I too can sing (and actually enjoy it!). I still can't draw a straight line with a ruler, though. Oh well, creativity comes in many forms. Another outlet for mine is the seemingly endless string of home improvement projects at the Davis household. I must have a small ADD streak in me somewhere; I like to start new projects before old ones are finished, as evidenced by the state of my home and the no less than ten outstanding projects at any given time. My wife is a patient woman for putting up with it.

I'm not completely sure yet what form this blog will take. It's likely to be a mix of personal and work, geekin' and not.