Monday, November 19, 2012

DHCP Failover Breaks with Custom Options

I was really itching to try out the new DHCP Failover goodies in Windows Server 2012. I ran into a couple of weird issues while configuring it- hopefully I can save someone else the trouble.

When I tried to create the partner server relationship and configure failover, I'd get the following error: Configure failover failed. Error: 20010. The specified option does not exist.

We have a few custom scope options defined for our IP phones. Apparently, failover setup won't propagate custom option definitions to the partner server- you have to create them manually. I haven't found this step or error message documented anywhere in the context of failover configuration.

Since we only had one custom option, and I knew what it was, I just manually added it. If you don't know which options are custom and need to be copied over, it's not hard to figure out. In the DHCP snap-in on the primary server, right-click the IPv4 container and choose Set Predefined Options, then scroll through the values in the Option Name dropdown with the keyboard arrows or mouse wheel until you see the Delete button light up (that's a custom value). Hit Edit and copy the values down; then, in the same place on the partner server, hit Add and poke in the custom values. If you have lots of custom options, you can use netsh dhcp or PowerShell to get/set the custom option config- something like the sketch below.
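Here's a minimal PowerShell sketch using the DhcpServer module that ships with Server 2012 (the option ID, names, and server names below are made up for illustration- substitute your own):

# On the primary, dump the definition of the custom option you need to copy:
Get-DhcpServerv4OptionDefinition -ComputerName dhcp1 -OptionId 176

# On the partner, recreate it with the same ID, name, and data type:
Add-DhcpServerv4OptionDefinition -ComputerName dhcp2 -OptionId 176 `
    -Name "IP Phone Config" -Type String -Description "Custom option for our IP phones"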

Once the same set of custom options exists on both servers, you can do Configure Failover as normal on the scopes and it should work fine. The values of any custom options defined under the scopes will sync up without any extra work.

I also had one scope where Configure Failover wasn't available at all. I had imported all my scopes from a 2003 DC a while back, so I'm guessing something else was corrupted in the scope config- just deleting and recreating the scope fixed the problem (it was on a rarely used network, so no big deal; YMMV).

Hope this helps!

Friday, March 2, 2012

Enabling AHCI/RAID on Windows 8 after installation

UPDATE: MS has recently published a KB article on a simpler way to address this. Thanks to commenter Keymapper for the heads up!

Been playing around with Windows 8 Consumer Preview and Windows 8 Server recently. After installing, I needed to enable RAID mode (Intel ICH9R) on one of the machines that was incorrectly configured for legacy IDE mode (why is this the default BIOS setting, Dell?). In Win7, you would just ensure that the Start value for the Intel AHCI/RAID driver is set to 0 in the registry, then flip the switch in the BIOS, and all's well. Under Win8, though, you still end up with the dreaded INACCESSIBLE_BOOT_DEVICE. The answer is simple enough: it turns out they've added a new registry key underneath the driver that you'll need to tweak: StartOverride. I just deleted the entire key, but if you're paranoid, you can probably just set the value named "0" to 0.

So, the full process:

- Enable the driver you need before changing the RAID mode setting in the BIOS: delete the entire StartOverride key (or tweak the value) under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\(DriverNameHere). For Intel controllers, the driver name is usually iaStorV or iaStorSV; others may use storahci. (See the one-liner after this list.)
- Reboot to BIOS setup
- Enable AHCI or RAID mode
- Profit!
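If you'd rather script it than poke around in regedit, something like this from an elevated PowerShell prompt should do the trick (iaStorV is just the example here- substitute whichever driver your controller actually uses):

# Delete the StartOverride key so the driver honors its normal Start value
Remove-Item -Path "HKLM:\SYSTEM\CurrentControlSet\Services\iaStorV\StartOverride" -Recurse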

Monday, January 9, 2012

Windows installation fails with missing CD/DVD driver

I was recently upgrading one of our large storage servers to a newer version of Windows and came across a really strange error during setup: "A required CD/DVD drive device driver is missing". This was really odd, since I've installed with no problems on this same class of hardware many times (and even this exact piece of hardware). Even stranger, the error popped up regardless of whether I was using the real DVD drive in the machine or our KVM's "virtual media" feature (which looks like a USB-connected DVD drive to Windows).

After lots of searching and trying various things, I remembered what was different about this machine: it had about 30 storage LUNs on it for all the various disks. Hitting the "Browse..." button in the driver select dialog confirmed the problem- Windows was helpfully automounting all the LUNs and assigning drive letters to them. Since there were more than 26 volumes, it ran out of drive letters before getting around to mounting the DVD drive. Setup assumes that if it can't find the DVD drive, it must be a driver problem, hence the misleading error message.

I always disable automount on our storage servers anyway (since we mount all those storage LUNs under NTFS folders, not drive letters), but you can't do that for setup without altering the boot image. The solution in this case was to hit Shift-F10 from the setup dialog to get a command prompt, then use diskpart to unassign D: from a storage LUN and reassign it to the DVD drive:
list vol
select vol X
(where X is the volume number in the list with D: assigned)
remove
(removes the drive letter from the storage LUN)
select vol Y
(where Y is the volume number in the list that is your Windows Setup DVD)
assign letter=D
Once the setup DVD has a drive letter, you can close the command prompt and proceed with setup normally.

Monday, July 25, 2011

Dynamic client-side UI with Script#

At Earth Class Mail, we recently shipped a client-side UI written 100% in Script#, and we thought people might be interested in the process we used to get there. This post is just an overview of what we did- we'll supplement with more details in future posts. I should throw out some props to the team before I get too far: while my blog ended up being the home for the results, Matt Clay was the one behind most of the cool stuff that happened here. He recognized early on that Script# represented a new way for us to do things, and drove most of the tooling I describe below. Cliff Bowman was the first to consume all the new tooling and came up with a lot of ideas for improving it.

Script# Background

If you haven't run across Script# before, I'm not surprised. It's a tool built by Nikhil Kothari that compiles C# down to JavaScript. Script# has been around since at least 2007 (probably longer), but has only recently become really convenient to use. It lets you take advantage of many of the benefits of developing in a strongly-typed language like C# (compile-time type-checking, method signature checks, high-level refactoring) while still generating very minimal JS that looks like what you wrote originally. If you'd like more background, Matt and I talked about it recently with Scott Hanselman. You can also visit Nikhil's Script# project home.

Import Libraries

The first thing we had to do was write a couple of import libraries for JS libraries we wanted to use that weren't already in the box. Import libraries are somewhat akin to C header files: they describe the shape (objects, methods, etc.) of the code in the JS library so the C# compiler has something to verify your calls against. An import library does *not* generate any JS at compile-time; it's merely there to help out the C# compiler (along with everything that goes with that- e.g., Intellisense, "Find References"). Script# ships with a jQuery import library, which covered the majority of what we needed. Since we'd previously decided to use some other libraries that didn't already have Script# import libraries (jQuery Mobile, DateBox, JSLinq), we had to whip up import libraries for them- at least for the objects and functions we needed to call from C#. This was a pretty straightforward process- it only took a couple of hours to get what we needed.
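To give a feel for what these look like, here's a minimal sketch of an import class- the type and members are invented for illustration (this isn't the real DateBox API), so check Nikhil's docs for the finer points of the attribute model:

// [Imported] tells the Script# compiler this type already exists in JS,
// so no script is generated for it- it only enables compile-time checking.
[Imported]
[IgnoreNamespace]
public sealed class DateBox
{
    // Members mirror the shape of the underlying JS object; the bodies
    // are never compiled to JS, so dummy return values are fine.
    public DateBox(string elementId)
    {
    }

    public void Open()
    {
    }

    public string SelectedDate
    {
        get { return null; }
    }
}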

Consuming WCF Services

The next challenge was to get jQuery calling our WCF JSON services. Our old JS UI had a hand-rolled service client that we had to keep in sync with all our operations and contracts- maintenance fail! Since our service contracts were already defined in C#, we initially tried to just "code-share" them into the Script# projects, but that proved problematic for a few reasons. First, the contracts were decorated with various attributes that WCF needs; since Script# isn't aware of WCF, these attributes would need to be redeclared or #ifdef'd away to make the code compilable by Script#. It ended up not mattering, though, because our service contract signatures weren't directly usable in the client anyway. XmlHttpRequest and its ilk are async APIs, so our generated service client would have to supply continuation parameters on all service calls (i.e., onSuccess, onError, onCancel), which would render the operation signatures incompatible. Our options were to redeclare the service contract (and implementation) in the native async pattern (so they'd have a continuation parameter declared) or to code-gen an async version of the contract interface for the client. We opted for the code-generation approach, as it allowed for various other niceties (e.g., unified eventing/error-handling, client-side partial members that aren't echoed back to the server), and settled on a hybrid C#/T4 code generator that gets wired into our build process. Now we have fully type-checked access to our service layer, and with some other optimizations that we'll talk about later, only a minimal amount of JS gets generated to support it.
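To make that concrete, here's roughly what the transformation looks like- the contract, data types, and generated class below are all invented for the example, not our actual code:

// Hand-written WCF contract, shared with the server:
[ServiceContract]
public interface IMailService
{
    [OperationContract]
    ScanInfo[] GetScans(string folderId);
}

// The rough shape of the code-generated client-side counterpart: each
// operation picks up continuation parameters for the async round-trip.
public static class MailServiceClient
{
    public static void GetScans(string folderId,
                                Action<ScanInfo[]> onSuccess,
                                Action<ServiceError> onError)
    {
        // ...build the JSON request, send it via XHR/jQuery, and invoke
        // onSuccess or onError when the response comes back...
    }
}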

jQuery templating

The next challenge was using the new jQuery template syntax. This is a pretty slick new addition to jQuery 1.5 that allows for rudimentary declarative client-side databinding. It works by generating JS at runtime from the templated markup file (very simple regex-based replacements of template tags with JS code)- the templates can reference a binding context object that allows the object's current value to appear when the markup is rendered. While it worked just fine in our new "all-managed" build environment, there were a couple of things we didn't like. The first was that template problems (unbalanced ifs, misspelled/missing members, etc.) can't be discovered until runtime, when they show up as either a script error (at best) or a totally blank, unrenderable UI (at worst). It also means that managed-code niceties like "Find All References" won't behave correctly against code in templates, since the templates don't go through Script#. We decided to make something with similar syntax and mechanics, but that runs at build-time and dumps out Script# code instead of JS. This way, "Find All References" still does the right thing, and we get compile-time checking of the code emitted by the templates. Just like other familiar tools, our template compiler creates a ".designer.cs" file that contains all the code and markup, which is then compiled by the Script# compiler into JS. Since the code isn't generated at runtime (as it is with jQuery templates), template errors surface at compile-time, and we were also able to add some new template tags for declarative formatting, control ID generation, and shortcutting resource access.
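As a purely hypothetical illustration (the template tags and names below are invented, not our exact syntax), a fragment like <li>{{if IsUnread}}<b>${Subject}</b>{{/if}}</li> might compile into .designer.cs code along these lines:

// Invented sketch of build-time template output; the real thing is
// generated code, not hand-written.
public partial class ScanRowTemplate
{
    public string Render(ScanInfo data)
    {
        StringBuilder sb = new StringBuilder();
        sb.Append("<li>");
        // {{if}} becomes a real C# if, so an unbalanced if is a compile error
        if (data.IsUnread)
        {
            // ${Subject} becomes a member access, so a misspelled member is
            // caught by the compiler instead of failing silently at runtime
            sb.Append("<b>").Append(data.Subject).Append("</b>");
        }
        sb.Append("</li>");
        return sb.ToString();
    }
}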

Resourcing

Next, we wanted to consume resources from .resx files using the same C# class hierarchy available in native .NET apps. Even though Script# has a little bit of resource support built in, Matt whipped up a simple build-time, T4-based resx-to-Script# code generator that also added a few niceties (automatic enum-to-string mapping, extra validation).
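The output ends up looking roughly like this (names invented, and the bodies simplified to constants- the real generated code also handles the enum mapping and validation mentioned above):

// Invented sketch of the generated resource accessors; each .resx entry
// becomes a static property, so resource access is compile-time checked
// and shows up in "Find All References".
public static class UIStrings
{
    public static string InboxTitle
    {
        get { return "Inbox"; }
    }

    public static string EmptyFolderMessage
    {
        get { return "This folder is empty."; }
    }
}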

Visual Studio/Build integration

Currently, all this stuff is wired up through pre/post-build events in Visual Studio and some custom MSBuild work. We're looking at ways to integrate it a little more tightly, as well as making it work as a "custom tool" in VS2010 to allow save-time generation of some of the code rather than build-time.

Summary

Combining Script# with a bit of custom tooling allows for a new way of writing tight client-side JS that looks a lot more like ASP.NET or Winforms development, and offers many of the same conveniences. Nobody on our team wants to write naked JS any more- to the point that we're actually working on tooling to convert our existing JS codebase to Script#, so we can start to clean it up and make changes with all the C# features we've come to expect. Obviously, some manual work will still be required to get everything moved over, but our team really believes this is the wave of the future.

Saturday, July 23, 2011

The Road to Script#

At work, we just shipped our first major new chunk of UI in a couple of years, written 100% in Script#. We've been watching Script# for a few years as an interesting option for creating client-side UI, and it recently hit a level of functionality where we felt it was workable. It also coincided nicely with our need for a mobile UI (a standalone product that we could roll out slowly, low-risk compared to changing our bread-and-butter desktop UI).

A little history

When we first started working on a .NET rewrite of the LAMP-based Earth Class Mail (aka Remote Control Mail) in 2007, the client-side revolution was in full force. All the cool kids were eschewing server-side full page postback UI in favor of Javascript client UI. We recognized this from the start, but also had very tight shipping timelines to work under. ComponentArt had some nifty-looking products that promised the best of both worlds- server-side logic with client-side templating, data-binding, and generated Javascript. This fit perfectly with our team's server-side skillset (we didn't have any JS ninjas at the time), so we jumped on it. While we were able to put together a mostly client-side UI in a matter of months, it really didn't live up to the promise. The generated JS and ViewState was very heavy, causing the app to feel slow (Pogue and other reviewers universally complained about it). Also, the interaction between controls on the client was very brittle. Small, seemingly unimportant changes (like the order in which the controls were declared) made the difference between working code and script errors in the morass of generated JS. At the end of the day, we shipped pretty much on time, but the result was less than stellar, and we'd already started making plans for a rewrite.

V2: all JS

Fast-forward to summer of 2009, when we shipped a 25k+ line all-JS jQuery client UI rewrite (which also included an all-new WCF webHttpBinding/JSON service layer). While it was originally planned to be testable via Selenium, JSUnit, and others, things were changing too fast and the tests got dropped, so it took months of tedious manual QA to get it out the door. User reception of the new UI was very warm, and we iterated quickly to add new features. However, refactoring the new JS-based UI was extremely painful due to the lack of metadata. We mostly relied on decent structure and "Ctrl-f/Ctrl-h" when we needed to propagate a service contract change into the client. Workable, but really painful to test changes, and there were inevitably bugs that would slip through when someone did something "special" in the client code. It got to the point where we were basically afraid to touch the code, since we couldn't refactor or adjust without significant testing pain, so the client codebase stagnated somewhat.

On to Script#

We'd been watching our user-agent strings trend mobile for a while, and this summer it finally reached a point where we needed to own a mobile UI. Our mobile story to that point consisted of users running the main browser UI on mobile devices with varying degrees of success (and a LOT of zooming), and an iPhone app that a customer wrote by reverse-engineering our JSON (we later helped him out by providing the service contracts, since he was filling a void we weren't). The question became: how would we build a new mobile UI? Bolting it onto our existing JS client wasn't attractive to anyone, as it had grown unwieldy and scary, and we didn't want to risk destabilizing it with a bunch of new mobile-only code- and the prospect of another mass of hand-written JS was no more appealing. Another ECM architect (Matt Clay) had been watching Script# for quite a while, and it had just recently added VS2010-integrated code editing (it used to be a standalone, non-Intellisense editor that ran inside VS2008) and C# 2.0 feature support (generics, anonymous methods). That was enough for us to take another look, and after a week or so of experimentation, we decided to try to ship the mobile UI written in Script#. I'll post something soon that describes what the end result looks like.

Thursday, May 19, 2011

SNI support in Android!

UPDATE: Microsoft has announced that IIS 8 supports SNI!

The barriers to real use of true SSL name-based virtual hosting continue to fall. Android Honeycomb supports SNI! Hey Microsoft- where's the IIS support? Apache's had SNI support forever, and Chrome, FF, and IE8 support it now. You guys are the ones holding up the parade!

Background

Name-based virtual hosting is what makes private-branded cloud services and shared-tenant server hosting reasonable- rather than requiring a single IP address per hostname, many hostnames are mapped to a single IP with DNS CNAMEs, and the webserver looks at the HTTP Host: header sent by the client's browser when deciding which site's content to serve. This falls apart with SSL, though, since the target hostname is baked into the certificate, and the SSL handshake occurs before the HTTP Host: header is available. SNI is the solution to this problem: it allows the hostname the client expects to be sent as part of the SSL handshake, so the server can select which certificate to present. The only workaround right now (short of one IP address per hosted domain) is the use of the SAN (Subject Alternative Name) attribute, which allows a certificate to present a list of hosts that are valid- this doesn't scale well, and requires the hosting entity to obtain a new certificate covering subjects they don't own every time they add a new hosted domain to a server. We've always said "no way" when customers want us to private-label host Earth Class Mail under their domain, for this exact reason.
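For the curious, here's a minimal C# sketch of where the name actually gets supplied from managed code- the host name is made up, and I'm relying on Schannel (Vista and later) putting the target host into the ClientHello's server_name extension:

using System.Net.Security;
using System.Net.Sockets;

class SniDemo
{
    static void Main()
    {
        using (TcpClient tcp = new TcpClient("hosted.example.com", 443))
        using (SslStream ssl = new SslStream(tcp.GetStream()))
        {
            // The targetHost argument is what lands in the SNI extension,
            // letting the server pick the matching certificate before any
            // HTTP Host: header is ever sent.
            ssl.AuthenticateAsClient("hosted.example.com");
        }
    }
}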

Thursday, May 13, 2010

JVC XR-KW810 Review

I recently upgraded the factory stereo in my '02 Lexus IS300 to the new JVC XR-KW810, and thought I'd share my experiences thus far.

I didn't really want to swap out the factory stereo, as it still sounded (and looked) quite good. Unfortunately, I recently picked up the official Google car dock for my Nexus One, and really wanted to use it as a music player in the car. Since the Google dock only has Bluetooth audio output, my only options for the factory Lexus stereo were to use the headphone jack on the phone with a tape adapter or a yet-to-be-hacked-in aux input. I tried the tape adapter for a couple of days, and decided it was time for a Bluetooth-capable stereo. My only requirements were Bluetooth, an aux-in, double-DIN with a real volume knob (and preferably lots of other "hard" buttons), and custom color configuration (to more closely match the IS300's orange illumination). This led me to the JVC XR-KW610 and its bigger brother, the XR-KW810. The 610 was okay, but the segmented display looked kinda hokey and it didn't come with the Bluetooth adapter in-box. The 810 has a better-looking matrix display and Bluetooth is included. Done.

Installation was very smooth (at least around the head unit itself- reusing the Lexus factory amp and speakers on a non-Lexus head requires a special part). It includes a sleeve for "roll your own" setups as well as an assortment of screw holes in the unit itself. The included Bluetooth adapter just plugs into the rear USB port (there's also one on the front), and the handsfree mic hangs off the back. The unit has a headlight switch input, which is pretty handy for dimming the illumination when the headlights are on. After putting the car all back together and booting it up, my first impressions were pretty good.

Sound quality through my factory amp was quite solid, though the default EQ settings were a little bassy on my setup (I didn't try the unit's built-in amp). This was easily rectified by tweaking the ProEQ settings, which allow for finer unit-wide EQ adjustment (as opposed to the front-panel EQ settings, which are per-input and fairly coarse). In addition to the ProEQ settings, there's a decent array of loudness, LPF, HPF, amp and sub gain adjustments. Also, each source's gain can be adjusted individually.

The controls are generally intuitive and pretty easy to operate without looking. There's a four-way button on the lower left of the face, three large buttons next to the volume knob, source/power and EQ buttons, and 6 preset selector buttons. The buttons are large, but have a somewhat cheap feel. The glossy finish on the unit looks nice under low light, but shows every smudge and speck of dust on a sunny day. The illumination color adjustments are extensive - buttons and display can be colored independently, and different colors can be set for day and night profiles. The display can be difficult to read in direct sunlight, though it does have a polarizing layer that helps somewhat. The real low point on the display is the low LCD update frequency, which causes horizontal text scrolling on long titles or RDS messages to be difficult to read.

On the initial install, I hadn't purchased the separate KT-HD300 HD Radio tuner yet. FM reception on the built-in tuner was quite good, but AM was a little weak compared to the factory unit. The one thing I missed from the factory head was RDS display (station ID and "now playing" info), which the built-in tuner doesn't have. However, the HD tuner adds this, so I ordered it (online, $89). The external HD tuner disables and replaces the built-in tuner by plugging into the back of the head unit. Luckily, it includes long antenna, power and data cables, because it's rather bulky (about 5x9x1 inches)- it took a bit of creativity to find a niche for it. It works as advertised, and does a seamless "upgrade" to the digital signal once it's locked in on the analog. Direct tuning to a digital-only station (i.e., via a preset) can take a couple of seconds- the display flashes "Linking" while this is occurring. My only other beef with the HD tuner is a pretty minor one: it disables the "up/down" controls for scanning through presets that are available with the stock tuner (with the HD tuner, up/down is used to switch between HD channels on the same station). The unit supports 18 presets on the FM band, but only 6 are accessible by hard button. Without the up/down access, presets are selected by tapping the menu button, turning the knob to select, and tapping the knob. It works, but nowhere near as conveniently as with the built-in tuner.

The Bluetooth support is fairly advanced compared to other units in the same price range- it supports A2DP, AVRCP 1.3, HSP/HFP and PBAP. In English, this means you can use it to listen to high-quality audio from your music player, remotely control it, get the "now playing" info, navigate playlists, voice dial your cell-phone and answer calls, and copy or navigate the phonebook from the unit. I've only been able to try parts of this thus far, as the Nexus One's Bluetooth implementation doesn't yet support all this functionality. What I have tried is pretty solid- the unit can pair with two different devices, and has a dedicated call/answer button on the face. The handsfree mic volume seems a little low, so it needs to be routed pretty close to your face (maybe the visor). I use the Nexus One car dock for my handsfree calling anyway, so it's not an issue for me.

The USB support is pretty complete as well. If using a thumb drive, it has full folder navigation support and displays album/title info while playing. It also supports USB iPod control and charging, which works quite well, supporting standard functions (playlists, artist/album/song, podcasts, etc). It does disable the iPod display and control (shows a nifty "JVC"), so you have no choice but to control the music from the head unit (difficult for the backseat DJs, though they could use the included remote control in a pinch).

The CD player is pretty standard- it supports CD-TEXT, so newer CDs or burned ones will display title and track info. Not much else to say here.

Thus far, I'm very pleased with the JVC XR-KW810 head unit and KT-HD300 HD tuner. Now if Google would get around to updating the Bluetooth stack to support AVRCP 1.3, I could use all the goodies over Bluetooth.