Thursday, May 13, 2010

JVC XR-KW810 Review

I recently upgraded the factory stereo in my '02 Lexus IS300 to the new JVC XR-KW810, and thought I'd share my experiences thus far.

I didn't really want to swap out the factory stereo, as it still sounded (and looked) quite good. Unfortunately, I recently picked up the official Google car dock for my Nexus One, and really wanted to use it as a music player in the car. Since the Google dock only has Bluetooth audio output, my only options for the factory Lexus stereo were to use the headphone jack on the phone to a tape adapter or a yet-to-be-hacked-in aux input. I tried it with a tape adapter for a couple of days, and decided it was time for a Bluetooth-capable stereo. My only requirements were Bluetooth, an aux-in, double-DIN with a real volume knob (and preferably lots of other "hard" buttons), and custom color configuration (to more closely match the IS300's orange illumination). This led me to the JVC XR-KW610 and its bigger brother, the XR-KW810. The 610 was okay, but the segmented display looked kinda hokey and it didn't come with the Bluetooth adapter in-box. The 810 has a better-looking matrix display and Bluetooth is included. Done.

Installation was very smooth (at least around the head unit itself- reusing the Lexus factory amp and speakers on a non-Lexus head requires a special part). It includes a sleeve for "roll your own" setups as well as an assortment of screw holes in the unit itself. The included Bluetooth adapter just plugs into the rear USB port (there's also one on the front), and the handsfree mic hangs off the back. The unit has a headlight switch input, which is pretty handy for dimming the illumination when the headlights are on. After putting the car all back together and booting it up, my first impressions were pretty good.

Sound quality through my factory amp was quite solid, though the default EQ settings were a little bassy on my setup (I didn't try the unit's built-in amp). This was easily rectified by tweaking the ProEQ settings, which allow for finer unit-wide EQ adjustment (as opposed to the front-panel EQ settings, which are per-input and fairly coarse). In addition to the ProEQ settings, there's a decent array of loudness, LPF, HPF, amp and sub gain adjustments. Also, each source's gain can be adjusted individually.

The controls are generally intuitive and pretty easy to operate without looking. There's a four-way button on the lower left of the face, three large buttons next to the volume knob, source/power and EQ buttons, and 6 preset selector buttons. The buttons are large, but have a somewhat cheap feel. The glossy finish on the unit looks nice under low light, but shows every smudge and speck of dust on a sunny day. The illumination color adjustments are extensive - buttons and display can be colored independently, and different colors can be set for day and night profiles. The display can be difficult to read in direct sunlight, though it does have a polarizing layer that helps somewhat. The real low point is the display's slow LCD refresh, which makes horizontally scrolling text (long titles, RDS messages) difficult to read.

On the initial install, I hadn't purchased the separate KT-HD300 HD Radio tuner yet. FM reception on the built-in tuner was quite good, but AM was a little weak compared to the factory unit. The one thing I missed from the factory head was RDS display (station ID and "now playing" info), which the built-in tuner doesn't have. However, the HD tuner adds this, so I ordered it (online, $89). The external HD tuner disables and replaces the built-in tuner by plugging into the back of the head unit. Luckily, it includes long antenna, power and data cables, because it's rather bulky (about 5x9x1 inches)- it took a bit of creativity to find a niche for it. It works as advertised, and does a seamless "upgrade" to the digital signal once it's locked in on the analog. Direct tuning to a digital-only station (i.e., via a preset) can take a couple of seconds- the display flashes "Linking" while this is occurring. My only other beef with the HD tuner is a pretty minor one: it disables the "up/down" controls for scanning through presets that are available with the stock tuner (with the HD tuner, up/down is used to switch between HD channels on the same station). The unit supports 18 presets on the FM band, but only 6 are accessible by hard button. Without the up/down access, presets are selected by tapping the menu button, turning the knob to select, and tapping the knob. It works, but nowhere near as conveniently as with the built-in tuner.

The Bluetooth support is fairly advanced compared to other units in the same price range- it supports A2DP, AVRCP 1.3, HSP/HFP and PBAP. In English, this means you can use it to listen to high-quality audio from your music player, remotely control it, get the "now playing" info, navigate playlists, voice dial your cell-phone and answer calls, and copy or navigate the phonebook from the unit. I've only been able to try parts of this thus far, as the Nexus One's Bluetooth implementation doesn't yet support all this functionality. What I have tried is pretty solid- the unit can pair with two different devices, and has a dedicated call/answer button on the face. The handsfree mic volume seems a little low, so it needs to be routed pretty close to your face (maybe the visor). I use the Nexus One car dock for my handsfree calling anyway, so it's not an issue for me.

The USB support is pretty complete as well. If using a thumb drive, it has full folder navigation support and displays album/title info while playing. It also supports USB iPod control and charging, which works quite well, supporting standard functions (playlists, artist/album/song, podcasts, etc). It does disable the iPod display and control (shows a nifty "JVC"), so you have no choice but to control the music from the head unit (difficult for the backseat DJs, though they could use the included remote control in a pinch).

The CD player is pretty standard- it supports CD-TEXT, so newer CDs or burned ones will display title and track info. Not much else to say here.

Thus far, I'm very pleased with the JVC XR-KW810 head unit and KT-HD300 HD tuner. Now if Google would get around to updating the Bluetooth stack to support AVRCP 1.3, I could use all the goodies over Bluetooth.

Thursday, April 29, 2010

Changing default framework profile in VS2010 projects

Today I figured out how to hack the default framework profile in VS2010 (so as NOT to use the Client Profile by default on 4.0 projects).

A little background: I'm all for the idea of the Client Profile in .NET 4, but Visual Studio forces you to use it by default on many projects targeting .NET Framework 4.0. This alone is merely annoying, since you can easily change the profile under the Project Properties window. However, this annoyance becomes fatal to another of my favorite Visual Studio features: throwaway projects. If you want a throwaway project that targets the full 4.0 framework profile, well, too bad. Changing the framework profile requires saving the project, and the version target selector on the New Project dialog doesn't let you choose a profile. Poop.

I've filed a Connect suggestion to see if we can get a first-class fix- by all means, go vote for it here.

Meantime, I use throwaway projects many times a day, and about half the time I need stuff that's not in the Client Profile. Here's the fix:

Disclaimer: this involves minor hackage to your Visual Studio 2010 install. I am not responsible if it breaks a future service pack, kicks your dog, or causes a tear in the space-time continuum.

Let's take a visit to VS2010's ProjectTemplates directory. It's under Program Files\Microsoft Visual Studio 10.0\Common7\IDE\ProjectTemplates. Here you'll find a number of directories. I'm going to hack the C# Console Application, since that's my usual project of choice, but the technique should work on any project that defaults to the Client profile. The C# Console Application project template is under CSharp\Windows\1033 (or the LCID of your installed locale)\ConsoleApplication.zip.

Extract the consoleapplication.csproj template file, and open it in the editor of your choice. Find the line that says $if$ ($targetframeworkversion$ >= 4.0), and remove the "Client" from inside the TargetFrameworkProfile element below it. If you're feeling saucy, you can just remove the whole $if$ to $endif$ block. Save the hacked template, and replace the one in the ConsoleApplication.zip file (I had to use 7zip for this- Explorer's zip integration thought the file was corrupted).
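For reference, the block in question looks roughly like this in the extracted template (a sketch from memory- exact whitespace and surrounding elements may differ in your copy):

```xml
<!-- Before: forces the Client Profile on 4.0+ projects -->
$if$ ($targetframeworkversion$ >= 4.0)<TargetFrameworkProfile>Client</TargetFrameworkProfile>
$endif$

<!-- After: an empty profile means the full .NET Framework 4 -->
$if$ ($targetframeworkversion$ >= 4.0)<TargetFrameworkProfile></TargetFrameworkProfile>
$endif$
```

Deleting the whole $if$/$endif$ block works too, since a project with no TargetFrameworkProfile element targets the full framework.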

This isn't the end, though- Visual Studio caches its project templates, so to see your changes, you have to ask it to rebuild the cache. Open the VS2010 command prompt, and type

devenv.exe /setup


It'll silently crank away for a bit, then return. Run VS2010 and create a new project using one of the templates you hacked, and check the Project Properties window. If all went well, you should see it targeting .NET Framework 4 instead of the Client Profile. Sweet!

Hope this helps someone out...

UPDATE: Nathan Halstead posted a comment to the Connect issue for this one, suggesting that "devenv.exe /setup" is the recommended safe way to refresh the project template cache (I've made the change inline), and that overwriting the template shouldn't negatively affect VS servicing (other than that repairs/updates might overwrite the hacked version). He suggested creating a copy of the project template with a different name to avoid the servicing overwrite issue. Thanks, Nathan!

Thursday, April 15, 2010

SQL Server Database Mirroring Woes

I'm a huge fan of SQL Server's database mirroring concept. We've been using it on our application (60GB DB over 220 tables, 10's to 100's of millions of rows) for almost 3 years on SQL 2005. Log shipping has its place (it's pivotal to our offsite disaster recovery plan), and clustering is great if you have a huge replicated SAN, but, at least on paper, DB mirroring is the lowest-cost and most approachable option. In reality, however, it has some warts.

We started out with synchronous mirroring in a high safety + witness configuration. This is great, as we could easily take down the primary DB server for maintenance during "slow" periods with minimal effect on the running application (a few transactions might fail, which we recover from gracefully). As our database grew, though, we started seeing massive system slowdowns during peak usage periods. Investigation showed that the lag was coming from the commit overhead on the mirror, which might grow to 30s or more, causing timeouts (high safety mode requires that the transaction be fully stored on the mirror server before returning control to the client). More investigation revealed that the disk write I/O on the mirror server's data volume was between 10x-500x the principal, which outstripped the disk array's ability to keep up. With a lot of angry customers and idled operators waiting around, we didn't have a lot of time to invest in root-cause analysis, so we switched over to asynchronous mirroring to keep the doors open (async mirroring doesn't hold up the client transaction waiting for the log to copy to the mirror). Luckily, Microsoft Startup Accelerator (now BizSpark) had hooked us up with SQL Enterprise licenses, so async mirroring was an option for us- it's not on SQL Standard! With async mirroring, a catastrophic loss of the primary server pretty much guarantees some data loss, so it's not ideal.
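For anyone needing to make the same emergency move, the sync/async switch itself is a single statement per mirrored database, run on the principal (the database name here is a placeholder):

```sql
-- Drop to asynchronous (high-performance) mirroring- Enterprise only
ALTER DATABASE OurAppDb SET PARTNER SAFETY OFF;

-- And back to synchronous (high-safety) mirroring when ready
ALTER DATABASE OurAppDb SET PARTNER SAFETY FULL;
```

The change takes effect immediately; no restart or re-pairing of the mirroring session is needed.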


A while back, we upgraded all our DB server storage to SSDs in a RAID10 config, resulting in a massive performance boost on our aging hardware. We figured this would allow us to go back to synchronous mirroring mode with no problems. While not as severe, we still experienced painful slowdowns during peak write periods, and had to switch back to async mirroring again. Even with async mirroring, the write volume to the mirror data disk was still consistently hundreds of times that of the primary. As we hadn't planned for these ridiculous mirror write volumes, we were starting to worry about our mirror server's SSDs burning out prematurely (SSDs have a limited write volume before the flash cells start to fail).

Flash forward to last month- we've purchased spanking new 12-core DB servers with the latest and greatest SSDs in RAID10, 64G of memory, and SQL 2008 on Windows Server 2008R2. We wanted to spend the time to get high safety synchronous mirroring in place again, so we wrote a little simulator app to see if SQL 2008 on our new servers had the same nasty I/O issues. It did. The data write volume on the mirror was constant, and on average 250-500x higher than on the principal (writing a constant 3-7 MB/s, 24/7, is a quick death sentence for an SSD rated at 5GB/day for 5 years)!

Time to call in Microsoft. After explaining the situation, the first response was "as designed". Really? Our write volumes aren't all that high, so if this is true, I have a hard time believing that database mirroring is useful on a database of any size. In any case, had we gone live this way, our mirror machine's SSDs would've been shot within a matter of months. After an initial call of "BS!", I got a little more detail: apparently SQL Server not only ships the log data over in real-time, it also performs recovery on the DB for every transaction to minimize the failover time (which IS nice and snappy, usually <1s). Turns out, there is an undocumented trace flag that disables the per-transaction recovery process, at the cost of a higher failover delay. This sounded like exactly what I needed. So what is this magic trace flag?

DBCC TRACEON(3499, -1)

This should be run on both the primary and mirror DBs, since they switch roles during failover. It worked exactly as advertised for us. The mirror server's disk I/O was now in lock-step with the primary, and we could once again use full-safety mirroring with a witness. The failover times were definitely increased, but in our testing, they're still sub-10s, which is perfectly workable for us.
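A couple of practical notes, in case it saves someone a step: a flag set via DBCC TRACEON doesn't survive a service restart, so to make it stick you'd add -T3499 to the SQL Server startup parameters (via SQL Server Configuration Manager) on both machines. You can confirm it's active with:

```sql
-- Check whether trace flag 3499 is enabled globally (Status = 1 means on)
DBCC TRACESTATUS(3499, -1);
```

Since this is an undocumented flag, treat both of those as things to verify in your own environment.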

I've only found two references to this trace flag online- one in a presentation by an MS employee that says you should test extensively (which we are), the other in an unrelated KB article about upgrading DBs with fulltext indexes to 2008 from 2005. I've found a handful of people griping about this problem in forums over the years, with no responses. Hopefully this will take care of others' issues as well as it did ours. We were within inches of switching to a sub-second log shipping scenario to replace mirroring because of this issue, and now it's looking like we won't have to. Just wish it was a little better documented.