Thursday, April 29, 2010

Changing default framework profile in VS2010 projects

Today I figured out how to hack the default framework profile in VS2010 (so as NOT to use the Client Profile by default on 4.0 projects).

A little background: I'm all for the idea of the Client Profile in .NET 4, but Visual Studio forces you to use it by default on many projects targeting .NET Framework 4.0. This alone is merely annoying, since you can easily change the profile under the Project Properties window. However, this annoyance becomes fatal to another of my favorite Visual Studio features: throwaway projects. If you want a throwaway project that targets the full 4.0 framework profile, well, too bad. Changing the framework profile requires saving the project, and the version target selector on the New Project dialog doesn't let you choose a profile. Poop.

I've filed a Connect suggestion to see if we can get a first-class fix; by all means, go vote for it here.

Meantime, I use throwaway projects many times a day, and about half the time I need stuff that's not in the Client Profile. Here's the fix:

Disclaimer: this involves minor hackage to your Visual Studio 2010 install. I am not responsible if it breaks a future service pack, kicks your dog, or causes a tear in the space-time continuum.

Let's pay a visit to VS2010's ProjectTemplates directory. It's under Program Files\Microsoft Visual Studio 10.0\Common7\IDE\ProjectTemplates. Here you'll find a number of directories. I'm going to hack the C# Console Application, since that's my usual project of choice, but the technique should work on any project type that defaults to the Client Profile. The C# Console Application project template is under CSharp\Windows\1033 (or the LCID of your installed locale)\ConsoleApplication.zip.
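Before touching anything, it's worth backing up the original zip so you can revert if something goes wrong. A minimal sketch from an elevated command prompt (on 64-bit Windows the install lives under Program Files (x86), so adjust the path for your machine):

cd "C:\Program Files\Microsoft Visual Studio 10.0\Common7\IDE\ProjectTemplates\CSharp\Windows\1033"
copy ConsoleApplication.zip ConsoleApplication.zip.bak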

Extract the consoleapplication.csproj template file and open it in the editor of your choice. Find the line that says $if$ ($targetframeworkversion$ >= 4.0), and remove the "Client" from inside the TargetFrameworkProfile element below it, as sketched below. If you're feeling saucy, you can just remove the whole $if$ to $endif$ block. Save the hacked template and replace the one in the ConsoleApplication.zip file (I had to use 7zip for this; Explorer's zip integration thought the file was corrupted).
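For reference, the block in question looks roughly like this (exact whitespace and surrounding properties may differ in your install); an empty TargetFrameworkProfile element means the full framework:

Before:
$if$ ($targetframeworkversion$ >= 4.0)
    <TargetFrameworkProfile>Client</TargetFrameworkProfile>
$endif$

After:
$if$ ($targetframeworkversion$ >= 4.0)
    <TargetFrameworkProfile></TargetFrameworkProfile>
$endif$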

This isn't the end, though: Visual Studio caches its project templates, so to see your changes, you have to ask it to rebuild the cache. Open the VS2010 command prompt and type

devenv.exe /setup


It'll silently crank away for a bit, then return. Run VS2010 and create a new project using one of the templates you hacked, and check the Project Properties window. If all went well, you should see it targeting .NET Framework 4 instead of the Client Profile. Sweet!

Hope this helps someone out...

UPDATE: Nathan Halstead posted a comment to the Connect issue for this one, suggesting that "devenv.exe /setup" is the recommended safe way to refresh the project template cache (I've made the change inline), and that overwriting the template shouldn't negatively affect VS servicing (other than that repairs/updates might overwrite the hacked version). He suggested creating a copy of the project template with a different name to avoid the servicing overwrite issue. Thanks, Nathan!

Thursday, April 15, 2010

SQL Server Database Mirroring Woes

I'm a huge fan of SQL Server's database mirroring concept. We've been using it on our application (a 60GB DB spread over 220 tables, with tens to hundreds of millions of rows) for almost 3 years on SQL 2005. Log shipping has its place (it's pivotal to our offsite disaster recovery plan), and clustering is great if you have a huge replicated SAN, but, at least on paper, DB mirroring is the lowest-cost and most approachable option. In reality, however, it has some warts.

We started out with synchronous mirroring in a high-safety + witness configuration. This is great, as we could easily take down the primary DB server for maintenance during "slow" periods with minimal effect on the running application (a few transactions might fail, which we recover from gracefully). As our database grew, though, we started seeing massive system slowdowns during peak usage periods. Investigation showed that the lag was coming from the commit overhead on the mirror, which could grow to 30 seconds or more and cause timeouts (high-safety mode requires that the transaction's log records be fully hardened on the mirror server before returning control to the client). More investigation revealed that the disk write I/O on the mirror server's data volume was between 10x and 500x that of the primary, which outstripped the disk array's ability to keep up.

With a lot of angry customers and idled operators waiting around, we didn't have a lot of time to invest in root-cause analysis, so we switched over to asynchronous mirroring to keep the doors open (async mirroring doesn't hold up the client transaction waiting for the log to copy to the mirror). Luckily, Microsoft Startup Accelerator (now BizSpark) had hooked us up with SQL Enterprise licenses, so async mirroring was an option for us (it's not available on SQL Standard!). With async mirroring, a catastrophic loss of the primary server pretty much guarantees some data loss, so it's not ideal.
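For the curious, the mode switch itself is just T-SQL run on the principal. Here's a rough sketch with a placeholder database name (MyAppDb); when running asynchronously, it's generally recommended to drop the witness as well:

-- Remove the witness before going async (recommended)
ALTER DATABASE MyAppDb SET WITNESS OFF;

-- Switch to high-performance (asynchronous) mirroring
ALTER DATABASE MyAppDb SET PARTNER SAFETY OFF;

-- Later, to go back to high-safety (synchronous) mirroring
ALTER DATABASE MyAppDb SET PARTNER SAFETY FULL;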


A while back, we upgraded all our DB server storage to SSDs in a RAID10 config, resulting in a massive performance boost on our aging hardware. We figured this would allow us to go back to synchronous mirroring with no problems. The slowdowns weren't as severe, but they were still painful during peak write periods, and we had to switch back to async mirroring again. Even with async mirroring, the write volume to the mirror's data disk was still consistently hundreds of times that of the primary. As we hadn't planned for these ridiculous mirror write volumes, we started to worry about our mirror server's SSDs burning out prematurely (SSDs can absorb only a limited write volume before the flash cells start to fail).

Flash forward to last month: we'd purchased spanking-new 12-core DB servers with the latest and greatest SSDs in RAID10, 64GB of memory, and SQL 2008 on Windows Server 2008 R2. We wanted to spend the time to get high-safety synchronous mirroring in place again, so we wrote a little simulator app to see if SQL 2008 on our new servers had the same nasty I/O issues. It did. The mirror's data write volume was a constant 3-7MB/s, 250-500x higher than the primary's (writing at that rate 24/7 is a quick death sentence for an SSD rated at 5GB/day for 5 years)!
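If you want to eyeball the same numbers on your own hardware, a query along these lines against sys.dm_io_virtual_file_stats (available since SQL 2005) shows cumulative write volume per database file; run it on the principal and the mirror and compare. This is just a sketch, not our simulator app:

SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.num_of_writes,
       vfs.num_of_bytes_written / 1048576 AS mb_written
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id = vfs.file_id
ORDER BY vfs.num_of_bytes_written DESC;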

Time to call in Microsoft. After we explained the situation, the first response was "as designed". Really? Our write volumes aren't all that high, so if this is true, I have a hard time believing that database mirroring is useful on a database of any size. In any case, had we gone live this way, our mirror machine's SSDs would've been shot within a matter of months. After an initial call of "BS!", I got a little more detail: apparently SQL Server not only ships the log data over in real time, it also performs recovery on the mirror DB for every transaction to minimize the failover time (which IS nice and snappy, usually <1s). It turns out there is an undocumented trace flag that disables the per-transaction recovery process, at the cost of a longer failover delay. This sounded like exactly what I needed. So what is this magic trace flag?

DBCC TRACEON(3499, -1)

This should be run on both the primary and the mirror server, since they switch roles during failover. It worked exactly as advertised for us. The mirror server's disk I/O was now in lock-step with the primary's, and we could once again use high-safety mirroring with a witness. Failover times were definitely longer, but in our testing they're still sub-10s, which is perfectly workable for us.
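Two practical notes (a sketch based on standard trace flag mechanics, nothing mirroring-specific): you can confirm the flag is active with DBCC TRACESTATUS, and since a flag set via DBCC TRACEON doesn't survive a service restart, you'll probably want to add it as a -T startup parameter on both servers via SQL Server Configuration Manager.

-- Confirm the flag is enabled globally
DBCC TRACESTATUS(3499, -1);

-- To persist across restarts, add this startup parameter to the SQL Server service:
-- -T3499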

I've only found two references to this trace flag online: one in a presentation by an MS employee that says you should test extensively (which we are), the other in an unrelated KB article about upgrading DBs with fulltext indexes from SQL 2005 to 2008. I've found a handful of people griping about this problem in forums over the years, with no responses. Hopefully this will take care of others' issues as well as it did ours. We were within inches of switching to a sub-second log shipping scenario to replace mirroring because of this issue, and now it's looking like we won't have to. Just wish it were a little better documented.