Matt on ... Whatever
<p><b>Customizing MacOS guest VMs in Parallels 17 on Apple Silicon</b> (2021-08-11)</p><p>Those of us who need to test and package software for MacOS on Apple Silicon (aka M1) have spent the past many months bemoaning the lack of virtualization options for MacOS on Apple's new flagship hardware platform. Modern build environments lean heavily on VM images (and/or containers, where available) to ensure safety, isolation, and repeatability. Unfortunately, since the boot process for Apple Silicon MacOS uses a bootloader borrowed from iOS, the typical EFI-based bootloader that's used to boot Intel Mac images won't work. Since its release, Apple Silicon has not offered any virtualization support for MacOS itself, which also explains the lack of things like GHA/AZP build workers for M1/Apple Silicon. So when Apple quietly announced that the upcoming MacOS Monterey offered MacOS guest support, there was much rejoicing.</p><p>After installing Monterey Beta4 on my M1 Mac Mini, I spent a long evening playing around with the new Parallels 17 support for MacOS guests. It's definitely *extremely* early days- most of the support for common Parallels features isn't wired up for MacOS guests, as they use a completely different set of disk images and tools that are a very thin wrapper around Apple's new Virtualization framework. Most of Parallels' great automation and command-line tooling is currently completely unaware of the new MacOS guests on Apple Silicon. If you're going through the UI "front door", it doesn't appear possible to customize the VM in any way (even its name in Control Center; as of this writing, creating multiple Mac guests names them all "macOS"). The bigger issue that I set out to solve is the inability to customize the default disk image size of 30GB to make the VM useful for simple development tasks- the default size is too small to even install the XCode command-line tools. While none of the usual Parallels tools or APIs appear capable of customizing an M1 MacOS guest or its images, a bit of poking revealed a couple of command-line tools buried in the Parallels 17 package that will allow some basic customization of new VMs using undocumented args.</p><p>The Parallels tool that wraps the Virtualization framework APIs for creating a new VM image from an Apple IPSW archive can be found at:</p><p>/Applications/Parallels\ Desktop.app/Contents/MacOS/prl_macvm_create</p><p>It has a couple of modes; calling it with `--getipswurl` will try to find a working download link for a compatible IPSW package to use to seed the new image, though I prefer to just use <a href="https://mrmacintosh.com/apple-silicon-m1-full-macos-restore-ipsw-firmware-files-database/">the list maintained by MrMacintosh</a>. Regardless of where it comes from, downloading an IPSW image is the first step to creating a new VM. When you're using the Parallels "New" button, most of the time is spent on the IPSW download, so if you want to make a lot of VMs, downloading and reusing the IPSW for each VM will save a lot of time and bandwidth (as they're ~13GB each). </p><p>Once you have an IPSW image locally, run prl_macvm_create with the path to the IPSW, and the path where you want the VM image to live (the default is under '~/Parallels/macOS 12.macvm').
This is also your chance to increase the default disk size by adding `--disksize` and the desired disk image size (in bytes). If you omit this arg, your VM image will be created with a tiny 30GB disk that can't do much more than run the OS itself and allow for some small software installations. If you're planning to install XCode, I'd recommend at least 60GB.</p><p>Here's an invocation that uses a local copy of a Monterey IPSW image to create a new VM at ~/Parallels/devmac1.macvm with a 60GB disk:</p><p><span style="font-family: courier;">/Applications/Parallels\ Desktop.app/Contents/MacOS/prl_macvm_create ~/Downloads/UniversalMac_12.0_21A5294g_Restore.ipsw ~/Parallels/devmac1.macvm --disksize 60000000000</span></p><p>Assuming all's well, it should create a few image and config files under the path you specified, followed by `Starting installation.` and some progress messages. This process usually completes in a couple of minutes.</p><p>Once you're greeted with `Installation succeeded.`, your VM should be ready for its first boot. You can use "Open" in the Parallels Control Center to do this (any directory ending with the `.macvm` extension should be visible there) if you want it to behave as if you'd created it in Parallels, or if you want to run the VM directly from the command-line (which has some advantages), you can use</p><p>/Applications/Parallels\ Desktop.app/Contents/MacOS/Parallels\ Mac\ VM.app/Contents/MacOS/prl_macvm_app</p><p>With no args, this will run the default VM at '~/Parallels/macOS 12.macvm', or you can pass the `--openvm` argument to run any VM you wish.</p><p>Here's an invocation that runs the VM I created above:</p><p><span style="font-family: courier;">/Applications/Parallels\ Desktop.app/Contents/MacOS/Parallels\ Mac\ VM.app/Contents/MacOS/prl_macvm_app --openvm ~/Parallels/devmac1.macvm</span></p><p>This runs the VM inside the launched process, so Ctrl-C'ing the command or otherwise killing that process stops the VM. This is definitely a feature in my book for ephemeral VMs; it makes it pretty trivial to manage the running worker VMs in a CI environment by just starting the VM process and hanging onto its handle, signaling/killing it when you're done.</p><p>Side note: there are several OSS projects (eg, <a href="https://github.com/KhaosT/MacVM">KhaosT's MacVM</a>) that also wrap the calls to the Virtualization framework to create and run new MacOS guests under Monterey. The big win for Parallels right now is its single command to create a new image. The open source image builders that I've seen will call into the virt framework to create a blank VM running in DFU recovery mode, but then require you to use Apple Configurator 2 to load the IPSW yourself into the VM. It works fine, but it's definitely less convenient for automation than what Parallels has rolled up into a convenient one-stop package, and I assume much of the rest of the Parallels value add from their excellent automation will come with time.</p><p>One thing we need right away is cheap throwaway VM clones; fully realized 60GB disk images are very expensive to copy around, run for a couple minutes, then delete and repeat. Thankfully, APFS' copy-on-write cloned files (created by cp -c on an APFS-formatted filesystem) fit the bill perfectly with the disk image files that Apple's Virtualization.framework uses. Once you've configured a VM image to include whatever tools and startup behavior you want, simply shut down the VM, and copy the entire VM directory as many times as you'd like for (basically) free, eg:</p><p><span style="font-family: courier;">cp -c -r ~/Parallels/devmac1.macvm ~/Parallels/cloned_ephemeral_mac.macvm</span></p><p>The filesystem will only record changed blocks in the copied files once the VM boots up and starts doing writes. Once the clone directory is deleted, so are all the filesystem changes made under it.</p>
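<p>Putting those two pieces together, a CI runner can clone, boot, and discard a guest per job. Here's a rough sketch of that loop as a shell script- the template path, the crude sleep in place of a real readiness check, and the job hand-off are all placeholders you'd swap for whatever your CI environment actually does:</p><p><span style="font-family: courier;">#!/bin/zsh<br /># Illustrative only: clone a prepared template VM, boot it, run a job, then throw it all away.<br />TEMPLATE="$HOME/Parallels/devmac1.macvm"<br />CLONE="$HOME/Parallels/ephemeral-$$.macvm"<br />cp -c -r "$TEMPLATE" "$CLONE" # APFS copy-on-write clone, nearly instant<br />"/Applications/Parallels Desktop.app/Contents/MacOS/Parallels Mac VM.app/Contents/MacOS/prl_macvm_app" --openvm "$CLONE" &<br />VM_PID=$!<br />sleep 120 # placeholder: wait for boot, then hand the guest to your CI job<br />kill "$VM_PID" # stopping the process stops the VM<br />wait "$VM_PID" 2>/dev/null<br />rm -rf "$CLONE" # deleting the clone discards every write the job made<br /></span></p>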
<p>Virtualization support is still quite early in the Apple Silicon ecosystem, but now we've got at least the very basic tools to do what's needed. Thanks Apple, Parallels, and all the OSS folks out there taking this stuff apart!</p><p>Another random side note: just for giggles, I tried creating a VM with a Big Sur 11.5.1 IPSW (it would be nice to build against a released + supported OS), but it fails with "prl_macvm_create[10407:185738] No supported Mac configuration." - I assume there's some extra magic in the package required to allow it to be virtualized, so at least for now, it looks like Monterey+ is the only option for guest VMs.</p>
<p><b>Why no Ansible controller for Windows?</b> (2020-03-18)</p>As Ansible's first full-time hire working with Windows back in 2015, I often get the question "Why can't I run an Ansible controller on Windows?". It's a really good question, and one that I've spent a lot of time thinking about (and advocating for, and prototyping) over the years. There have been statements in our docs and by our core devs in the community that basically amount to "not gonna happen, don't ask", but I think we're overdue for a deeper dive into the challenges. Rather than sprinkling that discussion over a bunch of Github issues and IRC, I'll try to cover the big stuff all at once in this post.<br />
<br />
<h2>
TL;DR</h2>
There are a lot of UNIX-isms deeply baked into most of Ansible that prevent it from working on native Windows at all, and even if we solved every one of them, the likelihood of real-world playbooks executing with 100% fidelity between a *nix controller and a Windows controller is almost zero. If you want to run an Ansible controller on Windows anytime soon, use WSL.<br />
<br />
Okay, if you're still with me, I'll assume you're looking for more detail. I've actually done two internal prototypes of a Windows Ansible controller just to see what broke, and how hard it'd be to address. I'll describe the largest issues, and ways they could potentially be solved. This list is by no means exhaustive, but should hopefully illustrate that the overall effort is a non-trivial problem, as is "fixing" it without potentially breaking a lot of other things in Ansible.<br />
<br />
<h2>
Worker Process Model</h2>
Ansible's controller worker model (as of 2.10) makes heavy use of the <a href="https://en.wikipedia.org/wiki/Fork_(system_call)">POSIX fork() syscall</a>, which effectively clones the controller process for each task+host as a worker process, executes the host-specific action/connection/module code in the cloned worker, marshals the results of the task to the controller, and exits. This is a tried-and-true mechanism for concurrent execution that works effectively the same on all UNIX-flavored hosts, and especially with Python, often yields much better performance than with threads (for reasons I won't go into here). So what's the problem? <b>Windows doesn't have fork()</b>. This means that the entire worker execution subsystem (including connections, actions, modules) is 100% non-functional on Windows as currently implemented. POSIX-compatibility projects like <a href="https://www.cygwin.com/">Cygwin</a> have attempted to implement fork() for Windows, but even after years of really smart people working on it, <a href="https://cygwin.com/cygwin-ug-net/highlights.html#ov-hi-process">they admit that sometimes it just breaks</a>, which implies that it shouldn't be relied on for anything important. WSL takes care of this problem in its new process model by implementing a proper fork(), but that's not Windows-native either (and TMK can only be used by WSL Linux processes).<br />
<br />
So why not have threaded workers as an option? Significant effort has been expended to prototype threaded workers in Ansible, but without pretty major changes to the various plugin APIs to optimize their behavior for Python's <a href="https://medium.com/python-features/pythons-gil-a-hurdle-to-multithreaded-program-d04ad9c1a63">well-documented limitations around threaded execution</a>, acceptable performance and scaling cannot be achieved. The other issue with threaded workers in the main controller (or any shared/reusable worker process model) is that most plugins (including 3rd party plugins not shipped by Ansible) were written to assume that the worker process is both isolated from the controller and ephemeral. Side effects of things commonly done by plugins that are completely innocuous when they're fork-isolated could range from "annoying and weird" to "fatal" when they're happening concurrently in a shared process. This is an area where we've got a lot of ideas to improve the model in the future (and most of them would be Windows-friendly), but doing so while preserving backward-compatibility with existing user-developed plugins will take a great deal of effort.<br />
<br />
<h2>
UNIX-isms in Core Plugins and Executor</h2>
Once the process model problem is solved, the next issue is that much of the Ansible Python module subsystem, core modules, and other parts of the execution engine assume they're running in a POSIX-y environment. Things like POSIX file modes, shebangs, hardcoded forward-slashes on generated paths, assumed presence of POSIX command-line tools/syscalls/TTYs, to name just a few... Many of these items can be addressed with Windows-specific code paths, but it's not a simple task. There are also exceptions that are effectively unresolvable- things like file mode args on common core modules like <span style="font-family: "courier new" , "courier" , monospace;">file</span>, <span style="font-family: "courier new" , "courier" , monospace;">template</span>, and <span style="font-family: "courier new" , "courier" , monospace;">copy</span>. What should a Windows host do when asked to set a UNIX file mode on a Windows filesystem? This is why there are Windows-specific versions of those modules. We try to keep the common arguments and behavior consistent (within reason), but having separate implementations allows us to use the native management technology for the platform (eg Powershell/.NET on Windows, Python on POSIX), and to let the module UIs differ in platform-specific ways where it makes sense. For historical reasons, there are some places where the action/module names are the same for Windows and POSIX, but due to the numerous problems it's caused, we've tried to minimize that as a policy for new work. So effectively, this means that many core POSIX modules could never be fully functional on Windows- it'd be necessary to use the Windows equivalents.<br />
<br />
We sometimes get asked why we don't accept pull requests that fix some of these things, along with a policy to reject future changes that regress the fixes... The main reason is that, without comprehensive sanity/unit/integration tests in CI to ensure no regressions, these kinds of changes rot quickly. Policy without enforcement is not really policy, especially on a project as large as Ansible with as many folks able to approve and merge pull requests as we have. We learned long ago that manual code reviews looking for XYZ policy violations will always let things slip through, so sweeping policy-based changes throughout the codebase must be enforced in an automated fashion. Until the code reaches a point where integration tests can actually run and enforce Windows-specific non-regression checks <b>on Windows</b>, sweeping changes in the name of future Windows support probably aren't going to be accepted.<br />
<br />
<h2>
Content Execution Parity</h2>
Let's say we've eliminated or built Windows equivalents to all the UNIX-isms in the core codebase, and that everything is working. Huzzah! Now we should be able to run all the Ansible content out there on our shiny new Windows-native Ansible controller, and the world is all rainbows and unicorns! Right? Sorry, nope. Even though we've gotten rid of the UNIX-isms in the code, that doesn't address UNIX-isms that exist in the Ansible content that the world runs on. The most obvious issues are around POSIX-flavored plays that use <span style="font-family: "courier new" , "courier" , monospace;">localhost</span> or the <span style="font-family: "courier new" , "courier" , monospace;">local</span> connection plugin, since it's necessary to use platform-specific versions of the modules to deal with things like paths and file modes. But that's not the only issue; content with commonly-used features like the <span style="font-family: "courier new" , "courier" , monospace;">pipe</span> and <span style="font-family: "courier new" , "courier" , monospace;">template</span> filters, glob lookups, <span style="font-family: "courier new" , "courier" , monospace;">become</span> methods like <span style="font-family: "courier new" , "courier" , monospace;">su</span> and <span style="font-family: "courier new" , "courier" , monospace;">sudo</span>, to name a few, will never be able to execute the same way in a native Windows environment. If you're an all-Windows shop, or your Ansible content is developed specifically to run on a Windows controller, maybe that's all fine, but without a lot of guardrails to inform when something unsupported or unworkable is happening, it's a recipe for frustration for the folks that are just trying to automate all the things.<br />
<br />
Honestly, I don't think there's a realistic comprehensive solution to this one. The best we could probably do is to tell you when you're trying to do something that's unsupported on your controller platform, so at least it's obvious that some conditional behavior is necessary in your content if you want to support running on both Windows and POSIX controllers. Maybe part of the solution is also that the implicit <span style="font-family: "courier new" , "courier" , monospace;">localhost</span> for Windows doesn't exist at all, or is called something else (so we won't even try to run POSIX stuff on Windows or vice-versa). That eliminates the need to make most of the POSIX/Python modules (and related subsystem) work on Windows at all, while still allowing the language and controller to work there. Remember: this is only about the behavior of <span style="font-family: "courier new" , "courier" , monospace;">localhost</span> and the <span style="font-family: "courier new" , "courier" , monospace;">local</span> connection plugin- for the majority of tasks where Ansible is managing remote targets of any type, execution parity should be achievable.<br />
<br />
<br />
<h2>
Things That Give Me Hope</h2>
None of these things are insurmountable. But they're also not going to happen the right way without some serious investment. Red Hat is clearly not afraid to invest in Windows where it makes sense; look to the existing Windows target efforts in Ansible, official support for Windows OpenJDK, Windows containers on Openshift, to name a few. Ansible has historically been an easy sell to all-Linux and mixed Linux/Windows groups, but without native Windows controller support, most all-Windows groups tend to stop the conversation pretty early. If you fall into this latter camp, be sure to let your Red Hat salesperson know how many Ansible nodes they're missing out on because we don't support this configuration today.<br />
<br />
All that said, there's never been a better time to run Ansible controllers on Windows. Ansible works great on <a href="https://docs.microsoft.com/en-us/windows/wsl">WSL</a> and <a href="https://docs.microsoft.com/en-us/windows/wsl/wsl2-index">WSL2</a>, and is pretty darn seamless. While that configuration is not capital-S-supported by Red Hat, most of the minor issues we've encountered have been easily addressed. We still tell people to avoid using Ansible under Cygwin, as the previously-mentioned fork unreliability <b>will</b> eventually cause things to break.<br />
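<br />
If you want to give WSL a try, getting a controller running in a stock Ubuntu WSL distro usually takes just a few commands- a minimal sketch using a pip-based install (pywinrm is only needed if you'll be managing Windows targets, and you may need ~/.local/bin on your PATH afterward):<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">sudo apt update && sudo apt install -y python3-pip</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">pip3 install --user ansible pywinrm</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">ansible --version</span><br />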
<br />
As we work on the future of Ansible, we're trying to make sure we eliminate barriers to native Windows controllers, and don't erect any new ones. I'd love to someday announce first-class native Windows Ansible controller support. But it's not something that's going to come easily or quickly.<br />
<br />
<p><b>Testing and modifying GitHub PRs without extra remotes, branches, or clones</b> (2018-08-23)</p>As a popular open-source project, Ansible sees dozens of pull requests (PRs) each day from numerous members of our awesome community. Our CI system puts each one of those PRs through its paces on a litany of hosts and containers, but sometimes that's not enough. During the process of reviewing a PR, we may need to run it locally on a specialized test system, and sometimes we'll need to submit changes to it that should also be run through the CI gantlet before being merged. GitHub made this process a lot easier with the ability to <a href="https://help.github.com/articles/committing-changes-to-a-pull-request-branch-created-from-a-fork/">commit changes to PR branches on forks</a>, but most of the official documentation of the process either requires a whole new clone of the remote repo, or adding remotes or branches to your local repo. That's a lot of extra unnecessary work for ephemeral branches and forks I don't want to keep around.<br />
<br />
It's possible to locally pull down and test a PR, as well as push changes back to the original fork/branch, without messing with any local clones/remotes/branches. This relies on a couple of oft-misunderstood git features: detached HEADs, and the FETCH_HEAD ref. Basically, the process involves fetching the PR branch directly from the remote fork via its URL, then checking out the resultant FETCH_HEAD ref as a detached HEAD (so we don't have to create a local branch either). At that point, we have exactly the commits as they exist on the PR's source branch. This is important, because if we were to use a rebased tree, we could no longer just add commits to the original PR branch. With the original commits, we can make modifications, test things, whatever. Any commits we make are added to the detached HEAD, which we can then push directly back to the PR fork's branch (again by URL), and GitHub will add the new commits to the end of the branch, just as if the original submitter had pushed new commits. All CI and checks on the PR will be triggered as usual, code reviews and comments can happen, etc.- we're still taking full advantage of GitHub's PR feature set (instead of direct-merging the changes back to the main Ansible repo and bypassing all the rest of GitHub and Ansible's pre-merge infrastructure).<br />
<br />
So let's get to it already!<br />
<br />
Let's assume you have an Ansible clone laying around in ~/projects/ansible, and that it's your current working directory...<br />
<br />
Before we can fetch a PR branch, we'll first need to know the source fetch URL and branch. As of this writing, when viewing a PR, it can most easily be found just below the PR title, and looks like "(user) wants to merge (commits) into (target_fork:target_branch) from (source_user:source_branch)". That last part is what we need: the username or org where the source fork lives, and the source branch name it's coming from.<br />
<br />
The source fork fetch URL should be "https://github.com/(user-or-org)/(repo_name).git" to fetch over HTTPS, or "git@github.com:(user-or-org)/(repo-name).git" for SSH. So if the submitter's username is "bob", the project repo is called "ansible", and the source branch name for the PR is "fix_frob", the HTTPS fetch URL would be "https://github.com/bob/ansible.git", and the SSH version would be "git@github.com:bob/ansible.git".<br />
<br />
With these two pieces of information, we can now fetch the PR branch with a command like:<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">git fetch (source fork fetch URL) (source branch)</span><br />
<br />
For our hypothetical example:<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">git fetch https://github.com/bob/ansible.git fix_frob</span><br />
<br />
We now have the necessary objects from the remote sitting locally in a temporary ref called FETCH_HEAD (which is used internally by git for all fetch operations). In order to do something useful with them, we need to check them out into a working copy:<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">git checkout FETCH_HEAD</span><br />
<br />
This checks out the contents of the temporary FETCH_HEAD ref into what's called a "detached HEAD"- it behaves just like a branch checkout in every way, but there's no named branch "handle" for us to refer to it, which means there's nothing we need to worry about cleaning up when we're done!<br />
<br />
At this point, we can do whatever operations we like, just as if it were a normal working copy or branch checkout. If it was just a local test, and there's nothing we need to push back to the source branch, the next checkout of any branch will zap the state, and there's nothing for us to clean up. If we want to keep it around for some reason, it's easy to convert a detached HEAD into a normal branch.<br />
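<br />
For example, if the work turns out to be worth keeping, a single command (with whatever branch name you like- this one is made up) turns the detached HEAD into a normal local branch:<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">git checkout -b keep-this-pr-work</span><br />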
<br />
But maybe there's a small change you want to add to the PR- say the submitter forgot a changelog stub and we just want to get it merged without waiting. GitHub's UI will allow you to make a change to an existing file in a PR as a new commit, but you can't add new files through the UI. No worries- we can use a similar process to push new commits back to the original source branch!<br />
<br />
Make whatever changes are necessary and commit them as normal (as many commits as needed)...<br />
<br />
For our hypothetical example:<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">echo 'bugfixes: ["tweaked the frobdingnager to only frob once"]' > changelogs/fragments/fix_frob.yaml</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">git add changelogs/fragments/fix_frob.yaml</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">git commit -m "added missing changelog fragment"</span><br />
<br />
We could just push our changes up, but remember, we're talking about pushing commits to someone else's repo. It's a neighborly thing to do to verify that we've only included the changes we expect, and that the submitter hasn't added anything more. To do that, we'll use the same command we did originally to refresh the FETCH_HEAD ref with the current contents of the source branch (which are hopefully unchanged):<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">git fetch (source fork fetch URL) (source branch)</span><br />
<br />
so for our example:<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">git fetch https://github.com/bob/ansible.git fix_frob</span><br />
<br />
and then we'll diff our detached HEAD contents that we want to push against the just-updated FETCH_HEAD:<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">git diff FETCH_HEAD HEAD</span><br />
<br />
which should show us only our new commits. If anything else shows up, we've either accidentally committed some unrelated stuff, or new stuff has shown up in the original source branch, and it needs to be reconciled before we push (an exercise left for the reader).<br />
<br />
Assuming all's well and we're ready to push, using the same source repo URL and branch we figured out above, push the changes back to the source repo with a command like:<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">git push (source fork fetch URL) HEAD:(source branch name)</span><br />
<br />
For our example:<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">git push https://github.com/bob/ansible.git HEAD:fix_frob</span><br />
<br />
If all's well, you should be prompted for credentials, then the new commits will be pushed. At this point, you can check out any other branch/ref and work on as normal, or repeat this process for other PRs- no cleanup necessary!<br />
<br />
If you see an error about "failed to push some refs", it usually means the PR owner has changed something on the source branch, and you'll need to reconcile before you push. Force-pushing is almost never the right thing to do- you may potentially overwrite other commits!<br />
<br />
A few other notes:<br />
* Support was later added for pushing over SSH, which makes life much easier if you're using 2FA (of course you are, right?). Pushing over HTTPS with 2FA enabled requires jumping through some extra hoops... You'll have to use a <a href="https://help.github.com/articles/creating-a-personal-access-token-for-the-command-line/">personal access token</a> as your password, since GitHub's 2FA doesn't allow your normal password to be used for command-line authentication.<br />
* Be very careful about merging or rebasing from other branches if you'll be pushing changes back. A rebase will prevent you from pushing altogether (without force-pushing, but don't do that), and a careless merge from your own target branch will add all the intermediate commits since the PR owner last rebased. At least for Ansible, that's a deal-breaker...<br />
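<br />
For reference, here's the entire flow from the hypothetical example above, end to end (substitute your PR's real fork URL and branch name):<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">git fetch https://github.com/bob/ansible.git fix_frob</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">git checkout FETCH_HEAD</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"># ...test, edit, and commit as needed...</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">git fetch https://github.com/bob/ansible.git fix_frob</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">git diff FETCH_HEAD HEAD</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">git push https://github.com/bob/ansible.git HEAD:fix_frob</span><br />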
<br />
Testing and updating PRs without extra remotes, branches or clones using this process saves me a lot of hassle and cleanup- hope it's useful to you!<br />
<br />
<p><b>Manage stock Windows AMIs with Ansible (part 2)</b> (2015-09-03)</p>In <a href="http://blog.rolpdog.com/2015/09/manage-stock-windows-amis-with-ansible.html">part 1</a>, we demonstrated the use of an AWS User Data script to set a known Administrator password, and configure WinRM on a stock Windows AMI. In part 2, we'll use this technique with Ansible to spin up Windows hosts from scratch and put them to work.<br />
<br />
We'll assume that you've got Ansible configured properly for your AWS account (eg, boto installed, IAM credentials set up). See <a href="http://docs.ansible.com/ansible/guide_aws.html">Ansible's AWS Guide</a> if you need help getting this going. The examples in this post were tested against Ansible 2.0 (in alpha as of this writing), however, most of the content is applicable to Ansible 1.9. For simplicity, these samples also assume that you have a functional default VPC in your region (you should, unless you've deleted it). If you need help getting that configured, see <a href="http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/default-vpc.html">Amazon's page on default VPCs</a>.<br />
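<br />
If you're starting from scratch, a minimal setup looks something like this- a pip-based boto install plus environment-variable credentials (the key values below are placeholders; an IAM user's keys in ~/.boto or ~/.aws/credentials work just as well):<br />
<br />
<span style="font-family: Courier New, Courier, monospace;">pip install boto</span><br />
<span style="font-family: Courier New, Courier, monospace;">export AWS_ACCESS_KEY_ID=YOUR_ACCESS_KEY_ID</span><br />
<span style="font-family: Courier New, Courier, monospace;">export AWS_SECRET_ACCESS_KEY=YOUR_SECRET_KEY</span><br />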
<br />
We'll build up the files throughout the post, but a gist with complete file content is available at <a href="https://gist.github.com/nitzmahone/aaf4340ea8d87c7fa578">https://gist.github.com/nitzmahone/aaf4340ea8d87c7fa578</a>.<br />
<br />
First, we'll set up a basic inventory that includes localhost, and define a couple of groups. The hosts we create or connect with in AWS will be added dynamically to the inventory and those groups. Create a file called hosts in your current directory, with the following contents:<br />
<br />
<span style="font-family: Courier New, Courier, monospace;">localhost ansible_connection=local</span><br />
<span style="font-family: Courier New, Courier, monospace;"><br />
</span> <span style="font-family: Courier New, Courier, monospace;">[win]</span><br />
<span style="font-family: Courier New, Courier, monospace;"><br />
</span> <span style="font-family: Courier New, Courier, monospace;">[win:vars]</span><br />
<span style="font-family: Courier New, Courier, monospace;">ansible_connection=winrm</span><br />
<span style="font-family: Courier New, Courier, monospace;">ansible_ssh_port=5986</span><br />
<span style="font-family: Courier New, Courier, monospace;">ansible_ssh_user=Administrator</span><br />
<span style="font-family: Courier New, Courier, monospace;">ansible_ssh_pass={{ win_initial_password }}</span><br />
<br />
Note that we're using a variable in our inventory for the password- in conjunction with a vault, that keeps the password private. We'll set that up next. Create a vault file called <span style="font-family: Courier New, Courier, monospace;">secret.yml</span> in the same directory with your inventory by running:<br />
<br />
<span style="font-family: Courier New, Courier, monospace;">ansible-vault create secret.yml</span><br />
<br />
Assign a strong password to the vault file when prompted, then put the following contents inside it when the editor pops up: [note- the default vault editor is vim- ensure it's installed, or preface the command with EDITOR=(your editor of choice here) to use a different one]:<br />
<br />
<span style="font-family: Courier New, Courier, monospace;">win_initial_password: myTempPassword123!</span><br />
<br />
Save and exit the editor to encrypt the vault file.<br />
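<br />
If you need to view or change the vaulted values later, ansible-vault can reopen the encrypted file in place:<br />
<br />
<span style="font-family: Courier New, Courier, monospace;">ansible-vault view secret.yml</span><br />
<span style="font-family: Courier New, Courier, monospace;">ansible-vault edit secret.yml</span><br />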
<br />
Next, we'll create a template of the User Data script we used in Part 1, so that the initial instance password can be set dynamically. Create a file called userdata.txt.j2 with the following content:<br />
<br />
<span style="font-family: Courier New, Courier, monospace;"><powershell></span><br />
<span style="font-family: Courier New, Courier, monospace;">$admin = [adsi]("WinNT://./administrator, user")</span><br />
<span style="font-family: Courier New, Courier, monospace;">$admin.PSBase.Invoke("SetPassword", "{{ win_initial_password }}")</span><br />
<span style="font-family: Courier New, Courier, monospace;">Invoke-Expression ((New-Object System.Net.Webclient).DownloadString('https://raw.githubusercontent.com/ansible/ansible/devel/examples/scripts/ConfigureRemotingForAnsible.ps1'))</span><br />
<span style="font-family: Courier New, Courier, monospace;"></powershell></span><br />
<br />
Note that we've replaced the hardcoded password from Part 1 with the variable win_initial_password (that's being set in our vault file).<br />
<br />
Finally, we'll create the playbook that will set up our Windows machine. Create a file called win-aws.yml; we'll build our playbook inside.<br />
<br />
Since our first play will be talking only to AWS (from our control machine), it only needs to target localhost, and we don't need to gather facts, so we can shut that off. We'll set a play-level var for the AWS region, and load the passwords from secret.yml. The first task looks up an Amazon-owned AMI named for the OS we want to run. The version number changes frequently, and old images are often retired, so we'll wildcard that part of the name, and sort descending so that the first image in the list should be the newest. Thankfully, Amazon pads these version numbers to two digits, so an ASCII sort works here. We want the module to fail if no images are found. Last, we'll register the output from the module to a var named found_amis, so we can refer to it later. Place the following content in win-aws.yml:<br />
<br />
<span style="font-family: Courier New, Courier, monospace;">- name: infrastructure setup</span><br />
<span style="font-family: Courier New, Courier, monospace;"> hosts: localhost</span><br />
<span style="font-family: Courier New, Courier, monospace;"> gather_facts: no</span><br />
<span style="font-family: Courier New, Courier, monospace;"> vars:</span><br />
<span style="font-family: Courier New, Courier, monospace;"> target_aws_region: us-west-2</span><br />
<span style="font-family: Courier New, Courier, monospace;"> vars_files:</span><br />
<span style="font-family: Courier New, Courier, monospace;"> - secret.yml</span><br />
<span style="font-family: Courier New, Courier, monospace;"> tasks:</span><br />
<span style="font-family: Courier New, Courier, monospace;"> - name: find current Windows AMI in this region</span><br />
<span style="font-family: Courier New, Courier, monospace;"> ec2_ami_find:</span><br />
<span style="font-family: Courier New, Courier, monospace;"> region: "{{ target_aws_region }}"</span><br />
<span style="font-family: Courier New, Courier, monospace;"> platform: windows</span><br />
<span style="font-family: Courier New, Courier, monospace;"> virtualization_type: hvm</span><br />
<span style="font-family: Courier New, Courier, monospace;"> owner: amazon</span><br />
<span style="font-family: Courier New, Courier, monospace;"> name: Windows_Server-2012-R2_RTM-English-64Bit-Base-*</span><br />
<span style="font-family: Courier New, Courier, monospace;"> no_result_action: fail</span><br />
<span style="font-family: Courier New, Courier, monospace;"> sort: name</span><br />
<span style="font-family: Courier New, Courier, monospace;"> sort_order: descending</span><br />
<span style="font-family: Courier New, Courier, monospace;"> register: found_amis</span><br />
<span style="font-family: Courier New, Courier, monospace;"><br />
</span> <span style="font-family: inherit;">Next, we'll take the first found AMI result and set its ami_id value into a var called win_ami_id:</span><br />
<span style="font-family: Courier New, Courier, monospace;"><br />
</span> <span style="font-family: Courier New, Courier, monospace;"> - set_fact:</span><br />
<span style="font-family: Courier New, Courier, monospace;"> win_ami_id: "{{ (found_amis.results | first).ami_id }}"</span><br />
<div><br />
</div><div>Before we can fire up our instance, we'll need to ensure that there's a security group we can use to access it (in the default VPC, in this case). The group allows inbound access on port 80 for the web app we'll set up later, port 5986 for WinRM over https, and port 3389 for RDP in case we need to log in and poke around interactively. Again, we'll register the output to a var called sg_out so we can get its ID:</div><div><br />
</div><div><div><span style="font-family: Courier New, Courier, monospace;"> - name: ensure security group is present</span></div><div><span style="font-family: Courier New, Courier, monospace;"> ec2_group:</span></div><div><span style="font-family: Courier New, Courier, monospace;"> name: WinRM RDP</span></div><div><span style="font-family: Courier New, Courier, monospace;"> description: Inbound WinRM and RDP</span></div><div><span style="font-family: Courier New, Courier, monospace;"> region: "{{ target_aws_region }}"</span></div><div><span style="font-family: Courier New, Courier, monospace;"> rules:</span></div><div><div><span style="font-family: Courier New, Courier, monospace;"> - proto: tcp</span></div><div><span style="font-family: Courier New, Courier, monospace;"> from_port: 80</span></div><div><span style="font-family: Courier New, Courier, monospace;"> to_port: 80</span></div><div><span style="font-family: Courier New, Courier, monospace;"> cidr_ip: 0.0.0.0/0</span></div><div><span style="font-family: Courier New, Courier, monospace;"> - proto: tcp</span></div><div><span style="font-family: Courier New, Courier, monospace;"> from_port: 5986</span></div><div><span style="font-family: Courier New, Courier, monospace;"> to_port: 5986</span></div><div><span style="font-family: Courier New, Courier, monospace;"> cidr_ip: 0.0.0.0/0</span></div><div><span style="font-family: Courier New, Courier, monospace;"> - proto: tcp</span></div><div><span style="font-family: Courier New, Courier, monospace;"> from_port: 3389</span></div><div><span style="font-family: Courier New, Courier, monospace;"> to_port: 3389</span></div><div><span style="font-family: Courier New, Courier, monospace;"> cidr_ip: 0.0.0.0/0</span></div><div><div><span style="font-family: Courier New, Courier, monospace;"> rules_egress:</span></div><div><span style="font-family: Courier New, Courier, monospace;"> - proto: -1</span></div><div><span style="font-family: Courier New, Courier, monospace;"> cidr_ip: 0.0.0.0/0</span></div><span style="font-family: Courier New, Courier, monospace;"> </span></div><div><span style="font-family: Courier New, Courier, monospace;"> register: sg_out</span></div></div><div><br />
</div><div>Now that we know the image and security group IDs, we have everything we need to ensure that we have an instance in the default VPC:</div><div><br />
</div><div><div><span style="font-family: Courier New, Courier, monospace;"> - name: ensure instances are running</span></div><div><span style="font-family: Courier New, Courier, monospace;"> ec2:</span></div><div><span style="font-family: Courier New, Courier, monospace;"> region: "{{ target_aws_region }}"</span></div><div><span style="font-family: Courier New, Courier, monospace;"> image: "{{ win_ami_id }}"</span></div><div><span style="font-family: Courier New, Courier, monospace;"> instance_type: t2.micro</span></div><div><span style="font-family: Courier New, Courier, monospace;"> group_id: [ "{{ sg_out.group_id }}" ]</span></div><div><span style="font-family: Courier New, Courier, monospace;"> wait: yes</span></div><div><span style="font-family: Courier New, Courier, monospace;"> wait_timeout: 500</span></div><div><span style="font-family: Courier New, Courier, monospace;"> exact_count: 1</span></div><div><span style="font-family: Courier New, Courier, monospace;"> count_tag:</span></div><div><span style="font-family: Courier New, Courier, monospace;"> Name: stock-win-ami-test</span></div><div><span style="font-family: Courier New, Courier, monospace;"> instance_tags:</span></div><div><span style="font-family: Courier New, Courier, monospace;"> Name: stock-win-ami-test</span></div><div><span style="font-family: Courier New, Courier, monospace;"> user_data: "{{ lookup('template', 'userdata.txt.j2') }}"</span></div><div><span style="font-family: Courier New, Courier, monospace;"> register: ec2_result</span></div></div><div><br />
</div><div>We're just passing through the target_aws_region var we set earlier, as well as the win_ami_id we looked up. From the sg_out variable that contains the output from the security group module, we pull out just the group_id value and pass that as the instance's security group. For our sample, we just want one instance to exist, so we ask for an exact_count of 1, which is enforced by the count_tag arg finding instances with the Name tag set to "stock-win-ami-test". Finally, we use an inline template render to substitute the password into our User Data script template and pass it directly to the user_data arg; that will cause our instance to set up WinRM and reset the admin password on initial bootup. Once again, we register the output to the ec2_result var, as we'll need it later to add the EC2 hosts to inventory. Once this task has run, we need some way to ensure that the instances have booted, and that WinRM is answering (which can take some time). The easiest way is to use the wait_for action, against the WinRM port:</div><div><br />
</div><div><div><span style="font-family: Courier New, Courier, monospace;"> - name: wait for WinRM to answer on all hosts</span></div><div><span style="font-family: Courier New, Courier, monospace;"> wait_for:</span></div><div><span style="font-family: Courier New, Courier, monospace;"> port: 5986</span></div><div><span style="font-family: Courier New, Courier, monospace;"> host: "{{ item.public_ip }}"</span></div><div><span style="font-family: Courier New, Courier, monospace;"> timeout: 300</span></div><div><span style="font-family: Courier New, Courier, monospace;"> with_items: ec2_result.tagged_instances</span></div></div><div><br />
</div><div>This task will return immediately if the instance is already answering on the WinRM port, and if not, poll it for up to 300 seconds before giving up and failing. Our next step will consume the output from the ec2 task to add the host to our inventory dynamically:</div><div><br />
</div><div><div><span style="font-family: Courier New, Courier, monospace;"> - name: add hosts to groups</span></div><div><span style="font-family: Courier New, Courier, monospace;"> add_host:</span></div><div><span style="font-family: Courier New, Courier, monospace;"> name: win-temp-{{ item.id }}</span></div><div><span style="font-family: Courier New, Courier, monospace;"> ansible_ssh_host: "{{ item.public_ip }}"</span></div><div><span style="font-family: Courier New, Courier, monospace;"> groups: win</span></div><div><span style="font-family: Courier New, Courier, monospace;"> with_items:</span><span style="font-family: 'Courier New', Courier, monospace;"> ec2_result.tagged_instances</span></div></div><br />
This task loops over all the instances that matched the tags we passed (whether they were created or pre-existing) and adds them to our in-memory inventory, placing them in the win group (that we defined statically in the inventory earlier). This allows us to use the group vars we set on the win group with all the WinRM connection details, so the only values we have to supply are the host's name and its IP address (via ansible_ssh_host, so WinRM knows how to reach it). Once this task completes, we have fully-functional Windows instances that we can immediately target in another play in the same playbook (for instance, to do common configuration tasks, like resetting the password), or we could use a separate playbook run later against an ec2 dynamic inventory that targets these hosts. Let's do the former; we'll install IIS and configure a simple Hello World web app. First, let's create a web page that we'll copy over. Create a file called default.aspx with the following content:<br />
<br />
<span style="font-family: Courier New, Courier, monospace;">Hello from <%= Environment.MachineName %> at <%= DateTime.UtcNow %></span><br />
<br />
Next, add the following play to the end of the playbook we've been working with:<br />
<br />
<span style="font-family: Courier New, Courier, monospace;">- name: web app setup</span><br />
<span style="font-family: Courier New, Courier, monospace;"> hosts: win</span><br />
<span style="font-family: Courier New, Courier, monospace;"> vars_files: [ "secret.yml" ]</span><br />
<span style="font-family: Courier New, Courier, monospace;"> tasks:</span><br />
<span style="font-family: Courier New, Courier, monospace;"> - name: ensure IIS and ASP.NET are installed</span><br />
<span style="font-family: Courier New, Courier, monospace;"> win_feature:</span><br />
<span style="font-family: Courier New, Courier, monospace;"> name: AS-Web-Support</span><br />
<span style="font-family: Courier New, Courier, monospace;"><br />
</span> <span style="font-family: Courier New, Courier, monospace;"> - name: ensure application dir exists</span><br />
<span style="font-family: Courier New, Courier, monospace;"> win_file:</span><br />
<span style="font-family: Courier New, Courier, monospace;"> path: c:\inetpub\foo</span><br />
<span style="font-family: Courier New, Courier, monospace;"> state: directory</span><br />
<span style="font-family: Courier New, Courier, monospace;"><br />
</span> <span style="font-family: Courier New, Courier, monospace;"> - name: ensure default.aspx is present</span><br />
<span style="font-family: Courier New, Courier, monospace;"> win_copy:</span><br />
<span style="font-family: Courier New, Courier, monospace;"> src: default.aspx</span><br />
<span style="font-family: Courier New, Courier, monospace;"> dest: c:\inetpub\foo\default.aspx</span><br />
<span style="font-family: Courier New, Courier, monospace;"><br />
</span> <span style="font-family: Courier New, Courier, monospace;"> - name: ensure that the foo web application exists</span><br />
<span style="font-family: Courier New, Courier, monospace;"> win_iis_webapplication:</span><br />
<span style="font-family: Courier New, Courier, monospace;"> name: foo</span><br />
<span style="font-family: Courier New, Courier, monospace;"> physical_path: c:\inetpub\foo</span><br />
<span style="font-family: Courier New, Courier, monospace;"> site: Default Web Site</span><br />
<span style="font-family: Courier New, Courier, monospace;"><br />
</span> <span style="font-family: Courier New, Courier, monospace;"> - name: ensure that application responds properly</span><br />
<span style="font-family: Courier New, Courier, monospace;"> uri:</span><br />
<span style="font-family: Courier New, Courier, monospace;"> url: http://{{ ansible_ssh_host}}/foo</span><br />
<span style="font-family: Courier New, Courier, monospace;"> return_content: yes</span><br />
<span style="font-family: Courier New, Courier, monospace;"> register: uri_out</span><br />
<span style="font-family: Courier New, Courier, monospace;"> delegate_to: localhost</span><br />
<span style="font-family: Courier New, Courier, monospace;"> until: uri_out.content | search("Hello from")</span><br />
<span style="font-family: Courier New, Courier, monospace;"> retries: 3</span><br />
<span style="font-family: Courier New, Courier, monospace;"><br />
</span> <span style="font-family: Courier New, Courier, monospace;"> - debug:</span><br />
<span style="font-family: Courier New, Courier, monospace;"> msg: web application is available at </span><span style="font-family: 'Courier New', Courier, monospace;">http://{{ ansible_ssh_host}}/foo</span><br />
<br />
This play targets the win group with the dynamic hosts we just added to it. We pull in our secrets file again (as the inventory will always need the password value inside). The play ensures that IIS and ASP.NET are installed with the win_feature module, creates a directory for the web application with win_file, copies the application content into that directory with win_copy, and ensures that the web application is created in IIS. Finally, we delegate a uri task to the local Ansible runner, and have it make up to 3 requests to the foo application, looking for the content that should be there.<br />
<br />
At this point, we've got a complete playbook that will idempotently stand up a Windows machine in AWS with a stock AMI, then configure and test a simple web application. To run it, just tell ansible-playbook where to get its inventory, what to run, and that you'll need to specify a vault password, like:<br />
<br />
<span style="font-family: Courier New, Courier, monospace;">ansible-playbook -i hosts win-aws.yml --ask-vault-pass</span><br />
<br />
After supplying your vault password, the playbook should run to completion, at which point you should be able to access the web application via http://(your AWS host IP)/foo/.<br />
<br />
We've shown that it's pretty easy to use Ansible to provision Windows instances in AWS without needing custom AMIs. These techniques can be expanded to set up and deploy most any application with Ansible's growing Windows support. Give it a try for your code today! Happy automating...<br />
<br />
<br />
</div>
<p><b>Manage stock Windows AMIs with Ansible (part 1)</b> (2015-09-03)</p>Ever wished you could just spin up a stock Windows AMI and manage it with Ansible directly? Linux AMIs usually have SSH enabled and private key support configured at first boot, but most stock Windows images don't have WinRM configured, and the administrator passwords are randomly assigned and only accessible via APIs several minutes post-boot. People go to some pretty awful lengths to get plug-and-play Windows instances working with Ansible under AWS, but the most common solution seems to be building a derivative AMI from an instance with WinRM pre-configured and a hard-coded Administrator password. This isn't too hard to do once, but between Amazon's frequent base AMI updates, and the need to repeat the process in multiple regions, it can quickly turn into an ongoing hassle.<br />
<br />
Enter User Data. If you're not familiar with it, you're not alone. It's a somewhat obscure option buried in the Advanced area of the AWS instance launch UI. It can be used for many different purposes; much of the AWS documentation treats it as a mega-tag that can hold up to 16k of arbitrary data, accessible only from inside the instance. Less well-known is that scripts embedded in User Data will be <a href="http://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/UsingConfig_WinAMI.html#user-data-execution">executed by the EC2 Config Windows service</a> near the end of the first boot. This allows a small degree of first-boot customization on a vanilla instance, including setting up WinRM and changing the administrator password; once those two items are completed, the instance is manageable with Ansible immediately!<br />
<br />
We'll build up the files throughout the post, but a gist with complete file content is available at <a href="https://gist.github.com/nitzmahone/4271319ab8e7acf3330c">https://gist.github.com/nitzmahone/4271319ab8e7acf3330c</a>.<br />
<br />
Scripts can be embedded in User Data by wrapping them in <powershell> tags (or <script> tags for Windows batch scripts)- in this case, we'll stick to Powershell. The following User Data script will set the local Administrator password to a known value, then download and run a script hosted in Ansible's GitHub repo to auto-configure WinRM:<br />
<br />
<br />
<span style="font-size: x-small;"><span style="font-family: Courier New, Courier, monospace;"><powershell><br />
</span><span style="font-family: Courier New, Courier, monospace;">$admin = [adsi]("WinNT://./administrator, user")<br />
</span><span style="font-family: Courier New, Courier, monospace;">$admin.PSBase.Invoke("SetPassword", "myTempPassword123!")</span></span><br />
<span style="font-family: 'Courier New', Courier, monospace; font-size: x-small;">Invoke-Expression ((New-Object System.Net.Webclient).DownloadString('https://raw.githubusercontent.com/ansible/ansible/devel/examples/scripts/ConfigureRemotingForAnsible.ps1'))</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"></powershell></span><br />
<div>
<br />
<br /></div>
A word of caution: User Data is accessible via http from inside the instance without any authentication. While the following technique will get your instances quickly accessible from Ansible, <b>DO NOT</b> use a sensitive password (eg, your master domain admin password), as it will be visible as long as the User Data exists, and User Data requires an instance stop/start cycle to modify. Anyone/anything inside your instance that can make an http request to an arbitrary host can see the password you set with this technique. A good practice is to have one of your first Ansible tasks against your new instance change the password to a different value. Another thing to keep in mind is that the default Windows password policy is usually enabled, so the passwords you choose need to satisfy its complexity requirements.<br />
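<br />
As a sketch of that first-task password rotation (using the win group and inventory we'll define below, and an example replacement password- remember to update ansible_ssh_pass to match afterward), an ad-hoc win_user call is all it takes:<br />
<br />
<span style="font-family: Courier New, Courier, monospace;">ansible win -i hosts -m win_user -a 'name=Administrator password=aBetterPassword456!'</span><br />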
<br />
Before we get to the Holy Grail of actually using Ansible to spin up Windows instances using this technique, let's just try it manually from the AWS Console first. Click Launch Instance, and select a Windows image, then under Configure Instance Details, expand Advanced Details at the bottom to see the User Data textbox.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgwk6N3Y0aQHGM-qMuhfSVudvEgmELY00s22_vlrm9RDmFCo8ITk-5fI_lLlTzm0gCltcP3zezocuT5Gx81WY-u7QqirP_3hcIgC2pTwzyrtr9kVPg0KpDadldd-1A6-Kyg2QQuxT4g2dE/s1600/Screen+Shot+2015-09-01+at+11.39.33+AM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="275" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgwk6N3Y0aQHGM-qMuhfSVudvEgmELY00s22_vlrm9RDmFCo8ITk-5fI_lLlTzm0gCltcP3zezocuT5Gx81WY-u7QqirP_3hcIgC2pTwzyrtr9kVPg0KpDadldd-1A6-Kyg2QQuxT4g2dE/s320/Screen+Shot+2015-09-01+at+11.39.33+AM.png" width="320" /></a></div>
<br />
<br />
Paste the script above into the textbox, then click through to Configure Security Group, and ensure that TCP ports 3389 and 5986 are open for all IPs. Continue to Review and Launch, select your private key (which doesn't make any difference now, since you know the admin password), and wait for the instance to launch. If all's well, after the instance has booted you should be able to reach RDP on port 3389, and WinRM on port 5986 with Ansible (both protocols using the Administrator password set by the script). It can often take several minutes for Windows instances set up this way to begin responding, so be patient!<br />
<br />
Let's test this using the win_ping module with a dirt simple inventory. Create a file called hosts with the following contents:<br />
<br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;">aws-win-host ansible_ssh_host=(your aws host public IP here)</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"><br /></span>
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;">[win]</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;">aws-win-host</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"><br /></span>
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;">[win:vars]</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;">ansible_connection=winrm</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;">ansible_ssh_port=5986</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;">ansible_ssh_user=Administrator</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;">ansible_ssh_pass=myTempPassword123!</span><br />
<br />
then run the win_ping module using Ansible, referencing this inventory file:<br />
<br />
<span style="font-family: Courier New, Courier, monospace;">ansible win -i hosts -m win_ping</span><br />
<br />
If all's well, you should see the ping response, and your AWS Windows host is fully manageable by Ansible without using a custom AMI!<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhbEQzXA8PIYPNrjEz53D3itPBBptEv-SQwKcegmROmbOAiZTTGM4gmMSS55vkiS3ESmRxnyntYX_RVJ-DYybgowlvv8d0KMmL68fJWCWjQrUwch-C2R6xX30Dy6nt2lnRVGVD6he1kK0M/s1600/Screen+Shot+2015-09-01+at+11.58.23+AM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="241" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhbEQzXA8PIYPNrjEz53D3itPBBptEv-SQwKcegmROmbOAiZTTGM4gmMSS55vkiS3ESmRxnyntYX_RVJ-DYybgowlvv8d0KMmL68fJWCWjQrUwch-C2R6xX30Dy6nt2lnRVGVD6he1kK0M/s320/Screen+Shot+2015-09-01+at+11.58.23+AM.png" width="320" /></a></div>
<br />
In <a href="http://blog.rolpdog.com/2015/09/manage-stock-windows-amis-with-ansible_3.html">part 2</a>, we'll show an end-to-end example of using Ansible to provision Windows AWS instances.<br />
<br />Unknownnoreply@blogger.com3tag:blogger.com,1999:blog-1782534239117433762.post-14516112575745066082012-11-19T16:28:00.000-08:002012-11-19T16:30:04.257-08:00DHCP Failover Breaks with Custom OptionsI was really itching to try out the new DHCP Failover goodies in Windows Server 2012. I ran into a couple weird issues when trying to configure it- hopefully I can save someone else the trouble. <br />
<br />
When I tried to create the partner server relationship and configure failover, I'd get the following error: Configure failover failed. Error: 20010. The specified option does not exist.<br />
<br />
We have a few custom scope options defined for our IP phones. Apparently, the failover setup won't propagate custom option definitions to the partner server during the partner relationship setup- you have to copy them over manually. I haven't found this step or error message documented anywhere in the context of failover configuration.<br />
<br />
Since we only had one custom option, and I knew what it was, I just manually added it. If you don't know which options are custom and need to be copied over, it's not hard to figure out. In the DHCP snap-in on the primary server, right-click the IPv4 container and choose Set Predefined Options, then scroll through values in the Option Name dropdown with the keyboard arrows or mouse wheel until you see the Delete button light up (that'll be a custom value). Hit Edit and copy the values down, then in the same place on the partner server, hit Add and poke in the custom values. If you have lots of custom options, you can use netsh dhcp or PowerShell to get/set the custom option config.<br />
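If you'd rather script that copy than click through the snap-in, here's a hedged PowerShell sketch using the DHCP Server cmdlets that ship with Server 2012 (the option ID, name, and type below are placeholders- substitute whatever your phones actually use):<br />
<pre>
# On the existing server: list the option definitions so the custom ones are easy to spot
Get-DhcpServerv4OptionDefinition | Format-Table OptionId, Name, Type, Description

# On the partner server: recreate each custom definition before running Configure Failover
Add-DhcpServerv4OptionDefinition -OptionId 156 -Name "IP Phone Config" -Type String -Description "Custom option for IP phones"
</pre>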
<br />
Once the same set of custom options exist on both servers, you can do Configure Failover as normal on the scopes and it should work fine. The values of any custom options defined under the scopes will sync up just fine.<br />
<br />
I also had one scope where Configure Failover wasn't an option. I had imported all my scopes from a 2003 DC awhile back, so I'm guessing there was something else corrupted in the scope config- just deleting and recreating the scope fixed the problem (it was on a rarely used network, so no big deal; YMMV). <br />
<br />
Hope this helps!Unknownnoreply@blogger.com17tag:blogger.com,1999:blog-1782534239117433762.post-76611043941485165582012-03-02T16:18:00.000-08:002012-12-29T23:52:59.938-08:00Enabling AHCI/RAID on Windows 8 after installation<b>UPDATE:</b> MS has recently published a <a href="http://support.microsoft.com/kb/2751461">KB article</a> on a simpler way to address this. Thanks to commenter Keymapper for the heads up!<br />
<br />
Been playing around with Windows 8 Consumer Preview and Windows 8 Server recently. After installing, I needed to enable RAID mode (Intel ICH9R) on one of the machines that was incorrectly configured for legacy IDE mode (why is this the default BIOS setting Dell?). In Win7, you would just ensure that the Start value for the Intel AHCI/RAID driver is set to 0 in the registry, then flip the switch in the BIOS, and all's well. Under Win8 though, you still end up with the dreaded INACCESSIBLE_BOOT_DEVICE. The answer is simple enough: turns out they've added a new registry key underneath the driver you'll need to tweak: StartOverride. I just deleted the entire key, but if you're paranoid, you can probably just set the named value 0 to "0". <br />
<br />
So, the full process:<br />
<br />
- Enable the driver you need <b>before</b> changing the RAID mode setting in the BIOS:<br />
(for Intel stuff, the driver name is usually iaStorV or iaStorSV, others may use storahci)<br />
-- Delete the entire StartOverride key (or tweak the value) under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\(DriverNameHere) (see the example command after this list)<br />
- Reboot to BIOS setup<br />
- Enable AHCI or RAID mode<br />
- Profit!<br />
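For example, here's what the registry step looks like for the Intel driver from an elevated command prompt (just a sketch- substitute the driver name that matches your controller; the export line is only there to give you an easy way back):<br />
<pre>
reg export "HKLM\SYSTEM\CurrentControlSet\Services\iaStorV\StartOverride" StartOverride-backup.reg
reg delete "HKLM\SYSTEM\CurrentControlSet\Services\iaStorV\StartOverride" /f
</pre>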
<br />
Unknownnoreply@blogger.com13tag:blogger.com,1999:blog-1782534239117433762.post-12316932407674585942012-01-09T14:29:00.000-08:002012-01-09T14:29:55.623-08:00Windows installation fails with missing CD/DVD driverI was recently upgrading one of our large storage servers to a newer version of Windows and came across a really strange error during setup: "A required CD/DVD drive device driver is missing". This was really odd, since I've installed with no problems to this same class of hardware many times (and even this exact piece of hardware). Even stranger, the error popped up regardless if I was using the real DVD drive in the machine or our KVM's "virtual media" feature (which looks like a USB-connected DVD drive to Windows). After lots of searching and trying various things, I remembered what was different about this machine: it had about 30 storage LUNs on it for all the various disks. Hitting the "Browse..." button in the driver select dialog confirmed the problem- Windows was helpfully automounting all the LUNs and assigning drive letters to them. Since there are more than 26, it ran out of drive letters before getting around to mounting the DVD drive. Setup assumes if it can't find the DVD drive, it must be a driver problem, hence the misleading error message. I always disable automount on our storage servers anyway (since we mount all those storage LUNs under NTFS folders, not drive letters), but you can't do that for setup without altering the boot image. The solution in this case was to hit Shift-F10 from the setup dialog to get a command prompt, then use diskpart to unassign D: from a storage LUN and reassign it to the DVD drive.
<pre>
list vol
select vol X
</pre>
(where X is the volume number in the list with D: assigned)
<pre>
remove
</pre>
(removes the drive letter)
<pre>
select vol Y
</pre>
(where Y is the volume number in the list that is your Windows Setup DVD)
<pre>
assign letter=D
</pre>
Once the setup DVD has a drive letter, you can close the command prompt and proceed with setup normally.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-1782534239117433762.post-8044530099759427872011-07-25T14:53:00.000-07:002011-07-25T22:48:15.407-07:00Dynamic client-side UI with Script#<p>At Earth Class Mail, we've just recently shipped a client-side UI written 100% in Script#, and we thought people might be interested in the process we used to get there. This post is just an overview of what we did- we'll supplement with more details in future posts. I should throw out some props to the team before I get too far- while my blog ended up being the home for the results, Matt Clay was the one behind most of the cool stuff that happened with this. He recognized early on that Script# represented a new way for us to do things, and drove most of the tooling I describe below. Cliff Bowman was the first to consume all the new tooling and came up with a lot of ideas on how to improve it.</p>
<h3>Script# Background</h3>
<p>If you haven't run across Script# before, I'm not surprised. It's a tool built by Nikhil Kothari that allows C# to compile down to Javascript. Script# has been around since at least 2007 (probably longer), but has only recently started to be really convenient to use. It lets you take advantage of many of the benefits of developing in a strongly-typed language like C# (compile-time type-checking, method signature checks, high-level refactoring) while still generating very minimal JS that looks like what you wrote originally. If you’d like more background, Matt and I <a href="http://www.hanselminutes.com/default.aspx?showID=296">talked about it recently with Scott Hanselman</a>. You can also visit Nikhil's <a href="http://projects.nikhilk.net/ScriptSharp">Script# project home</a>.</p>
<h3>Import Libraries</h3>
<p>The first thing we had to do was write a couple of import libraries for JS libraries we wanted to use that weren't already in the box. Import libraries are somewhat akin to C header files that describe the shape (objects, methods, etc) of the code in the JS library so the C# compiler has something to verify your calls against. The import library does *not* generate any JS at compile-time, it's merely there to help out the C# compiler (as well as everything that goes with that- e.g., Intellisense, "Find References"). Script# ships with a jQuery import library, which hit the majority of what we needed. Since we'd previously decided to use some other libraries that didn't already have Script# import libraries (jQuery Mobile, DateBox, JSLinq), we had to whip out import libraries for them- at least for the objects and functions we needed to call from C#. This was a pretty straightforward process- only took a couple of hours to get what we needed. </p>
<h3>Consuming WCF Services</h3>
<p>The next challenge was to get jQuery calling our WCF JSON services. Our old JS UI had a hand-rolled service client that we'd have to keep in sync with all our operations and contracts- maintenance fail! Since our service contracts were already defined in C#, we initially tried to just "code-share" them into the Script# projects, but that proved problematic for a few reasons. First, the contracts were decorated with various attributes that WCF needs. Since Script# isn't aware of WCF, these attributes would need to be redeclared or #ifdef'd away to make the code compilable by Script#. It ended up not mattering, though, since our service contract signatures weren't directly usable either. Because XmlHttpRequest and its ilk are async APIs, our generated service client would have to supply continuation parameters to every service call (ie, onSuccess, onError, onCancel), which would render the operation signatures incompatible anyway. Our options were to redeclare the service contract (and impl) using the native async pattern (so they'd have a continuation parameter declared) or to code-gen an async version of the contract interface for the client. We opted for the code-generation approach, as it allowed for various other niceties (e.g., unified eventing/error-handling, client-side partial members that aren't echoed back to the server), and settled on a hybrid C#/T4 code generator that gets wired into our build process. Now we have fully type-checked access to our service layer, and with some other optimizations that we'll talk about later, only a minimal amount of JS gets generated to support it.</p>
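<p>To make the shape of that generated client a little more concrete, here's a purely illustrative sketch- the type and member names below are hypothetical, not our actual contracts or generator output:</p>
<pre>
using System;

// Hand-written WCF contract (WCF attributes omitted)
public interface IMailboxService
{
    MailPiece[] GetMailPieces(int mailboxId);
}

// Continuation-style interface a code generator could emit for the Script# client
public interface IMailboxServiceAsync
{
    void GetMailPieces(int mailboxId, Action<MailPiece[]> onSuccess, Action<ServiceError> onError);
}

// Minimal stand-in types so the sketch compiles
public class MailPiece { public int Id; public string Subject; }
public class ServiceError { public string Message; }
</pre>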
<h3>jQuery templating</h3>
<p>The next challenge was using the new jQuery template syntax. This is a pretty slick new addition to jQuery 1.5 that allows for rudimentary declarative client-side databinding. It works by generating JS at runtime from the templated markup file (very simple regex-based replacements of template tags with JS code)- the templates can reference a binding context object that allows the object's current value to appear when the markup is rendered. While it worked just fine in our new "all-managed" build environment, we had a couple of things we didn't like. The first was that template problems (unbalanced ifs, mis-spelled/missing members, etc) can't be discovered until runtime, when it's either a script error (at best) or a totally blank unrenderable UI (at worst). It also means that managed code niceties like "Find All References" won't behave correctly against code in templates, since they don't target Script#. We decided to make something with similar syntax and mechanics, but that runs at build-time and dumps out Script# code instead of JS. This way, "Find All References" still does the right thing, and we get compile-time checking of the code emitted by the templates. Just like other familiar tools, our template compiler creates a ".designer.cs" file that contains all the code and markup, which is then compiled by the Script# compiler into JS. We get a few added benefits from this approach as well. The code isn't being generated at runtime (as it is with jQuery templates), so we can detect template errors at compile-time. We were also able to add some new template tags for declarative formatting, control ID generation, and shortcutting resource access.</p>
<h3>Resourcing</h3>
<p>Next, we wanted to consume resources from .resx files using the same C# class hierarchy available in native .NET apps. Even though Script# has a little bit of resource stuff built in, Matt whipped up a simple build-time T4-based resx-to-Script# code generator that also added a few niceties (automatic enum-string mapping, extra validation). </p>
<h3>Visual Studio/Build integration</h3>
<p>Currently, all this stuff is wired up through pre/post-build events in Visual Studio, and some custom MSBuild work. We're looking at ways we could get it a little more tightly integrated, as well as having it work as a "custom tool" in VS2010 to allow save-time generation of some of the code rather than build-time.</p>
<h3>Summary</h3>
<p>Combining Script# with a bit of custom tooling allows for a new way of writing tight client-side JS that looks a lot more like ASP.NET or Winforms development, and offers many of the same conveniences. Nobody on our team wants to write naked JS any more- to the point that we're actually working on tooling to convert our existing JS codebase to Script#, so we can start to clean it up and make changes with all the C# features we've come to expect from client-side development. Obviously, some manual work will still be required to get everything moved over, but our team really believes that this is the wave of the future.</p>Unknownnoreply@blogger.com3tag:blogger.com,1999:blog-1782534239117433762.post-56204634292806980162011-07-23T13:02:00.000-07:002011-07-25T22:52:13.732-07:00The Road to Script#<p>At work, we just shipped our first major new chunk of UI in a couple of years, written 100% in Script#. We've been watching Script# for a few years as an interesting option for creating client-side UI, and it recently hit a level of functionality where we felt it was workable. It also coincided nicely with our need for a mobile UI (a standalone product that we could roll out slowly, low-risk compared to changing our bread-and-butter desktop UI).</p>
<h3>A little history</h3>
<p>When we first started working on a .NET rewrite of the LAMP-based Earth Class Mail (aka Remote Control Mail) in 2007, the client-side revolution was in full force. All the cool kids were eschewing server-side full page postback UI in favor of Javascript client UI. We recognized this from the start, but also had very tight shipping timelines to work under. ComponentArt had some nifty-looking products that promised the best of both worlds- server-side logic with client-side templating, data-binding, and generated Javascript. This fit perfectly with our team's server-side skillset (we didn't have any JS ninjas at the time), so we jumped on it. While we were able to put together a mostly client-side UI in a matter of months, it really didn't live up to the promise. The generated JS and ViewState was very heavy, causing the app to feel slow (Pogue and other reviewers universally complained about it). Also, the interaction between controls on the client was very brittle. Small, seemingly unimportant changes (like the order in which the controls were declared) made the difference between working code and script errors in the morass of generated JS. At the end of the day, we shipped pretty much on time, but the result was less than stellar, and we'd already started making plans for a rewrite.</p>
<h3>V2: all JS</h3>
<p>Fast-forward to summer of 2009, when we shipped a 25k+ line all-JS jQuery client UI rewrite (which also included an all-new WCF webHttpBinding/JSON service layer). While it was originally planned to be testable via Selenium, JSUnit, and others, things were changing too fast and the tests got dropped, so it was months of tedious manual QA to get it out the door. User reception of the new UI was very warm, and we iterated quickly to add new features. However, refactoring the new JS-based UI was extremely painful due to the lack of metadata. We mostly relied on decent structure and "Ctrl-f/Ctrl-h" when we needed to propagate a service contract change into the client. Workable, but really painful to test changes, and there were inevitably bugs that would slip through when someone did something "special" in the client code. It got to a point where we were basically afraid of the codebase, since we couldn't refactor or adjust without significant testing pain, so the client codebase stagnated somewhat.</p>
<h3>On to Script#</h3>
<p>We'd been watching our user-agent strings trend mobile for awhile, and this summer it finally reached a point where we needed to own a mobile UI. Our mobile story to this point consisted of users running the main browser UI on mobile devices with varying degrees of success (and a LOT of zooming), and an iPhone app that a customer wrote by reverse-engineering our JSON (we later helped him out by providing the service contracts, since he was filling a void we weren't). The question came to how would we build a new mobile UI? Bolting it to our existing JS client wasn't attractive to anyone, as it's grown unwieldy and scary, and we didn't want to risk destabilizing it with a bunch of new mobile-only code. The prospect of another mass of JS code wasn't attractive to anyone. Another ECM architect (Matt Clay) had been watching Script# for quite awhile, and it had just recently added VS2010 integrated code editing (used to be a standalone non-intellisense editor that ran inside VS2008) and C# 2.0 feature support (generics, anonymous methods). These features gave us enough pause to take another look, and after a week or so of experimentation, we decided to try and ship the mobile UI written with Script#. I'll post something soon that describes what the end result looks like.</p>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-1782534239117433762.post-51352776309910536272011-05-19T21:49:00.001-07:002012-05-15T15:16:23.860-07:00SNI support in Android!<div><p><b>UPDATE: </b>Microsoft has <a href="http://learn.iis.net/page.aspx/1096/iis-80-server-name-indication-sni-ssl-scalability/">announced</a> that IIS 8 supports SNI!<br />
</p><p>The barriers to real use of true SSL named-based virtual hosting continue to fall. <a href="http://en.wikipedia.org/wiki/Server_Name_Indication">Android Honeycomb supports SNI!</a> Hey Microsoft- where's the IIS support? Apache's had SNI support forever, and Chrome, FF, and IE8 support it now. You guys are the ones holding up the parade! </p><h3>Background</h3>Name-based virtual hosting is what makes private-branded cloud services and shared-tenant server hosting reasonable- rather than requiring a single IP address per hostname, many hostnames are mapped to a single IP with DNS CNAMEs. The webserver looks at the HTTP Host: header sent by the client's browser when deciding which site's content to serve. This falls apart with SSL, though, since the target hostname is baked into the certificate, and the SSL handshake occurs before the HTTP Host: header is available. SNI is the solution to this problem. It allows the hostname the client expects to be sent as part of the SSL handshake process, so the SSL server can select which certificate to present. The only workaround right now (short of one IP address per hosted domain) is the use of the SAN attribute (Subject Alternate Name), which allows a certificate to present a list of hosts that are valid- this doesn't scale well, and requires the hosting entity to obtain a new certificate for subjects they don't own every time they add a new hosted domain to a server. We've always said, "no way" when customers want us to private-label host Earth Class Mail under their domain, for this exact reason. </div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-1782534239117433762.post-22566675434926714462010-05-13T23:03:00.001-07:002010-05-13T23:08:50.612-07:00JVC XR-KW810 ReviewI recently upgraded the factory stereo in my '02 Lexus IS300 to the new JVC XR-KW810, and thought I'd share my experiences thus far.<br /><br />I didn't really want to swap out the factory stereo, as it still sounded (and looked) quite good. Unfortunately, I recently picked up the official Google car dock for my Nexus One, and really wanted to use it as a music player in the car. Since the Google dock only has Bluetooth audio output, my only options for the factory Lexus stereo were to use the headphone jack on the phone to a tape adapter or a yet-to-be-hacked-in aux input. I tried it with a tape adapter for a couple of days, and decided it was time for a Bluetooth-capable stereo. My only requirements were Bluetooth, an aux-in, double-DIN with a real volume knob (and preferably lots of other "hard" buttons), and custom color configuration (to more closely match the IS300's orange illumination). This led me to the JVC XR-KW610 and it's bigger brother, the XR-KW810. The 610 was okay, but the segmented display looked kinda hokey and it didn't come with the Bluetooth adapter in-box. The 810 has a better looking matrix display and Bluetooth is included. Done.<br /><br />Installation was very smooth (at least around the head unit itself- reusing the Lexus factory amp and speakers on a non-Lexus head requires a special part). It includes a sleeve for "roll your own" setups as well as an assortment of screwholes in the unit itself. The included Bluetooth adapter just plugs into the rear USB port (there's also one on the front), and the handsfree mic hangs off the back. The unit has a headlight switch input, which is pretty handy for dimming the illumination when the headlights are on. 
After putting the car all back together and booting it up, my first impressions were pretty good. <br /><br />Sound quality through my factory amp was quite solid, though the default EQ settings were a little bassy on my setup (I didn't try the unit's built-in amp). This was easily rectified by tweaking the ProEQ settings, which allow for finer unit-wide EQ adjustment (as opposed to the front-panel EQ settings, which are per-input and fairly coarse). In addition to the ProEQ settings, there's a decent array of loudness, LPF, HPF, amp and sub gain adjustments. Also, each source's gain can be adjusted individually.<br /><br />The controls are generally intuitive and pretty easy to operate without looking. There's a four-way button on the lower left of the face, three large buttons next to the volume knob, source/power and EQ buttons, and 6 preset selector buttons. The buttons are large, but have a somewhat cheap feel. The glossy finish on the unit looks nice under low light, but shows every smudge and speck of dust on a sunny day. The illumination color adjustments are extensive - buttons and display can be colored independently, and different colors can be set for day and night profiles. The display can be difficult to read in direct sunlight, though it does have a polarizing layer that helps somewhat. The real low point on the display is the low LCD update frequency, which causes horizontal text scrolling on long titles or RDS messages to be difficult to read.<br /><br />On the initial install, I hadn't purchased the separate KT-HD300 HD Radio tuner yet. FM reception on the built-in tuner was quite good, but AM was a little weak compared the the factory unit. The one thing I missed from the factory head was RDS display (station ID and "now playing" info), which the built-in tuner doesn't have. However, the HD tuner adds this, so I ordered it (online, $89). The external HD tuner disables and replaces the built-in tuner by plugging into the back of the head unit. Luckily, it includes long antenna, power and data cables, because it's rather bulky (about 5x9x1 inches)- it took a bit of creativity to find a niche for it. It works as advertised, and does a seamless "upgrade" to the digital signal once it's locked in on the analog. Direct tuning to an digital-only station (ie, via a preset) can take a couple of seconds- the display flashes "Linking" while this is occurring. My only other beef with the HD tuner is a pretty minor one: it disables the "up/down" controls for scanning through presets that are available with the stock tuner (with the HD tuner, up/down is used to switch between HD channels on the same station). The unit supports 18 presets on the FM band, but only 6 are accessible by hard button. Without the up/down access, presets are selected by tapping the menu button, turning the knob to select, and tapping the knob. It works, but nowhere near as conveniently as with the built-in tuner.<br /><br />The Bluetooth support is fairly advanced compared to other units in the same price range- it supports A2DP, AVRCP 1.3, HSP/HFP and PBAP. In English, this means you can use it to listen to high-quality audio from your music player, remotely control it, get the "now playing" info, navigate playlists, voice dial your cell-phone and answer calls, and copy or navigate the phonebook from the unit. I've only been able to try parts of this thus far, as the Nexus One's Bluetooth implementation doesn't yet support all this functionality. 
What I have tried is pretty solid- the unit can pair with two different devices, and has a dedicated call/answer button on the face. The handsfree mic volume seems a little low, so it needs to be routed pretty close to your face (maybe the visor). I use the Nexus One car dock for my handsfree calling anyway, so it's not an issue for me. <br /><br />The USB support is pretty complete as well. If using a thumb drive, it has full folder navigation support and displays album/title info while playing. It also supports USB iPod control and charging, which works quite well, supporting standard functions (playlists, artist/album/song, podcasts, etc). It does disable the iPod display and control (shows a nifty "JVC"), so you have no choice but to control the music from the head unit (difficult for the backseat DJs, though they could use the included remote control in a pinch).<br /><br />The CD player is pretty standard- it supports CD-TEXT, so newer CDs or burned ones will display title and track info. Not much else to say here.<br /><br />Thus far, I'm very pleased with the JVC XR-KW810 head unit and KT-HD300 HD tuner. Now if Google would get around to updating the Bluetooth stack to support AVRCP 1.3, I could use all the goodies over Bluetooth.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-1782534239117433762.post-48361303261508881542010-04-29T19:23:00.001-07:002010-05-10T17:59:29.041-07:00Changing default framework profile in VS2010 projectsToday I figured out how to hack the default framework profile in VS2010 (so as NOT to use the Client Profile by default on 4.0 projects).<br /><br />A little background: I'm all for the idea of the Client Profile in .NET 4, but Visual Studio forces you to use it by default on many projects targeting .NET Framework 4.0. This alone is merely annoying, since you can easily change the profile under the Project Properties window. However, this annoyance becomes fatal to another of my favorite Visual Studio features: throwaway projects. If you want a throwaway project that targets the full 4.0 framework profile, well, too bad. Changing the framework profile requires saving the project, and the version target selector on the New Project dialog doesn't let you choose a profile. Poop.<br /><br />I've filed a Connect suggestion to see if we can get a first-class fix- by all means, go vote for it <a href="https://connect.microsoft.com/VisualStudio/feedback/details/555621/cant-create-throwaway-project-targeting-4-0-full-framework">here</a>.<br /><br />Meantime, I use throwaway projects many times a day, and about half the time I need stuff that's not in the Client Profile. Here's the fix:<br /><br />Disclaimer: this involves minor hackage to your Visual Studio 2010 install. I am not responsible if it breaks a future service pack, kicks your dog, or causes a tear in the space-time continuum. <br /><br />Let's take a visit to VS2010's ProjectTemplates directory. It's under Program Files\Microsoft Visual Studio 10.0\Common7\IDE\ProjectTemplates. Here you'll find a number of directories. I'm going to hack the C# Console Application, since that's my usual project of choice, but the technique should work on any project that defaults to the Client profile. The C# Console Application project template is under CSharp\Windows\1033(or the LCID of your installed locale)\ConsoleApplication.zip.<br /><br />Extract the consoleapplication.csproj template file, and open it in the editor of your choice. 
Find the line that says $if$ ($targetframeworkversion$ >= 4.0), and remove the "Client" from inside the TargetFrameworkProfile element below it. If you're feeling saucy, you can just remove the whole $if$ to $endif$ block. Save the hacked template, and replace the one in the ConsoleApplication.zip file (I had to use 7zip for this- Explorer's zip integration thought the file was corrupted). <br /><br />This isn't the end, though- Visual Studio caches its project templates, so to see your changes, you have to ask it to rebuild the cache. Open the VS2010 command prompt, and type <br /><br /><blockquote>devenv.exe /setup</blockquote><br /><br />It'll silently crank away for a bit, then return. Run VS2010 and create a new project using one of the templates you hacked, and check the Project Properties window. If all went well, you should see it targeting .NET Framework 4 instead of the Client Profile. Sweet!<br /><br />Hope this helps someone out...<br /><br /><strong>UPDATE</strong>: Nathan Halstead posted a comment to the Connect issue for this one, suggesting that "devenv.exe /setup" is the recommended safe way to refresh the project template cache (I've made the change inline), and that overwriting the template shouldn't negatively affect VS servicing (other than repairs/updates might overwrite the hacked version). He suggested creating a copy of the project template with a different name to avoid the servicing overwrite issue. Thanks, Nathan!Unknownnoreply@blogger.com4tag:blogger.com,1999:blog-1782534239117433762.post-9591226698511889122010-04-15T16:17:00.001-07:002010-04-15T19:06:34.937-07:00SQL Server Database Mirroring Woes<span style="font-family:arial;">I'm a huge fan of SQL Server's database mirroring concept. We've been using it on our application (60GB DB over 220 tables, 10's to 100's of millions of rows) for almost 3 years on SQL 2005. Log shipping has its place (it's pivotal to our offsite disaster recovery plan), and clustering is great if you have a huge replicated SAN, but, at least on paper, DB mirroring is the lowest-cost and most approachable option. In reality however, it has some warts.<br /><br />We started out with synchronous mirroring in a high safety + witness configuration. This is great, as we could easily take down the primary DB server for maintenance during "slow" periods with minimal effect on the running application (a few transactions might fail, which we recover from gracefully). As our database grew, though, we started seeing massive system slowdowns during peak usage periods. Investigation showed that the lag was coming from the commit overhead on the mirror, which might grow to 30s or more causing timeouts (high safety mode requires that the transaction be fully stored on the mirror server before returning control to the client). More investigation revealed that the disk write I/O on the mirror server's data volume was between 10x-500x the principal, which outstripped the disk array's ability to keep up. With a lot of angry customers and idled operators waiting around, we didn't have a lot of time to invest in root-cause analysis, so we switched over to asynchronous mirroring to keep the doors open (async mirroring doesn't hold up the client transaction waiting for the log to copy to the mirror). Luckily, Microsoft Startup Accelerator (now Bizspark) had hooked us up with SQL Enterprise licenses, so async mirroring was an option for us- it's not on SQL Standard! 
With async mirroring, a catastrophic loss of the primary server pretty much guarantees some data loss, so it's not ideal.</span><br /><span style="font-family:arial;"><br />Awhile back, we upgraded all our DB server storage to SSDs in a RAID10 config, resulting in a massive performance boost on our aging hardware. We figured this would allow us to go back to synchronous mirroring mode with no problems. While not as severe, we still experienced painful slowdowns during peak write periods, and had to switch back to async mirroring again. Even with async mirroring, the write volume to the mirror data disk was still consistently hundreds of times that of the primary. As we hadn't planned for these ridiculous mirror write volumes, we were starting to worry about our mirror server's SSDs burning out prematurely (SSDs have a limited write volume before the flash cells start to fail).<br /><br />Flash forward to last month- we've purchased spanking new 12-core DB servers with the latest and greatest SSDs in RAID10, 64G of memory, and SQL 2008 on Windows Server 2008R2. We wanted to spend the time to get high safety synchronous mirroring in place again, so we wrote a little simulator app to see if SQL 2008 on our new servers had the same nasty I/O issues. It did. On average, the data write volume was constant, and 250-500x higher on the mirror (writing constant 3-7MB/s 24/7 is a quick death sentence for an SSD rated at 5GB/day for 5 years)!<br /><br />Time to call in Microsoft. After explaining the situation, the first response was "as designed". Really? Our write volumes aren't all that high, so if this is true, I have a hard time believing that database mirroring is useful on a database of any size. In any case, had we gone live this way, our mirror machine's SSDs would've been shot within a matter of months. After an initial call of "BS!", I got a little more detail: apparently SQL Server not only ships the log data over in real-time, it also performs recovery on the DB for every transaction to minimize the failover time (which IS nice and snappy, usually <1s). Turns out, there is an undocumented trace flag that disables the per-transaction recovery process, at the cost of a higher failover delay. This sounded like exactly what I needed. So what is this magic trace flag?<br /><br /><span face="courier new">DBCC TRACEON(3499, -1)</span><br /><br /><span style="font-family:arial;">This should be run on both the primary and mirror DBs, since they switch roles during failover. It worked exactly as advertised for us. The mirror server's disk I/O was now in lock-step with the primary, and we could once again use full-safety mirroring with a witness. The failover times were definitely increased, but in our testing, they're still sub-10s, which is perfectly workable for us.<br /><br />I've only found two references to this trace flag online- one in a presentation by an MS employee that says you should test extensively (which we are), the other in an unrelated KB article about upgrading DBs with fulltext indexes to 2008 from 2005. I've found a handful of people griping about this problem in forums over the years, with no responses. Hopefully this will take care of others' issues as well as it did ours. We were within inches of switching to a sub-second log shipping scenario to replace mirroring because of this issue, and now it's looking like we won't have to. 
Just wish it was a little better documented.</span><br /><br /></span><span style="font-family:arial;"></span>Unknownnoreply@blogger.com6tag:blogger.com,1999:blog-1782534239117433762.post-48038818513069225252009-05-07T17:21:00.000-07:002010-03-16T14:34:19.182-07:00X-25M updateComing up on a month of living with the X-25M SSD... Right after my original post, Intel released a firmware update to address the slowdown issues. I, of course, applied it immediately, with no issues. <br /><br />A month later, things are still going great. The one nasty side effect is that this drive has RUINED me for working on anyone else's computer (including my home PC). Everything else just feels like molasses when compared to my work laptop. <br /><br />I also have to be really careful with SQL Server performance. I was writing a little tool that did some LINQ to SQL stuff recently, and the way it was doing a GroupBy() hid the fact that it was doing hundreds of queries on the DB. Normally, I'd notice such a thing because what should be a lightning fast query would cause the machine to grind for a few seconds. With the SSD though, even the hundreds of queries came back lightning fast. I had to run SQL profiler to see what it was really doing- glad I did, because I was able to tweak the query to run fast on "normal" machines with a single DB query and do the fancy grouping behavior in memory after the fact.<br /><br />Anyway, I'm still giving this thing two big fat thumbs up!Unknownnoreply@blogger.com2tag:blogger.com,1999:blog-1782534239117433762.post-36313256897091093722009-05-07T17:03:00.000-07:002009-05-07T17:18:59.256-07:00Fun with LINQI had a little problem at work today that smacked of "I bet there's a clever LINQ way to do this without using local variables and side effects". The problem: given a list of whatevers, create a dictionary of whatevers whose key is the original index in the list. Don't ask why- that'd take too long to explain.<br /><br />After screwing around with a false start (yield return in the key selector lambda of ToDictionary- disallowed by the compiler), I came up with something really gross-looking using Aggregate. Then I posed the problem to a couple of coworkers. Here's what we all came up with. Which one do you think runs the fastest? 
The answer may surprise you.<br /><br /><pre style="font-family: monospace; font-size:.75em"><br /><br />using System;<br />using System.Collections.Generic;<br />using System.Diagnostics;<br />using System.Linq;<br /><br />namespace ConsoleApplication1<br />{<br /> class Program<br /> {<br /> static void Main(string[] args)<br /> {<br /> List<string> input = new List<string> { "1", "2", "3", "4", "5", "6" };<br /><br /> // #1: Matt's original with local var<br /> var sw1 = Stopwatch.StartNew();<br /><br /> for (int i = 0; i < 4000000; i++)<br /> {<br /> int index = 0;<br /><br /> var dic = input.ToDictionary(k => index++);<br /> }<br /><br /> sw1.Stop();<br /><br /> // #2: Matt's "yikes" version with .Aggregate<br /> var sw2 = Stopwatch.StartNew();<br /><br /> for (int i = 0; i < 4000000; i++)<br /> {<br /> var dic = input.Aggregate(<br /> new { Index = 0, Dictionary = new Dictionary<int,>() },<br /> (a, t) => { a.Dictionary.Add(a.Index, t); return new { Index = a.Index + 1, Dictionary = a.Dictionary }; }<br /> );<br /> }<br /><br /> sw2.Stop();<br /><br /> // #3: James' "list ordinal hack" version<br /> var sw3 = Stopwatch.StartNew();<br /><br /> for (int i = 0; i < 4000000; i++)<br /> {<br /> var dic2 =<br /> input.Aggregate(new List<int>(), (lst, elt) => { lst.Add(lst.Count); return lst; })<br /> .ToDictionary(k => k, v => input[v]);<br /> }<br /><br /> sw3.Stop();<br /><br /> // #4: James' "nasty list hack + .Aggregate" version<br /> var sw4 = Stopwatch.StartNew();<br /><br /> for (int i = 0; i < 4000000; i++)<br /> {<br /> var dic =<br /> input.Aggregate(new Dictionary<int, string>(), (d, elt) => { d[d.Count] = elt; return d; });<br /><br /> }<br /><br /> sw4.Stop();<br /><br /> Console.WriteLine("Done. 1:{0}ms, 2:{1}ms, 3:{2}ms, 4:{3}ms", sw1.ElapsedMilliseconds, sw2.ElapsedMilliseconds, sw3.ElapsedMilliseconds, sw4.ElapsedMilliseconds);<br /><br /> Console.ReadKey();<br /><br /> }<br /><br /> }<br /><br />}<br /><br /></pre>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-1782534239117433762.post-80799220632722123882009-04-09T20:54:00.000-07:002009-04-10T18:54:56.123-07:00Intel X25-M SSD on the dev laptopJust today replaced my 7200rpm drive on my main dev laptop with a 160G Intel X25-M SSD. Been drooling over this since it came out, and my expectations were very high. Two words: holy crap. In just about every way, it's as fast as I hoped. Some of the high points:<br /><br />- <strong>Boot time</strong> (Server 2008): 15sec, down from 45sec.<br />- <strong>SQL Server 2005 Database restore</strong> (30G DB): 9min, down from 45min.<br />- <strong>Query large SQL Server table (3m rows) on an unindexed column</strong>: 20sec, down from 3min<br />- <strong>VS2008 load of our main .sln</strong>: 13sec, down from 50sec<br />- <strong>Full rebuild of our main .sln</strong>: 39sec, down from 1min 15sec. <br /><br />The rebuild didn't go down as much as I'd hoped, but then I tried doing it DURING the large DB restore mentioned above. Wow! Previously, my machine was completely useless during the DB restore. With the SSD, the restore chugged away, and the full rebuild took 45s, only 6s longer than an "unloaded" machine. <br /><br />Overall the machine is very snappy feeling, and app startups are noticeably faster. I did the February pre-SP2 hotfix rollup for Outlook 2007, which contains a bunch of SSD-friendly fixes, it's also very snappy now even with my bloated mailbox.<br /><br />Really pleased with the first day performance. 
Hopefully it holds out- I'm almost planning on needing to do an image-wipe-restore process every couple of months (to combat the well-documented internal fragmentation slowdowns), but we'll see how these metrics hold up over time first.Unknownnoreply@blogger.com1tag:blogger.com,1999:blog-1782534239117433762.post-65837640630284213432008-09-30T09:13:00.000-07:002008-09-30T15:38:19.212-07:00Squeezebox BoomAfter visiting a friend in New York a couple of weeks ago and playing with his Slim Devices (nee Logitech) <a href="http://www.slimdevices.com/pi_squeezebox.html">Squeezebox Classic</a>, I just had to have one. I just wished they had one with speakers- the place I want to put it doesn't have a stereo handy. A quick visit to their website shows the new <a href="http://www.slimdevices.com/pi_boom.html">Squeezebox Boom</a>- perfect! Clickety-click to Amazon, and it's on the way...<br /><br />Fast-forward to last night, when I "To My Desk"'d it from my EarthClassMail account and ripped the box open in the office to set it up for a late-night coding session. Right out of the box, I was enamored with the build quality- it's got a beautiful black enamel finish with really clean lines, and it feels quite dense for such a small device. Lots of little niceties like the magnetic remote "dent" in the top of the unit and the sleep/snooze button on top in case you want to use it as a hella-spendy clock radio. Plugged it in and had my SqueezeCentral account created and the device on the network within a couple of minutes. I went for the wired connection at the office- didn't even try the wireless, since our office wireless network security doesn't play well with a lot of devices. I didn't set the local server up (that's for home), so I was just playing with the built-in internet services. There's quite a bit of content available for free- even more if you're willing to create accounts and link them up. I was pleased to see the "local radio" option- it shows you all the internet streams of the local radio stations (all my favorites were on there), as well as allowing you to browse around the world right on the device. <br /><br />I had pretty low expectations for sound quality. The device was kinda spendy ($279), but not enough of a premium over the speaker-less Classic model's $199 price tag to set my expectations very high. Right from the start, I was blown away. This thing sounds great! It has great mid-bass response from a pair of 4" speakers- the low end is "as expected" (eg, not going to rattle the windows out with sub bass-y goodness), but they do provide a sub-out if you're worried about it (I'm not).<br /><br />I've seen the device UI described as "fiddly", and I'd have to agree- it takes a bit of getting used to, and the navigation isn't terribly friendly unless you know the whole sequence (as well as whatever nagivation the radio service you're using provides too, since they're all different). Things also work a little differently via the remote than using the wheel. It is more or less consistent, though, once you get used to it. My wife had it figured out within a couple of minutes and was having a blast with the "artist search" stations on the Slacker service. <br /><br />We're both musicians, and yet there's not a lot of music around the house most of the time. 
Hopefully this thing will make it easier for us to have music around the house wherever and whenever we want.<br /><br />Recommended!Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-1782534239117433762.post-68455303429863028522008-09-15T18:58:00.000-07:002010-03-16T14:34:29.266-07:00Trying TrueCrypt full-disk encryptionI've been looking for a way to secure the data and IP on my laptop without a significant sacrifice of performance, reliability, or convenience. I looked into a few different directions:<br /><br /><li>BitLocker - This actually became an option once I upgraded from Vista Business to Ultimate (had to rebuild my dev laptop after an unfortunate Windows Update problem that MS couldn't solve). However, my laptop doesn't have a TPM, so I'd have to use an external USB key to boot. External key = high on the security scale, low on convenience. Next!</li><br /><li><a href="http://technet.microsoft.com/en-us/library/bb457065.aspx">Windows EFS</a> - Convenient, and allows the flexibility to encrypt at the folder level, but is difficult for multi-user access (something I do more of than I'd like). There's also a major performance hit for SQL Server operations. FAIL.</li><br /><li><a href="http://msdn.microsoft.com/en-us/library/bb934049.aspx">SQL 2008 Transparent Data Encryption</a> - This one was intriguing, but it ultimately sounds like it wouldn't work well for my needs. I'd have to have the same keys used to create the backups, or encrypt AFTER restore of an unencrypted backup. Also, obviously limited to SQL Server, which doesn't cover everything I need. Either way, not going to fly.</li><br /><li><a href="http://www.truecrypt.org">TrueCrypt</a> - Free, supports both file-based volume encryption, as well as bare-metal volume encryption. I'd used TrueCrypt before for the former, as well as for non-performance-sensitive stuff where we needed to move large volumes of sensitive data around on removable drives. I couldn't find anyone talking about the real-world performance hit, though. Windows boot volume encryption support is also fairly recent, so that made me a little nervous.</li><br /><br />A few weeks ago, I decided to try the TrueCrypt route. To start, I created a file-based volume and did some testing in there. My benchmark was far from scientific, but I tested with things I do every day. I did a full SVN checkout of a code branch, opened and built it in Visual Studio, restored a SQL Server DB there, etc. Performance wasn't horrid, but it wasn't anywhere close to my bare-metal performance either- especially the SQL Server DB restore (took about 5x as long as on bare-metal). Most of the other operations I timed took anywhere from 1.5x-2x as long. There also doesn't appear to be a way to auto-mount file-based volumes, which means on every boot, I have to manually mount the volume (by entering the password), then restart the SQL Server. Gets old fast.<br /> <br />A file-based volume just wasn't going to cut it. Two weeks ago, I finally bit the bullet and decided to try hitting "Encrypt System Partition/Drive" (AFTER a full backup, thankyouverymuch). Making the leap easier to take was the fact that the process claims to be fully reversible. The experience was quite good- after choosing a password and generating keys, I burned a recovery CD (I'm glad the UI makes such an issue of this!). After the CD had burned and verified, it proceeded to background-encrypt the disk. I could theoretically use the system during this time, but decided not to try- just left it to crank overnight. 
When I came back the next morning, all was well. I rebooted and held my breath. I was presented with the TrueCrypt password prompt, followed by the normal Vista bootup process. Cool!<br /><br />I went and retried my real world benchmarks, and much to my surprise, most of them were indistinguishable from their non-encrypted counterparts! The only one that was notably slower was a SQL DB restore- and that was only when the backup had a large log file. In case you didn't know: SQL Server won't allow you to resize the logfile on restore, so it allocates and zeroes an "empty" logfile matching whatever the server's logfile size was. We pre-allocate production server logfiles fairly large so they don't have to autogrow during large transactions. The side-effect is that restores to a clean DB are painfully slow. If I re-created the backup after truncating down to a reasonably-sized logfile, the restore performance was almost exactly the same as on a bare-metal, unencrypted drive.<br /><br />Two weeks in, I'm really impressed with what the folks at TrueCrypt have done. 6.0a is as-advertised, and the performance hit is pretty minimal for just about everything I've tried. Looks like this problem is solved!Unknownnoreply@blogger.com2tag:blogger.com,1999:blog-1782534239117433762.post-63811182147084288332008-09-15T10:28:00.000-07:002008-09-15T20:02:27.604-07:00Yay Chase- security is good!At some point in the not-too-distant past, I noticed that <a href="http://www.chase.com">Chase</a> switched their main home page to use HTTPS. In general, it's a marketing site, so who cares? However, the thing that's great about this is that Chase provides a web banking login right on their marketing site's homepage. Even though the old HTTP page posted back to an HTTPS endpoint on submit, it was a major security hole- subject to phishing, DNS poisoning, man-in-the-middle, and who-knows-what-else attacks. They're also using secure cookies, but not httpOnly. Decent, anyway... <br /><br />As a side note, we've resisted marketing and user requests for this functionality since day one. Marketing has not (to date) been willing to switch their site to HTTPS-only, and we're unwilling to make the security compromise. I think a number of users were taken aback by our response to "but my bank does it, it must be secure!"<br /><br />Bravo, Chase- hopefully your competition will follow in your footsteps, leading to a slightly more secure financial web for us all.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-1782534239117433762.post-62029536387214860412008-09-13T15:27:00.000-07:002008-09-13T15:51:33.008-07:00Chicago hotel funBoy, this is starting to smack of <a href="http://blog.rolpdog.com/2007/05/vacation-from-h-e-double-hockey-sticks.html">last year's vacation from hell</a> (right down to Jenny still being on United's "see agent, you must be a terrorist" check-in watch list). Didn't make it to the game, so we were sulkily watching the Purdue/Oregon game from our hotel room on a brand new LG flat-panel that looked and sounded like a bad 70's TV with rabbit-ears. I don't understand why hotels spend hundreds of thousands of dollars to upgrade to flat-panel HDTVs in every room, then leave 1980's analog cable infrastructure to drive them. Anyway, right at the end of halftime, the fire alarm went off. Great. At least I thought to grab the car keys and my wallet, so we didn't have to sit in the lobby for an hour. 
We went and grabbed some munchies at Walgreen's (the only place we can drive to- everything around O'Hare is hotels and industrial areas, and we're still pretty much flooded in). When we got back, the fire alarms had finally stopped, so we went back to the room. We were greeted with a smell about like wet dog- the "balcony" door was leaking onto the carpet from all the rain. To boot, Purdue's butt-kickin' lead from the first half had evaporated, now tied at 20-20. Grr.<br /><br />Hope we can just cut our losses and get out of here on time tomorrow.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-1782534239117433762.post-3673068528422315482008-09-13T09:55:00.000-07:002008-09-13T10:03:20.463-07:00Stuck in ChicagoSo we flew into Chicago for a quick trip down to West Lafayette to watch the Purdue/Oregon football game (on our way to the east coast). Unfortunately, Chicago's been hit with a lot of rain and flooding. We're staying in a hotel about 100 yards from the freeway, which is open and running fine, but we can't get out of the little hole we're in because all the roads are closed. Argh! The game starts in three hours- I don't think we're going to make it. So the main purpopse of the first leg of our trip (visit Purdue, watch football) was pretty much a bust. We did have a nice visit with Jenny's cousin Adam and had great sushi at Sushi Samba last night, though. <br /> <br />Off to Philadelphia tomorrow (I hope). O'Hare's experiencing lots of delays due to the local weather issues as well as downstream effects from Ike. Ah, joy. At least we don't have a terribly fixed schedule beyond the football game and flight to Philadelphia- everything else is fairly flexible.<br /><br />More to come...Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-1782534239117433762.post-37735655568102521302008-09-02T15:32:00.001-07:002008-09-02T15:41:56.303-07:00Google Chrome Browser: First ImpressionsJust installed the beta of Google Chrome on Vista this morning. Generally, it's pretty slick! Installation was painless, only took a couple of minutes to download. Rendering is first-rate (no surprise- they're leveraging WebKit). I tried <a href="https://secure.earthclassmail.com/">our site </a>- all the basic smoke tests seemed to work fine. The JS engine didn't seem terribly snappy- though with all the JIT stuff they're doing, I'm sure they've got plenty of work left for cold startup perf.<br /><br />The "Incognito Mode" is pretty nifty (non-persistent browsing sessions)- wonder who stole what between Chrome and IE8's InPrivate mode. <br /><br />Really like the integrated JS console and debugging stuff- hopefully that stays in the finished product rather than being a separate download/install. Having customers with Firebug on their machines is invaluable for debugging weird one-off issues- it'd be even better if something similar was built in!<br /><br />My coworker was able to crash it on YouTube, and was also able to get the "Sad Tab" (it says "Aw Snap!"- sweet). The process isolation stuff is fantastic- kinda back to the way IE3 used to do frames in their own processes... Funny how we always end up repeating ourselves, for better or for worse.<br /><br />UI looks very similar to IE7 in most respects (layout, etc), except for the position of the tab bar where each tab has its own address bar. Very comfy and familiar. Looks pretty on Vista.<br /><br />Anyway, a very polished first beta from first impressions- kudos to the Chrome team! 
My interest is piqued- I'll be watching carefully and will continue to play with it.Unknownnoreply@blogger.com2tag:blogger.com,1999:blog-1782534239117433762.post-29337916057471908702008-09-02T02:19:00.001-07:002008-09-02T02:33:52.459-07:00PDC 2008Got my confirmation for <a href="http://www.microsoftpdc.com/">PDC 2008</a> last week- this will be PDC #3 for me. Looks to be good stuff on the sessions they've posted so far. I was dreading staying out by the airport- all the conference-rate hotels near the convention center were booked up by the time I got the OK to go, but they added one more at the last minute. Score!<br /><br />Most of the people I've met up with in the past aren't able to attend this year- crummy economy, employer politics, whatever. In addition to all the tech content, PDC's a valuable developer networking event- I've met great folks with whom I both learned and shared useful information. Drop me a line if you're going! <br /><br />Through various working relationships at Microsoft, I've been playing early with some of the bits that'll be unveiled at PDC this year. I can't talk about much yet, but I'm looking forward to the day when I can. We're hoping to have some code ready by PDC that shows off integration with the new mystery technology. Some of these things really are game-changers, and I wish I could ship with them right now!Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-1782534239117433762.post-1915619353112880352008-09-01T23:45:00.000-07:002008-09-02T03:46:11.131-07:00Bachelor weekend redux...Jenny took off to Phoenix to visit a friend for the long weekend, so I figured that'd be a good time to do some kitchen experimentation and work on some house projects. OK, so I spent more time lazing than laboring, but I got a few things done that've been hanging over my head for awhile.<br /><br />First the food stuff- I got a new food processor for my birthday, and it's been sitting in the cupboard calling to me. I tried out a Good Eats hummous recipe- the food processor worked great, but the recipe had a little too much garlic (didn't know that was possible!). I also tried out a shrimp scampi recipe- didn't quite make it to the peach cobbler I'd been planning on making- maybe next week.<br /><br />On the house: job one was to rid ourselves of the stinky makeshift shower curtain that's graced our shower for far too long. The folks we bought the house from had made the shower curtains out of some kind of industrial plastic and a corded curtain rod (making it fairly inconvenient to get out of the shower). The curtains were endlessly slimy and smelled like wet dog on a good day. This is actually the first house where I've put up with a shower curtain at all- everyplace else, I've installed tub enclosures. Unfortunately, our master bath is set in the floor, so a normal tub enclosure won't work- we'd have to get an 8-footer custom built, and given that we're going to redo the master bath sometime soon, that'd just be a waste. I picked up a cool hotel-style curved shower curtain rod (keeps the curtain off you) and two normal fabric shower curtains. Rather than test my 7th grade home-ec sewing skills, a friend of my mom's hacked them up and made an 8 foot tall franken-curtain. Worked great- thanks, Jane!<br /><br />Next up: fix the master bath toilet. This one ended up being quite a chore. We've been using the commode down the hall for awhile now. I leaned on the toilet while switching it over to a new valve, and it popped off the floor. 
One of the old flange bolts had corroded right through, so gravity was all that was holding it down. Not good, but no problem (and happy to find it before we had a ... messier problem)- just get some new flange bolts and all is well, right? Hmph. When I pulled it up, it had an old iron closet flange that the bolts screw down into, instead of the modern kind where they key into the flange and stick up. OK, fine- just use my handy-dandy screw extractor to pull the broken one out and replace. Err, no. The screw extractor broke off in the bolt. Crap- now it's either tear up the floor and replace the flange (a lotta work for a temporary setup) or tap in a new bolt near the old one and try to get everything slopped into place. Turns out, there's a third option: a "super ring". It's a flat metal ring that sits over the existing flange and attaches directly to the floor, and it has slots for modern keyed flange bolts. Cool- now I just have to grind out some space on the surface of the old one for the bolts to slide on, seal it up, and we're good to go. Well, almost. A couple of the screw ears on the new ring prevented the toilet from seating properly, so I had to cut them off with the grinder (mmm, burning metal smell). What's left seems to hold everything together just fine, though.<br /><br />Did I mention that our master bath is carpeted? I hate carpet in bathrooms, especially around the shower and toilet. While it's nice to do your morning bidness with cushy carpet under your feet, it's just gross to think about what lurks in there. Anyway, I was very careful to have a piece of plastic sheeting under the toilet for all the dry-fitting I was doing while working this out. The last time I removed the toilet, though, the back edge of the old wax ring scraped on the carpet, leaving a nasty brown stain (rust and wax, not poo). Still- ew! Wax is not easy to get out of carpet, especially when it's intermixed with rust (and in front of the toilet, it so LOOKS like poo). Anyway, the toilet is seated, working, and apparently leak-free. <br /><br />Next, I decided to replace the toilet's fill valve while "the patient was open"- the original one had a lot of galvanized pipe chunks in it and took forever to fill (the overflow fill tube was completely clogged with rust). I already had one out in the garage- should be nice and easy, but true to the rest of the day, it wasn't. The new package was missing the overflow tube, so I had to resurrect the old rust-clogged one with lots of bending and tapping and poking. Then, the fill stalk hole on the toilet tank was slightly misshapen, so the tank leaked a bit after I got the new one mounted. Argh! I was able to take care of the leak with some caulk between the stalk and the retention washer (again, just temporary- we'll be replacing this toilet soon anyway). <br /><br />Next up was the fancy "leak sentry" thing that came with my new fill valve. It's a clever device that I'd never seen before- basically a metal blade that sits below the float and is hooked to a second chain on the tank lever. When you flush normally, the chain retracts the blade away from the fill stalk and the float moves as normal. If the tank is leaking, the blade engages against the fill stalk, preventing the float from dropping, so you have to "double click" the tank lever to refill, alerting you that there's a problem. Don't know if it wasn't designed for the ancient mondo-gallons-flush toilet I'm using or what, but I just couldn't get it to work right. 
I futzed with it for about 20 minutes (I even R'dTFM!), but finally gave up and removed it. <br /><br />Last up was cleaning the nasty wax mess off the carpet in front of the toilet. I tried using an iron on a paper towel over the wax to melt and soak it up (Google sez this works well for candle wax), but it didn't really work too well for my mess. Next up: the <a href="http://www.amazon.com/gp/product/B000ASDCXY">SpotBot</a>. I'd heard and read good things about this little automated stain remover, so I figured I'd throw a tough job at it. Picked one up at Fred Meyer, dropped it on top of the stain and hit a button. I was amazed: it worked quite well! The carpet's probably never going to look quite the same (it was pretty luxurious carpet), but it did get almost all the wax and rust stains up, even just on the "quick" mode. I'll see how it dries, and maybe give it a go on the "deep" mode if there's any remaining rust color. My only complaint: it really burns through the cleaning solution, though the manual mode gives you full control over that part if you're willing to trade some elbow grease. Some very clever engineering on the device- I was most impressed by the "burp" valve design on the dirty cup that breaks the vacuum when you dump the cup out.<br /><br />A fairly productive weekend, anyway, and now I have a new toy for cleaning up the inevitable future messes I'll make on the carpet.Unknownnoreply@blogger.com0