Source: http://railstips.org/2009/6/8/what-is-the-simplest-thing-that-could-possibly-work

I am always amazed when I read an article from 2004 and find interesting goodies. I’m probably late to the game on a lot of these articles, as I didn’t really dive into programming as a career until 2005, but I just read The Simplest Thing that Could Possibly Work, a conversation with Ward Cunningham by Bill Venners. The article was published on January 19, 2004, but it is truly timeless.

The Shortest Path

Simplicity is the shortest path to a solution.

“Shortest” doesn’t necessarily refer to lines of code or number of characters; I see it more as the path that requires the least complexity. As he mentions in the article, if someone releases a 20-page proof of a math problem and later someone releases a 10-page proof of the same problem, the 10-page proof is not necessarily simpler.

The 10-page proof could use a form of mathematics that is not widely known in the community and takes time to comprehend. That would make the 10-page version less simple, as it requires extra learning to understand, whereas the 20-page proof uses generally understood concepts.

I think this is a balance that we always fight with as programmers. What is simple? I can usually tell whether code is simple or not when I look at it, but it is hard to define the rules for simplicity.

Work Today Makes You Better Tomorrow

The effort you expend today to understand the code will make you a more powerful programmer tomorrow.

This is one of the concepts that has made the biggest difference in my programming knowledge over the past few years. The first time that I really did this was when I wrote about class and instance variables a few years back. Ever since then, when I come across something that I don’t understand but feel I should, I spend the time to understand it. I have grown immensely because of this and would recommend that you do the same if you aren’t already.

Narrow What You Think About

We had been thinking about too much at once, trying to achieve too complicated a goal, trying to code it too well.

This is something that I have been practicing a lot lately. You know how sometimes you just feel overwhelmed and don’t want to start a feature or project? What I’ve found is that when I feel this way it is because I’m trying to think about too much at once.

Ward encourages, over and over in the article, thinking about the simplest possible thing that could work. Notice he did not say the simplest thing that would work, but rather what could work.

This is something that I’ve noticed recently while pairing with Brandon Keepers. Both of us almost apologize for some of the code we first implement, as we are afraid the other will think that is all we are capable of. What is funny is that we both realize you have to start with something, and thus we never judge. It is far easier to work incrementally towards a brilliant solution than to think it all through in your head and code it in one go.

Start with a test. Make the test pass. Rinse and repeat. Small, tested changes that solve only the immediate problem at hand always end up in a simpler solution than trying to do it all in one fell swoop. I’ve also found I’m more productive this way, as I have fewer moments of wondering what to do next. The failing test tells me.
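To make that loop concrete, here is a minimal sketch in Ruby’s Test::Unit; the class and method names are made up purely for illustration:

require 'test/unit'

# Step one: the smallest failing test that describes the immediate problem.
class PriceFormatterTest < Test::Unit::TestCase
  def test_formats_cents_as_dollars
    assert_equal "$12.50", PriceFormatter.new.format(1250)
  end
end

# Step two: just enough code to make that test pass. Then write the next test.
class PriceFormatter
  def format(cents)
    "$%.2f" % (cents / 100.0)
  end
end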

Anyway, I thought the article was interesting enough that I would post some of the highlights here and encourage you all to read it. If you know of some oldie-but-goodie articles, link them up in the comments below.

Source: http://drnicwilliams.com/2009/06/07/tdd-for-greasemonkey-scripts-and-introducing-ninja-search-js/

“this article shows how I used test-driven development tools and processes on a Greasemonkey script.” Though it also includes free ninjas.

1. Long drop downs hate humans

When I do online banking I need to select from a large list of other people’s bank accounts to which I might like to transfer money. It is the massive drop down list that I must scroll through that I wish to take issue with today. The problem of having to give other people money is probably a different discussion.

Or take time-zone selector drop down lists, for example the massively long list rendered by Rails’ time_zone_select helper. Granted, I am thankful that you let me choose my timezone in your web app. But those of us not living in the USA must hunt for our closest city in the list. Dozens of locations, ordered by time zone and not by the name of the city (see adjacent image). Unfortunately you can’t easily type a few letters of your current city to find it. Rather, you have to scroll. And if you live in the GMT+1000 time zone group (Eastern Australia), you have to scroll all the way to the bottom.
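For reference, that drop down usually comes from a one-line Rails helper call in the view; the object and attribute names below are just examples:

<%= time_zone_select :user, :time_zone %>
<%# renders every ActiveSupport::TimeZone, ordered by UTC offset -- hence the very long list %>

The helper does accept a priority_zones argument to float a few zones to the top, but that still doesn’t give you type-to-find.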

5. Choose from a small list

So I got to thinking I’d like a Greasemonkey (for Firefox) or GreaseKit (for Safari) script that automatically converted all ridiculously long HTML drop down lists into a sexy autocompletion text field. You could then type in “bris” and be presented with “(GMT+1000) Brisbane”; or, in the less amusing banking scenario, I could type “ATO” and get the bank account details for the Australian Tax Office.

I mean, how hard could it be?

This article is two things: an introduction to Ninja Search JS, which gives you a friendly ninja for every drop down field to solve the above problem; and, mostly, a walkthrough of how I used test-driven development tools and processes on a Greasemonkey script.

Introducing Ninja Search JS

Ninja Search JS banner

Click the banner to learn about and install the awesome Ninja Search JS. It includes free ninjas.

Currently it is a script for Greasemonkey (Firefox) or GreaseKit (Safari). It could be dynamically installed as necessary via a bookmarklet. I just haven’t done that yet. It could also be a Firefox extension so it didn’t have to fetch remote CSS and JS assets each time.

Ninja Search JS uses liquidmetal and jquery-flexselect projects created by Ryan McGeary.

Most important of all, I think, is that I wrote it all using TDD. That is, tests first. I don’t think this is an erroneous claim, even given the relatively ridiculous and unimportant nature of Ninja Search JS itself.

TDD for Greasemonkey scripts

I love the simple idea of Greasemonkey scripts: run a script on a subset of all websites you visit. You can’t easily do this on desktop apps, which is why web apps are so awesome – it’s just HTML inside your browser, and with Greasemonkey or browser extensions you can hook into that HTML, add your own DOM, remove DOM, add events, etc.

But what stops me writing more of them is that once you cobble together a script, you push it out into the wild and then bug reports start coming back. Or feature requests, preferably. I’d now have a code base without any test coverage, so each new change is likely to break something else. It’s also difficult to isolate bugs across different browsers, or in different environments (running Ninja Search JS in a page that used prototypejs originally failed), without a test suite.

And the best way to get yourself a test suite is to write it before you write the code itself. I believe this to be true because I know it sucks writing tests after I’ve written the code.

I mostly focused on unit testing this script rather than integration testing. With integration testing I’d need to install the script into Greasemonkey, then display some HTML, then run the tests. I’ve no idea how I’d do that.

testing running

But I do know how to unit test JavaScript, and if I can get good coverage of the core libraries, then I should be able to slap the Greasemonkey-specific code on top and do manual QA testing after that. The Greasemonkey-specific code shouldn’t ever change much (it just loads up CSS and more JS code dynamically), so I feel ok about this approach.

For this project I used Screw.Unit for the first time (via a modified version of the blue-ridge rails plugin) and it was pretty sweet. Especially being able to run single tests or groups of tests in isolation.

Project structure

summary of project structure

All the JavaScript source – including dependent libraries such as jquery and jquery-flexselect – was put into the public folder. This is because I needed to be able to load the files into the browser without using file:// protocol (which was failing for me). So, I moved the entire project into my Sites folder, and added the project as a Passenger web app. I’m ahead of myself, but there is a reason I went with public for the JavaScript + assets folder.

In vendor/plugins, the blue-ridge Rails plugin is a composite of several JavaScript libraries, including the test framework Screw.Unit, and a headless rake task to run all the tests without browser windows popping up everywhere. In my code base blue-ridge is slightly modified, since my project doesn’t look like a Rails app.

Our tests go in spec. In a Rails app using blue-ridge, they’d go in spec/javascripts, but since JavaScript is all we have in this project I’ve flattened the spec folder structure.

The website folder houses the github pages website (a git submodule to the gh-pages branch) and also the greasemonkey script and its runtime JavaScript, CSS, and ninja image assets.

A simple first test

For the Ninja Search JS I wanted to add the little ninja icon next to every <select> element on every page I ever visited. When the icon is clicked, it would convert the corresponding <select> element into a text field with fantastical autocompletion support.

For Screw.Unit, the first thing we need is a spec/ninja_search_spec.js file for the tests, and an HTML fixture file that will be loaded into the browser. The HTML file’s name must match the corresponding test name, so it must be spec/fixtures/ninja_search.html.

For our first test we want the cute ninja icon to appear next to <select> drop downs.

require("spec_helper.js");
require("../public/ninja_search.js"); // relative to spec folder

Screw.Unit(function(){
  describe("inline activation button", function(){
    it("should display NinjaSearch image button", function(){
      var button = $('a.ninja_search_activation');
      expect(button.size()).to(be_gte, 1);
    });
  });
});

The Blue Ridge TextMate bundle makes it really easy to create the describe (des) and it (it) blocks, and ex expands into a useful expect(...).to(matcher, ...) snippet.

The two ellipses are values that are compared by a matcher. Matchers are available via global names such as equals, be_gte (greater than or equal) etc. See the matchers.js file for the default available matchers.

The HTML fixture file is important in that it includes the sample HTML upon which the tests are executed.

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">

<head>
  <title>Ninja Search | JavaScript Testing Results</title>
  <link rel="stylesheet" href="screw.css" type="text/css" charset="utf-8" />
  <script src="../../vendor/plugins/blue-ridge/lib/blue-ridge.js"></script>
</head>

<body>
  <div>
    <label for="person_user_time_zone_id">Main drop down for tests</label>
    <select name="person[user][time_zone_id]" id="person_user_time_zone_id" style="display: inline;">
      <option value="Hawaii">(GMT-10:00) Hawaii</option>
      <option value="Alaska">(GMT-09:00) Alaska</option>
      <option value="Pacific Time (US & Canada)">(GMT-08:00) Pacific Time (US & Canada)</option>
      <option value="Arizona">(GMT-07:00) Arizona</option>
      <option value="Mountain Time (US & Canada)">(GMT-07:00) Mountain Time (US & Canada)</option>
      <option value="Central Time (US & Canada)">(GMT-06:00) Central Time (US & Canada)</option>
      <option value="Eastern Time (US & Canada)">(GMT-05:00) Eastern Time (US & Canada)</option>
    </select>
  </div>
</body>
</html>

In its head it loads the blue-ridge JavaScript library, which in turn loads Screw.Unit and ultimately our spec.js test file (based on the corresponding file name), so ninja_search.html will cause spec/ninja_search_spec.js to be loaded.

To run our first test just load up the spec/fixtures/ninja_search.html file into your browser.

Your first test will fail. But that’s ok, that’s the point of TDD. Red, green, refactor.

Simple passing code

So now we need some code to make the test pass.

Create a file public/ninja_search.js and something like the following should work:

(function($){
  $(function() {
    $('select').each(function(index) {
      var id = $(this).attr('id');

      // create the Ninja Search button, with a rel attribute referencing the corresponding <select id="...">
      $('<a href="#" class="ninja_search_activation" rel="' + id + '">ninja search</a>')
      .insertAfter($(this));
    });
  });
})(jQuery);

Reload your test fixtures HTML file and the test should pass.

Now rinse and repeat. The final suite of tests and fixture files for Ninja Search JS are on github.

Building a Greasemonkey script

Greasemonkey scripts are typically all-inclusive affairs. One JavaScript file, named my_script.user.js, usually does the trick.

I decided I wanted a thin Greasemonkey script that would dynamically load my ninja-search.js, and any stylesheets and dependent libraries. This would allow people to install the thin Greasemonkey script once, and I can deploy new versions of the actual code base over time without them having to re-install anything.

Ultimately, in production, the stylesheets, images, and JavaScript code would be hosted on the intertubes somewhere. During development, though, it would be long-winded and painful to push the code to a remote host just to run tests.

So I have three Greasemonkey scripts:

  • public/ninja_search.dev.user.js – loads each dependent library and asset from the local file system
  • public/ninja_search.local.user.js – loads the compressed library and assets from the local file system
  • public/ninja_search.user.js – loads the compressed library and assets from a remote server

Let’s ignore the optimisation of compressing dependent JavaScript libraries for the moment and just look at the dev.user.js and user.js files.

The two scripts differ in the target host from which they load assets and libraries. ninja_search.dev.user.js loads them from the local machine and ninja_search.user.js loads them from a remote server.

For example ninja_search.dev.user.js loads local dependencies like this:

require("http://ninja-search-js.local/jquery.js");
require("http://ninja-search-js.local/ninja_search.js");

And ninja_search.user.js loads remote dependencies like this:

require("http://drnic.github.com/ninja-search-js/dist/jquery.js");
require("http://drnic.github.com/ninja-search-js/dist/ninja_search.js");

In the final version of ninja_search.user.js we load a simple, compressed library containing jquery, our code, and other dependencies, called ninja_search_complete.js.

Using Passenger to serve local libraries

The problem with loading local JavaScript libraries using the file:// protocol, alluded to earlier, is that it doesn’t work. So if I can’t load libraries using file://, then I must use the http:// protocol. That means I must route the requests through Apache/Nginx.

Fortunately there is a very simple solution: use Phusion Passenger, which serves a “web app’s” public folder automatically. That’s why all the JavaScript, CSS and image assets have been placed in a folder named public instead of src or lib or javascript.

On my OS X machine, I moved the repository folder into my Sites folder and wired up the folder as a Passenger web app using PassengerPane. It took 2 minutes and now I had http://ninja-search.local as a valid base URL to serve my JavaScript libraries to my Greasemonkey script.
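If you don’t have PassengerPane handy, a bare-bones Rack app gets you the same result, since Passenger serves everything under public/ directly as static files. This config.ru is my own sketch, not part of the original project:

# config.ru -- hypothetical stub; Passenger just needs an app to boot,
# while the JS/CSS/image assets under public/ are served as static files.
run lambda { |env|
  [404, { "Content-Type" => "text/plain" }, ["All the action is in public/"]]
}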

Testing the Greasemonkey scripts

I can only have one of the three Greasemonkey scripts installed at a time, so I install the ninja-search.dev.user.js file to check that everything is basically working inside a browser on interesting, foreign sites (outside of the unit test HTML pages).

Once I’ve deployed the JavaScript files and assets to the remote server I can then install the ninja-search.user.js file (so can you) and double check that I haven’t screwed anything up.

Deploying via GitHub Pages

The normal, community place to upload and share Greasemonkey scripts is userscripts.org. This is great for one-file scripts, though if your script includes CSS and image assets, let alone additional JavaScript libraries, then I don’t think it’s as helpful, which is a pity.

So I decided to deploy the ninja-search-js files into the project’s own GitHub Pages site.

After creating the GitHub Pages site using the Pages Generator, I pulled down the gh-pages branch, and then linked (via a git submodule) that branch into my master branch as the website folder.

Something like:


git checkout origin/gh-pages -b gh-pages
git checkout master
git submodule add -b gh-pages git@github.com:drnic/ninja-search-js.git website

Now I can access the gh-pages branch from my master branch (where the code is).

Then to deploy our Greasemonkey script we just copy over all the public files into website/dist, and then commit and push the changes to the gh-pages branch.


mkdir -p website/dist
cp -R public/* website/dist/
cd website
git commit -a "latest script release"
git push origin gh-pages
cd ..

Then you wait very patiently for GitHub to deploy your latest website, which now contains your Greasemonkey script (dist/ninja-search.user.js) and all the libraries (our actual code), stylesheets and images.

Summary

Greasemonkey scripts might seem like small little chunks of code. But all code starts small and grows. At some stage you’ll wish you had some test coverage. And later you’ll hate yourself for ever having released the bloody thing in the first place.

I wrote all this up to summarise how I’d done TDD for the Ninja Search JS project, which is slightly different from how I added test cases to _why’s “the octocat’s pajamas” Greasemonkey script when I first started hacking on unit testing Greasemonkey scripts. The next one will probably be slightly different again.

I feel good about the current project structure, I liked Screw.Unit and blue-ridge, and I’m amused by my use of GitHub Pages to deploy the application itself.

If anyone has any ideas on how this could be improved, or done radically differently, I’d love to hear them!

Source: http://www.thoughtbot.com/projects/shoulda/

Making Tests Easy on the Fingers and Eyes

The Shoulda gem makes it easy to write elegant, understandable, and maintainable Ruby tests. Shoulda consists of test macros, assertions, and helpers added on to the Test::Unit framework. It’s fully compatible with your existing tests, and requires no retooling to use.

  • Context & Should blocks – context and should provide RSpec-like test blocks for Test::Unit suites. In addition, you get nested contexts and a much more readable syntax.
  • Macros – Generate many ActionController and ActiveRecord tests with helpful error messages. They get you started quickly, and can help you ensure that your application is conforming to best practice.
  • Assertions – Many common Rails testing idioms have been distilled into a set of useful assertions.

The Shoulda gem can also be used for non-Rails projects.
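For example, context and should work in any Test::Unit suite. A minimal non-Rails sketch (the Calculator class is invented for illustration):

require 'rubygems'
require 'test/unit'
require 'shoulda'

class Calculator
  def add(a, b)
    a + b
  end
end

class CalculatorTest < Test::Unit::TestCase
  context "a calculator" do
    setup { @calculator = Calculator.new }

    should "add two numbers" do
      assert_equal 5, @calculator.add(2, 3)
    end
  end
end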


Installation


sudo gem install thoughtbot-shoulda --source=http://gems.github.com

Usage

Context Helpers

Stop killing your fingers with all of those underscores… Name your tests with plain sentences!


class UserTest < Test::Unit::TestCase
  context "A User instance" do
    setup do
      @user = User.find(:first)
    end

    should "return its full name" do
      assert_equal 'John Doe', @user.full_name
    end

    context "with a profile" do
      setup do
        @user.profile = Profile.find(:first)
      end

      should "return true when sent #has_profile?" do
        assert @user.has_profile?
      end
    end
  end
end

Produces the following test methods:

"test: A User instance should return its full name." 
"test: A User instance with a profile should return true when sent #has_profile?."

So readable!

ActiveRecord Tests

Quick macro tests for your ActiveRecord associations and validations:


class PostTest < Test::Unit::TestCase
  should_belong_to :user
  should_have_many :tags, :through => :taggings

  should_require_unique_attributes :title
  should_require_attributes :body, :message => /wtf/
  should_require_attributes :title
  should_only_allow_numeric_values_for :user_id
end

class UserTest < Test::Unit::TestCase
  should_have_many :posts

  should_not_allow_values_for :email, "blah", "b lah" 
  should_allow_values_for :email, "a@b.com", "asdf@asdf.com" 
  should_ensure_length_in_range :email, 1..100
  should_ensure_value_in_range :age, 1..100
  should_protect_attributes :password
end

Makes TDD so much easier.

Controller Tests

Macros to test the most common controller patterns…


context "on GET to :show for first record" do
  setup do
    get :show, :id => 1
  end

  should_assign_to :user
  should_respond_with :success
  should_render_template :show
  should_not_set_the_flash

  should "do something else really cool" do
    assert_equal 1, assigns(:user).id
  end
end

Helpful Assertions

More to come here, but have fun with what’s there.


assert_same_elements([:a, :b, :c], [:c, :a, :b])
assert_contains(['a', '1'], /\d/)
assert_contains(['a', '1'], 'a')

Source: http://drnicwilliams.com/2009/04/15/cucumber-building-a-better-world-object/

How to write helper libraries for your Cucumber step definitions and how to upgrade your support libraries from Cucumber 0.2 to 0.3 (released today).

In Cucumber, each scenario step in a .feature file matches a Given/When/Then step definition. The step definitions are normal Ruby code. First-class, bona fide, honky-tonk Ruby code. And what’s the one thing we love to do to Ruby code on a rainy Sunday afternoon? Refactor it. Turn messy code into readable “return in 50 years, on the time capsule, and get back to work quickly” code.

In Cucumber we use a special, until-now unknown, magic technique for refactoring step definitions. They are called “Ruby methods”. Oooh, feel the magic. You take some code in a step definition and you refactor it into a method. And you’re done. For example:

When /I fill in the Account form/ do
  fill_in("account_name", "Mocra")
  fill_in("account_abn", "12 345 678 901")
  click_button("Submit")
end

Could be refactored into:

When /I fill in the Account form/ do
  fill_in_account_form
end

def fill_in_account_form
  fill_in("account_name", "Mocra")
  fill_in("account_abn", "12 345 678 901")
  click_button("Submit")
end

Good work. Or is it? No, we’ve done something a little naughty. We’ve polluted the global object space with our method, and it turns out that just isn’t necessary. There’s a nicer way, and a clean idiom for how and where to write helper methods.

Annoyingly, that idiom broke with the release of Cucumber 0.3. So I’ll introduce both so you can fix any bugs that you spot and know how to fix them.

The solution is to understand the existence of the World object and the clean technique for writing features/support/foobar_helpers.rb libraries of helper methods.

Introducing the World object

To ensure that each Cucumber scenario starts with a clean slate, your scenarios are run against a blank Object.new object. Or, in a Rails project, it’s a new Rails test session: ActionController::IntegrationTest.

These are called World objects (see the Cucumber wiki). You pass in a specific World object you’d like to use, else it defaults to Object.new. For a Rails project, you’re using Cucumber::Rails::World.new for your World object each time, which is a subclass of ActionController::IntegrationTest.

The benefit of a World object as the starting point for each scenario is that you can add methods to it that won’t affect the rest of the Ruby world you live in: the Cucumber runner. That is, you cannot accidentally blow up Cucumber.

Extending the World object

Step 1, put methods in a module. Step 2, add the module to your World object.

So that we’re all extending Cucumber in the same way, there is a folder where your helper methods should be stored, and an idiomatic way to do it. It has changed slightly from Cucumber 0.2 to 0.3, so I’ll show both.

For our helper method fill_in_account_form above:

  1. Create features/support/form_submission_helpers.rb (it’s automatically loaded)
  2. Wrap the helper method in a module: module FormSubmissionHelpers ... end
  3. Tell Cucumber to include the module into each World object for each Scenario

In Cucumber 0.3+ your complete helper file would look like:

module FormSubmissionHelpers
  def fill_in_account_form
    fill_in("account_name", "Mocra")
    fill_in("account_abn", "12 345 678 901")
    click_button("Submit")
  end
end
World(FormSubmissionHelpers)

For Cucumber 0.2 your complete helper file might have looked like:

module FormSubmissionHelpers
  def fill_in_account_form
    fill_in("account_name", "Mocra")
    fill_in("account_abn", "12 345 678 901")
    click_button("Submit")
  end
end
World do |world|
  world.extend FormSubmissionHelpers
end

The difference is the last part of the file. This mechanism is deprecated and results in the following error message after upgrading to Cucumber 0.3:

/Library/Ruby/Gems/1.8/gems/cucumber-0.3.0/bin/../lib/cucumber/step_mother.rb:189:in `World': You can only pass a proc to #World once, but it's happening (Cucumber::MultipleWorld)
in 2 places:

vendor/plugins/cucumber/lib/cucumber/rails/world.rb:72:in `World'
vendor/plugins/email-spec/lib/email_spec/cucumber.rb:18:in `World'

Summary

Refactor your step definitions. Put the helper methods in features/support/…_helpers.rb files, inside modules that are mixed into the World object.

Fixin’ Fixtures

February 17, 2009

Source: http://errtheblog.com/posts/61-fixin-fixtures

The main problem with fixtures, for me, has always been how unfun they are. They literally suck the fun out of anything they’re around. You throw them in your test/ directory, then suddenly testing is, like, work. But it’s not work, dammit. This is Ruby, dammit.

So, keep them fun. Write your fixtures in Ruby.

Fixtures in Ruby

Step 1: FixtureScenarios

Tom Preston-Werner has this FixtureScenarios plugin. The project home page explains:

This plugin allows you to create ‘scenarios’ which are collections of fixtures and ruby files that represent a context against which you can run tests.

Pretty cool. Instead of just test/fixtures, you can have (for instance) test/fixtures/users and test/fixtures/beta. Then, in your test class, you call scenario :users in place of the normal fixtures :users, :posts, :images call. This will pull all the YAML files from test/fixtures/users into your test database. When you need a completely separate set of fixtures for another feature, it’s as easy as scenario :beta.

Scenarios can also be nested. You can create a test/fixtures/users/banned directory filled with YAMLy files and call it with scenario :banned. This’ll pull in both the test/fixtures/users fixtures and all the test/fixtures/users/banned fixtures. It will also, by default, pull in all the test/fixtures fixtures.
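In a test class that looks something like the sketch below; the model, fixture, and method names are hypothetical, following the directories above:

class UserBanningTest < Test::Unit::TestCase
  # loads test/fixtures/users/banned, plus test/fixtures/users
  # and, by default, the top-level test/fixtures files
  scenario :banned

  def test_banned_users_cannot_log_in
    assert !users(:kevin).can_log_in?
  end
end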

The real beauty of this is when you load up your code six months down the road, recently returned from your jaunt up Everest, and need to add a feature. Fast! Just add a new scenario—forget about worrying over your old tests, they won’t be affected.

This plugin is also great when BDDin’ with test/spec or RSpec. You can have separate contexts in one spec file, all using different fixture scenarios as needed. Maybe you want to assert hardcore pagination behavior—you can load up 10,000 records! But only for that one scenario, leaving your other scenarios sane.

Grabbit:

$ script/plugin install http://fixture-scenarios.googlecode.com/svn/trunk/fixture_scenarios

Step 2: FixtureScenarioBuilder

Okay, like I said, we need to write our fixtures in Ruby.

By default, FixtureScenarios will load any Ruby files it finds in your fixture directories. Cool, but not really sufficient. We need transactional fixtures, we need the database cleared out after test runs, we need a clean database when tests start. We really need all the cool features YAML fixtures offer.

And we can get them, with FixtureScenarioBuilder. This is an add-on to Tom’s FixtureScenarios plugin. AKA you need FixtureScenarios for it to work.

Grabbit:

$ script/plugin install \
    svn://errtheblog.com/svn/plugins/fixture_scenarios_builder

Step 3: scenarios.rb

After you’ve installed the plugin, create a scenarios.rb file and place it in your test/fixtures directory. It is in this file you build your fixture scenarios. In Ruby.

Like this:

scenario :blog do
  %w( Tom Chris Kevin ).each_with_index do |user, index|
    User.create! :name => user, :banned => index.odd?
  end
end

As soon as I run a test, test/fixtures/blog will get created with a users.yml file inside it. It’ll look like this:

chris:
  name: Chris
  id: "2"
  banned: "1"
  updated_at: 2007-05-09 09:08:04
  created_at: 2007-05-09 09:08:04
kevin:
  name: Kevin
  id: "3"
  banned: "0"
  updated_at: 2007-05-09 09:08:04
  created_at: 2007-05-09 09:08:04
tom:
  name: Tom
  id: "1"
  banned: "0"
  updated_at: 2007-05-09 09:08:04
  created_at: 2007-05-09 09:08:04

Notice we’ve got intelligent fixture names. FixtureScenarioBuilder did its best to guess these by trying name, username, then title. If it can’t figure out a name, it falls back to stories_001 or whatever.

As for quick, iterative, agile development: whenever you modify scenarios.rb, your YAML fixtures will get re-built. Any errors in your scenarios.rb file will be clearly surfaced when building.

FixtureScenarioBuilder also supports nesting fixtures:

scenario :blog do
  %w( Tom Chris ).each do |user|
    User.create! :name => user
  end
end

scenario :banned_users => :blog do
  User.create! :name => 'Kevin', :banned => true
end

As you may guess, this creates test/fixtures/blog and test/fixtures/blog/banned_users. Rake-like and tasty.

Ruby fixtures are super because you can easily throw around associations, forget about those updated_at & created_at fields, easily refactor fixtures after making sweeping schema changes, and don’t need to worry about join tables.
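For example, an association that would take careful YAML bookkeeping is just ordinary Ruby in scenarios.rb (the Post and Tag models below are hypothetical):

scenario :tagged_posts => :blog do
  author = User.find_by_name("Chris")
  post   = Post.create! :title => "Fixtures in Ruby", :user => author
  # the join rows for the tags association get written out to YAML for you
  post.tags << Tag.create!(:name => "testing")
end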

I’ve been doing fixtures this way for a few months now and really dig it. FixtureScenarios and FixtureScenarioBuilder are like a one-two knockout punch of… uh… test data.

Step 4: Real Life And All

For a more complex example, check out this scenarios.rb file. It uses nesting, multiple scenarios, and all sorts of Ruby goodness (for a code review site, ya heard).

You can get a lot of this stuff by ERBing your YAML files, but really, that sucks. I like having my habtm join tables generated for me, I am comfortable defining associations and whatnot in Ruby. It’s how I think.
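For contrast, the ERB-in-YAML approach being dismissed here looks something like this hypothetical users.yml:

<% 1.upto(3) do |i| %>
user_<%= i %>:
  id: <%= i %>
  name: User <%= i %>
  created_at: <%= Time.now.to_s(:db) %>
<% end %>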

These days, many people are hacking their database or mocking to get around the fixture pain. Other fixture solutions are the fixture references plugin, this patch, and QuickFix. I’m sure there are countless others. Got a good solution? Leave a comment, plz.

Step 5: Get Out of Here

For further explanation, check out the README. Browse the code. Find a bug. Etc.

Thanks, and goodnight.