Source: http://railstips.org/2009/6/8/what-is-the-simplest-thing-that-could-possibly-work

I am always amazed when I read an article from 2004 and find interesting goodies. I’m probably late to the game on a lot of these articles, as I didn’t really dive into programming as a career until 2005, but I just read The Simplest Thing that Could Possibly Work, a conversation with Ward Cunningham by Bill Venners. The article was published on January 19, 2004, but it is truly timeless.

The Shortest Path

Simplicity is the shortest path to a solution.

“Shortest” doesn’t necessarily refer to lines of code or number of characters; I see it more as the path that requires the least complexity. As he mentions in the article, if someone releases a 20 page proof for a math problem and someone else later releases a 10 page proof for the same problem, the 10 page proof is not necessarily simpler.

The 10 page proof could use some form of mathematics that is not widely used in the community and takes time to comprehend. This means the 10 page version could be less simple, as it requires extra learning to understand, whereas the 20 page version uses generally understood concepts.

I think this is a balance that we always fight with as programmers. What is simple? I can usually tell whether code is simple or not when I look at it, but it is hard to define the rules for simplicity.

Work Today Makes You Better Tomorrow

The effort you expend today to understand the code will make you a more powerful programmer tomorrow.

This is one of the concepts that has made the biggest difference in my programming knowledge over the past few years. The first time that I really did this was when I wrote about class and instance variables a few years back. Ever since then, when I come across something that I don’t understand but feel I should, I spend the time to understand it. I have grown immensely because of this and would recommend that you do the same if you aren’t already.

Narrow What You Think About

We had been thinking about too much at once, trying to achieve too complicated a goal, trying to code it too well.

This is something that I have been practicing a lot lately. You know how sometimes you just feel overwhelmed and don’t want to start a feature or project? What I’ve found is that when I feel this way it is because I’m trying to think about too much at once.

Ward encourages over and over in the article: think about the simplest possible thing that could work. Notice he did not say the simplest thing that would work, but rather what could work.

This is something that I’ve noticed recently while pairing with Brandon Keepers. Both of us almost apologize for some of the code we first implement, as we are afraid the other will think that is all we are capable of. What is funny is that we both realize you have to start with something, and thus we never judge. It is far easier to work incrementally towards a brilliant solution than to think it all up in your head and instantly code it.

Start with a test. Make the test pass. Rinse and repeat. Small, tested changes that solve only the immediate problem at hand always end up producing a simpler solution than trying to do it all in one fell swoop. I’ve also found I’m more productive this way, as I have fewer moments of wondering what to do next. The failing test tells me.
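To make the loop concrete, here is a minimal red/green/refactor sketch. This is my illustration, not code from the interview or this post; it is plain JavaScript with a bare assert helper, so no test framework is assumed.

// A minimal red/green/refactor sketch (illustrative only).
function assert(condition, message) {
  if (!condition) throw new Error("FAIL: " + message);
}

// Red: write the failing test for the behaviour you want next.
function testSlugify() {
  assert(slugify("Hello World") === "hello-world", "titles become URL slugs");
}

// Green: write just enough code to make the test pass, then refactor.
function slugify(title) {
  return title.toLowerCase().replace(/\s+/g, "-");
}

testSlugify();

In practice you would run testSlugify() before slugify exists (red), add just enough code to pass (green), tidy up, and then start the cycle again with the next failing test.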

Anyway, I thought the article was interesting enough that I would post some of the highlights here and encourage you all to read it. If you know of some oldie, but goodie articles, link them up in the comments below.

Source: http://blog.rubybestpractices.com/posts/jamesbritt/2009-04-13-solving-the-problem.rc.html

So you decide you want a way for a number of people to post short articles to a Web site, and maybe allow for other people to leave comments. What do you do? That’s easy: jump to the white board and start sketching out a vast array of boxes, blobs, and arrows that define a sophisticated content management system with multi-level admin roles, content versioning, threaded discussion boards, syntax highlighting, the works.

Right?

Don’t be too quick to judge. It’s easy to fall into the trap of defining the wrong requirements for a project.

Part of the reason is that building software is (or should be) fun, and often bigger projects are more fun. There is also the tendency to think about “what if”, imagining all the things that maybe one day who knows you never can tell might be needed.

People also tend to think in terms of what’s familiar, of how things have been done in the past or by others.

There are many ways to satisfy the needs described in the first paragraph. Some don’t require writing any software at all.

For the Ruby Best Practices blog, the general goals were modest. Allow a core set of people to easily add new content. Allow other people to contribute content as well, but only with the OK from someone in that core group. (BTW, there are eight of us in the RBP blog team. See here for my take on the RBP team logo)

We wanted to allow the use of Textile, and not make people use a browser for editing. Basically, turn simple flat files into a Web site with minimal fuss.

Korma takes an interesting approach to building Web sites. The whole app is ~230 lines of Ruby. Its key function is to take content from a Git repository, run it through some templating, and write out the site files.

Relying on Git for that back end is stunningly perfect. Git provides versioning, access control, and distributed contributions.

It becomes the database layer common to most blogging tools. For free. Right off the bat, no need to write any admin tools or content versioning code.

At the heart of Korma is the grit gem. As the project blurb says, “Grit gives you object oriented read/write access to Git repositories via Ruby.” Very sweet.

The korma.rb file takes two arguments. The first is required, and is the path to a git repository. The second argument is optional; it tells Korma what directory to use for the generated content, and defaults to ‘www’, relative to where you invoke the program.

The app uses grit to reach into the current contents of the repo and parse the needed files. Files have to be committed to be accessible to Korma.

There is a configuration file that describes some basic site metadata, such as the base URL, the site title, and the author list. When called, korma.rb grabs this config file, sets some Korma::Blog properties, and then writes out the static files.

An early version of Korma used Sinatra; basically, Korma was a lightweight Web app to serve the blog posts. But as simple as it was, it was overkill, since there was no real dynamic content. It made no sense to have the Web app regenerate the HTML on each request when the content changed so infrequently.

The next version replaced the Web app part with static files, making Korma a straightforward command-line program. This solved another problem: how to automate the regeneration of files. The answer: use Git’s post-commit hook to invoke the program.

For example:

   #!/bin/sh
   # File .git/hooks/post-commit   
   /usr/local/bin/ruby /home/james/data/vendor/korma/korma.rb /home/james/data/vendor/rbp-blog /home/james/data/vendor/korma/www

Early versions also used Haml for site-wide templates. Not being a fan of Haml, I added in a configurable option to use Erb. It was nice and all, but it was a feature without a requirement. No one was asking for configurable templating, so that option was dropped and Erb replaced Haml as the default.

If you are wondering why working code was removed, consider that any code you have is code you have to maintain. As bug-free and robust as you may like to think it is, no code is easier to maintain than no code. Configurable templating was simply not a problem we needed to solve, and a smaller, simpler code base is more valuable than a maybe-someday “nice to have.”

There was some discussion about the need or value of allowing comments. In the end they were deemed good, but there was no good argument for hosting them as part of the blog site itself. That meant a 3rd-party solution (in this case, Disqus) was perfectly acceptable. Again, a goal was to have comments, not to write a commenting system (as entertaining as that may be).

Using Git allows for yet another feature for free: easy one-off contributions. In many systems, allowing anyone to contribute means creating an account, granting some sort of access, and managing all the new-user annoyances. Or an existing user has to play proxy, accepting content and entering it into the system on behalf of the real author. That’s work! With Git, one can clone the repo, add new content, and issue a pull request back to the master branch. Anyone with commit rights to the master can then merge it in (or politely decline). No-code features FTW.

None of the blog design requirements are written in stone, and they may change tomorrow, but by identifying the real needs, addressing what was deemed essential, offloading what we could, and skipping the feature bling, we have a system that is easy to understand, easy to maintain, and easy to change.

Source: http://drnicwilliams.com/2009/06/07/tdd-for-greasemonkey-scripts-and-introducing-ninja-search-js/

“this article shows how I used test-driven development tools and processes on a Greasemonkey script.” Though it also includes free ninjas.

1. Long drop downs hate humans

When I do online banking I need to select from a large list of other people’s bank accounts to which I might like to transfer money. It is the massive drop down list that I must scroll through that I wish to take issue with today. The problem of having to give other people money is probably a different discussion.

And take those time-zone selector drop down lists, for example: the massively long list rendered by Rails’ time_zone_select helper. Granted, I am thankful that you let me choose my timezone in your web app. But those of us not living in the USA must hunt for our closest city in a list of dozens of locations, ordered by time zone and not by the name of the city. Unfortunately you can’t easily type a few letters of your current city to find it. Rather, you have to scroll. And if you live in the GMT+1000 time zone group (Eastern Australia), you have to scroll all the way to the bottom.

5. Choose from a small list

So I got to thinking I’d like a Greasemonkey (for Firefox) or GreaseKit (for Safari) script that automatically converted all ridiculously long HTML drop down lists into a sexy autocompletion text field. You could then type in “bris” and be presented with “(GMT+1000) Brisbane”, or, in the less amusing banking scenario, type “ATO” and get the bank account details for the Australian Tax Office.

I mean, how hard could it be?

This article is two things: an introduction to Ninja Search JS, which gives you a friendly ninja for every drop down field to solve the above problem; and, mostly, a walkthrough of how I used test-driven development tools and processes on a Greasemonkey script.

Introducing Ninja Search JS

Ninja Search JS banner

Click the banner to learn about and install the awesome Ninja Search JS. It includes free ninjas.

Currently it is a script for Greasemonkey (Firefox) or GreaseKit (Safari). It could be dynamically installed as necessary via a bookmarklet; I just haven’t done that yet. It could also be a Firefox extension so it didn’t have to fetch remote CSS and JS assets each time.

Ninja Search JS uses the liquidmetal and jquery-flexselect projects created by Ryan McGeary.

Most importantly of all, I think, is that I wrote it all using TDD. That is, tests first. I don’t think this is an erroneous statement, given the relatively ridiculous and unimportant nature of Ninja Search JS itself.

TDD for Greasemonkey scripts

I love the simple idea of Greasemonkey scripts: run a script on a subset of all the websites you visit. You can’t easily do this with desktop apps, which is why web apps are so awesome – it’s just HTML inside your browser, and with Greasemonkey or browser extensions you can hook into that HTML, add your own DOM, remove DOM, add events, etc.

But what stops me writing more of them is that once you cobble together a script, you push it out into the wild and then bug reports start coming back. Or feature requests, preferably. I’d now have a code base without any test coverage, so each new change is likely to break something else. It’s also difficult to isolate bugs across different browsers, or in different environments (running Ninja Search JS in a page that used prototypejs originally failed), without a test suite.

And the best way to get yourself a test suite is to write it before you write the code itself. I believe this to be true because I know it sucks writing tests after I’ve written the code.

I mostly focused on unit testing this script rather than integration testing. With integration testing I’d need to install the script into Greasemonkey, then display some HTML, then run the tests. I’ve no idea how I’d do that.


But I do know how to unit test JavaScript, and if I can get good coverage of the core libraries, then I should be able to slap the Greasemonkey-specific code on top and do manual QA testing after that. The Greasemonkey-specific code shouldn’t ever change much (it just loads up CSS and more JS code dynamically), so I feel OK about this approach.

For this project I used Screw.Unit for the first time (via a modified version of the blue-ridge Rails plugin) and it was pretty sweet – especially being able to run single tests or groups of tests in isolation.

Project structure

summary of project structure

All the JavaScript source – including dependent libraries such as jquery and jquery-flexselect – was put into the public folder. This is because I needed to be able to load the files into the browser without using the file:// protocol (which was failing for me). So I moved the entire project into my Sites folder and added the project as a Passenger web app. I’m getting ahead of myself, but there is a reason I went with public for the JavaScript + assets folder.

In vendor/plugins, the blue-ridge Rails plugin is a composite of several JavaScript libraries, including the test framework Screw.Unit and a headless rake task to run all the tests without browser windows popping up everywhere. In my code base blue-ridge is slightly modified, since my project doesn’t look like a Rails app.

Our tests go in spec. In a Rails app using blue-ridge, they’d go in spec/javascripts, but since JavaScript is all we have in this project I’ve flattened the spec folder structure.

The website folder houses the github pages website (a git submodule to the gh-pages branch) and also the greasemonkey script and its runtime JavaScript, CSS, and ninja image assets.

A simple first test

For the Ninja Search JS I wanted to add the little ninja icon next to every <select> element on every page I ever visited. When the icon is clicked, it would convert the corresponding <select> element into a text field with fantastical autocompletion support.

For Screw.Unit, the first thing we need is a spec/ninja_search_spec.js file for the tests, and an HTML fixture file that will be loaded into the browser. The HTML file’s name must match the corresponding test name, so it must be spec/fixtures/ninja_search.html.

For our first test we want the cute ninja icon to appear next to <select> drop downs.

require("spec_helper.js");
require("../public/ninja_search.js"); // relative to spec folder

Screw.Unit(function(){
  describe("inline activation button", function(){
    it("should display NinjaSearch image button", function(){
      var button = $('a.ninja_search_activation');
      expect(button.size()).to(be_gte, 1);
    });
  });
});

The Blue Ridge TextMate bundle makes it really easy to create the describe (des) and it (it) blocks, and ex expands into a useful expect(...).to(matcher, ...) snippet.

The two ellipses are values that are compared by a matcher. Matchers are available via global names such as equal, be_gte (greater than or equal), etc. See the matchers.js file for the default available matchers.
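To give a feel for how the matchers read in practice, here are a few illustrative expectations of my own (not from the Ninja Search suite), assuming the Screw.Unit globals that blue-ridge loads into the fixture page:

it("gives a feel for the default matchers", function(){
  expect(2 + 2).to(equal, 4);                  // equality
  expect($('select').size()).to(be_gte, 1);    // greater than or equal
  expect("ninja").to_not(equal, "pirate");     // negated expectation
});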

The HTML fixture file is important in that it includes the sample HTML upon which the tests are executed.

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">

<head>
  <title>Ninja Search | JavaScript Testing Results</title>
  <link rel="stylesheet" href="screw.css" type="text/css" charset="utf-8" />
  <script src="../../vendor/plugins/blue-ridge/lib/blue-ridge.js"></script>
</head>

<body>
  <div>
    <label for="person_user_time_zone_id">Main drop down for tests</label>
    <select name="person[user][time_zone_id]" id="person_user_time_zone_id" style="display: inline;">
      <option value="Hawaii">(GMT-10:00) Hawaii</option>
      <option value="Alaska">(GMT-09:00) Alaska</option>
      <option value="Pacific Time (US & Canada)">(GMT-08:00) Pacific Time (US & Canada)</option>
      <option value="Arizona">(GMT-07:00) Arizona</option>
      <option value="Mountain Time (US & Canada)">(GMT-07:00) Mountain Time (US & Canada)</option>
      <option value="Central Time (US & Canada)">(GMT-06:00) Central Time (US & Canada)</option>
      <option value="Eastern Time (US & Canada)">(GMT-05:00) Eastern Time (US & Canada)</option>
    </select>
  </div>
</body>
</html>

In its header it loads the blue-ridge JavaScript library, which in turn loads Screw.Unit and ultimately our spec.js test file (based on the corresponding file name), so ninja_search.html will cause spec/ninja_search_spec.js to be loaded.

To run our first test just load up the spec/fixtures/ninja_search.html file into your browser.

Your first test will fail. But that’s ok, that’s the point of TDD. Red, green, refactor.

Simple passing code

So now we need some code to make the test pass.

Create a file public/ninja_search.js and something like the following should work:

(function($){
  $(function() {
    $('select').each(function(index) {
      var id = $(this).attr('id');

      // create the Ninja Search button, with rel attribute referencing the corresponding <select id="...">
      $('<a href="#" class="ninja_search_activation" rel="' + id + '">ninja search</a>')
      .insertAfter($(this));
    });
  });
})(jQuery);

Reload your test fixtures HTML file and the test should pass.

Now rinse and repeat. The final suite of tests and fixture files for Ninja Search JS are on github.
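To give a flavour of the next turn around the loop, a plausible follow-up test might assert the conversion behaviour itself. This is illustrative only – the input class name below is a hypothetical choice, not necessarily what the real suite uses:

describe("activation", function(){
  it("converts the drop down into a text field when the ninja link is clicked", function(){
    $('a.ninja_search_activation:first').click();
    // "ninja_search_input" is a hypothetical class for the generated text field
    expect($('input.ninja_search_input').size()).to(be_gte, 1);
  });
});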

Building a Greasemonkey script

Greasemonkey scripts are typically all-inclusive affairs: one JavaScript file, named my_script.user.js, does the trick.

I decided I wanted a thin Greasemonkey script that would dynamically load my ninja-search.js, and any stylesheets and dependent libraries. This would allow people to install the thin Greasemonkey script once, and I can deploy new versions of the actual code base over time without them having to re-install anything.

Ultimately, in production, the stylesheets, images, and JavaScript code would be hosted on the intertubes somewhere. During development, though, it would be long-winded and painful to push the code to a remote host just to run tests.

So I have three Greasemonkey scripts:

  • public/ninja_search.dev.user.js – loads each dependent library and asset from the local file system
  • public/ninja_search.local.user.js – loads the compressed library and assets from the local file system
  • public/ninja_search.user.js – loads the compressed library and assets from a remote server

Let’s ignore the optimisation of compressing dependent JavaScript libraries for the moment and just look at the dev.user.js and user.js files.

The two scripts differ in the target host from which they load assets and libraries. ninja_search.dev.user.js loads them from the local machine and ninja_search.user.js loads them from a remote server.

For example ninja_search.dev.user.js loads local dependencies like this:

require("http://ninja-search-js.local/jquery.js");
require("http://ninja-search-js.local/ninja_search.js");

And ninja_search.user.js loads remote dependencies like this:

require("http://drnic.github.com/ninja-search-js/dist/jquery.js");
require("http://drnic.github.com/ninja-search-js/dist/ninja_search.js");

In the final version of ninja_search.user.js we load a single compressed library containing jquery, our code, and other dependencies, called ninja_search_complete.js.
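Note that require here is not a browser built-in: the thin user.js script defines a small helper that injects a script tag, so the browser fetches and evaluates each dependency. A minimal sketch of the idea (not the exact Ninja Search source) looks like this:

// a sketch of a script-injecting require(); not the exact Ninja Search source
function require(url) {
  var script = document.createElement("script");
  script.type = "text/javascript";
  script.src = url;
  document.getElementsByTagName("head")[0].appendChild(script);
}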

Using Passenger to serve local libraries

The problem with loading local JavaScript libraries using the file:// protocol, alluded to earlier, is that it doesn’t work. So if I can’t load libraries using file://, then I must use the http:// protocol. That means I must route the requests through Apache/Nginx.

Fortunately there is a very simple solution: use Phusion Passenger, which serves a “web app’s” public folder automatically. That’s why all the JavaScript, CSS, and image assets were placed in a folder named public instead of src or lib or javascript.

On my OS X machine, I moved the repository folder into my Sites folder and wired up the folder as a Passenger web app using PassengerPane. It took 2 minutes and now I had http://ninja-search.local as a valid base URL to serve my JavaScript libraries to my Greasemonkey script.

Testing the Greasemonkey scripts

I can only have one of the three Greasemonkey scripts installed at a time, so I install the ninja_search.dev.user.js file to check that everything is basically working inside a browser on interesting, foreign sites (outside of the unit test HTML pages).

Once I’ve deployed the JavaScript files and assets to the remote server I can then install the ninja_search.user.js file (so can you) and double-check that I haven’t screwed anything up.

Deploying via GitHub Pages

The normal, community place to upload and share Greasemonkey scripts is userscripts.org. This is great for one-file scripts, though if your script includes CSS and image assets, let alone additional JavaScript libraries, then I don’t think it’s as helpful, which is a pity.

So I decided to deploy the ninja-search-js files into the project’s own GitHub Pages site.

After creating the GitHub Pages site using the Pages Generator, I pulled down the gh-pages branch, and then linked (via submodules) that branch into my master branch as the website folder.

Something like:


git checkout origin/gh-pages -b gh-pages
git checkout master
git submodule add -b gh-pages git@github.com:drnic/ninja-search-js.git website

Now I can access the gh-pages branch from my master branch (where the code is).

Then to deploy our Greasemonkey script we just copy over all the public files into website/dist, and then commit and push the changes to the gh-pages branch.


mkdir -p website/dist
cp -R public/* website/dist/
cd website
git commit -a -m "latest script release"
git push origin gh-pages
cd ..

Then you wait very patiently for GitHub to deploy your latest website, which now contains your Greasemonkey script (dist/ninja_search.user.js) and all the libraries (our actual code), stylesheets, and images.

Summary

Greasemonkey scripts might seem like small little chunks of code. But all code starts small and grows. At some stage you’ll wish you had some test coverage. And later you’ll hate yourself for ever having released the bloody thing in the first place.

I wrote all this up to summarise how I’d done TDD for the Ninja Search JS project, which is slightly different from how I added test cases to _why’s “the octocat’s pajamas” Greasemonkey script when I first started hacking on unit testing for Greasemonkey scripts. The next one will probably be slightly different again.

I feel good about the current project structure, I liked Screw.Unit and blue-ridge, and I’m amused by my use of GitHub Pages to deploy the application itself.

If anyone has any ideas on how this could be improved, or done radically differently, I’d love to hear them!

Source: http://sketches.rubyforge.org/

Description

Sketches allows you to create and edit Ruby code from the comfort of your editor, while having it safely reloaded in IRB whenever changes to the code are saved.

Features

  • Spawn an editor of your choosing from IRB.
  • Automatically reload your code when it changes.
  • Use a custom editor command.
  • Use a custom temp directory to store sketches in.

Install

Download it here or run:

$ sudo gem install sketches

Then require sketches in your .irbrc file:

require 'sketches'

Sketches can be configured to use a custom editor command:

Sketches.config :editor => 'gvim'

Sketches.config :editor => lambda { |path|
  "xterm -fg gray -bg black -e vim #{path} &"
}

Examples

  • Open a new sketch:
    sketch
  • Open a new named sketch:
    sketch :foo
  • Open a sketch from an existing file:
    sketch_from 'path/to/bar.rb'
  • Reopen an existing sketch:
    sketch 2
    sketch :foo
  • List all sketches:
    sketches
  • Name a sketch:
    name_sketch 2, :foo
  • Save a sketch to an alternate location:
    save_sketch :foo, 'path/to/foo.rb'