You have a unicode string in your code, you deploy it on Heroku, and the app crashes with an error:

invalid multibyte char (US-ASCII)

Fear not, just do this:

heroku config:add RUBYOPT='-Ku'

And that will magically fix the problem.
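For context, -Ku tells Ruby to treat source files as UTF-8. If you would rather not touch RUBYOPT, a per-file magic comment does the same job on Ruby 1.9; here is a minimal sketch (the string is just an example):

# encoding: utf-8
# The magic comment above lets Ruby 1.9 parse multibyte string literals in this file.
greeting = "xin chào"
puts greeting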


The most important lesson ^_^

February 19, 2011

Everyone's dream

Where did Dell go wrong?

February 4, 2010

With the resignation of Kevin Rollins as Dell's CEO, one of the oldest yet most important adages of the PC industry is proven: no company can stay on top forever. Relentless competition, product commoditization, prickly customers, and the drive to be number one or two in the field all tend to bring a company down eventually.

So where did Dell go wrong?

Put simply, Dell succumbed to the belief that its business model would always work and keep it far ahead of its competitors. It clung too narrowly to its founding strategy and neglected to search for or develop future sources of growth.

While Dell did broaden its product lines, it never really used its lead in direct PC sales to invest in new business lines, talent, or innovation to counter the constant improvement of its competitors.

Dell is a textbook example of single-formula growth. "We make PCs cheap. This is what we do, and we do it a lot," said Jim Mackey, MD at the Billion Dollar Growth Network.

Although a single formula can help a company grow very fast during a certain stage, once the company reaches a certain point of development that formula simply cannot expand or create new growth.

On top of that, one of Dell's fatal mistakes was hiring too many former management consultants like Rollins himself. While they have great credentials, such consultants are usually very disconnected from people and associate only with their fellows. In their minds, the top priority is how to cut back on services to reduce costs without anyone noticing.

Unfortunately, with the ever-growing Internet, customers have become more educated, and they have easy access to community sites where they share their thoughts and experiences. With the change in how Dell treats its customers, it is no surprise that they turn to other brands like Hewlett "the computer is personal again" Packard, Lenovo, or Acer.

With its secret weapon of decades, customer trust and satisfaction, now backfiring, Dell faced a great downfall in its business.

Although Dell has made many efforts, like putting Michael Dell himself in the call center answering calls, customers' trust will still take a while to win back.

A second mistake is that Dell has no style of its own. For half a decade, despite being a very profitable major PC company, Dell didn't invest that money in researching new product lines but rather returned it as earnings to shareholders. Because of that, the company has been a boring PC maker for a very long time. While consumer PCs and electronics have become a large part of the business in recent years, Dell could only come up with TVs, and still with a very low market share of 15%.

Dell didn’t begin an orderly evolution during its prime time, now it is paying for the consequences.

Dell is also a people-intensive business that doesn't benefit from the expertise of efficient manufacturing, which makes it very hard to move up the value chain. While competitors like HP can cut product prices with the advantage of being manufacturers themselves, Dell's computer prices gradually became "un-cheap", if not more expensive than others'. Statistics showed that in 2002 Dell's average price was $1,084, while HP's was only $1,009. For a long time a cheap price was one of Dell's key advantages, but that advantage has faded.

Not being a manufacturer also works against Dell: with no means to improve its PCs' capabilities by adopting new technologies itself, Dell's PCs fall out of customer favor quickly.

These are the kinds of challenges that Dell will have to face for years. How well its founder handles them will determine whether his legacy is building a great company that lasts or just having a great idea that ran out of steam. We will wait and see :)

Dell SWOT analysis

February 3, 2010

Dell Computer Corporation was founded in 1984 by Michael Dell with a very simple premise: computers should be built and sold directly to customers. By doing this, Dell has become one of the world's largest PC makers as well as one of its best-known brands. Dell now employs more than 76,000 people worldwide, operates nine manufacturing plants, and provides 24/7 customer support. The company ships approximately 140,000 custom-made computers per day and has over 2 billion interactions with customers every year.

Dell's Direct Model enables the company to interact with customers directly, providing fast distribution and reasonably priced products.


The direct-sell model means that Dell builds computers to customer-provided specifications, allowing customers to configure their own computer interactively on Dell's website or by telephone. Over 85% of Dell's sales are made through its website using this method, giving customers their own computer at a competitive price, since it cuts out the costs that arise from retailers.

This method also allows more customer interaction, helping Dell provide top-notch customer service both pre-sale and post-sale.

Computers are made by Dell-affiliated manufacturers with relatively cheap labour. This cuts down the cost of inventory management; as a result, Dell only has to keep an average of 5 days of inventory, compared to the usual 30-40 days of competitors like HP. It also means Dell can quickly introduce the latest relevant technology without worrying about leftover inventory.


Custom-made computers are Dell's strength but also its weakness. Because every product is built to order, no customer can buy a pre-made Dell product the way they can from companies with retailer networks. Custom-made products can also take up to several days to finish.

The above also means that Dell is very dependent on its suppliers and manufacturers, which come from a wide range of countries, and it is very hard to keep quality consistent amongst them. There was a case of a massive recall due to defective products in 2004, when Dell had to recall 4.4 million laptop adapters for fear of them overheating and causing electric shock or fire.

Dependence on suppliers, and the inability to produce computers itself, means Dell cannot easily switch supply and stays locked in with certain core suppliers for periods of time.

Another of Dell's weaknesses is its relationship with the college student segment. Since most students purchase their computers through their schools or institutes, Dell's Direct Model approach is obviously not popular there. As a matter of fact, this segment earns only 5% of Dell's total revenue while remaining a very promising market segment for other companies.

Despite being a very successful company, Dell does not hold patents or proprietary technology of its own. All of the technologies it uses are also used by its industry competitors.


While there are still flaws in Dell's Direct Model, personal computers are becoming more and more necessary and commonly used, and customers are getting more and more educated about them. As their knowledge grows, they will seek out Dell's custom-made computers that fit their needs and let them get more out of their computers' features.

The ever-growing Internet also provides Dell with more opportunities, since the majority of Dell's business is done over the net.

Moreover, while Dell is already one of the world's largest computer providers, its market still has many potential areas to explore. By providing low-cost, low-priced computers directly to retailers, Dell could gain a substantial segment of the market. And by sponsoring educational programs, Dell could gain more popularity amongst students.

Together with the pursuit of diversification by introducing new products such as printers and toners, LCD televisions, and other non-computing goods, Dell can now compete in a wide range of market areas.


Like any other company in the ever-changing field of the computer business, Dell faces competition from rivals across the global PC market. Although Dell's model has proven effective, there is nothing stopping its competitors from adopting a similar, if not identical, strategy with better suppliers.

Also, as a global trend, the price differences among brands are getting smaller and smaller, making Dell's main attraction, its low build cost, less obvious. There may come a time when all price differences disappear, leaving customer choice dependent purely on brand name, and since other brands can provide pre-made computers, unlike Dell's several-days-to-build custom products, Dell is at a disadvantage here.

Being a globally recognized brand, Dell is also exposed to fluctuations in the world currency exchange markets. The system where orders are placed before being charged could leave the company with potential losses in parts of the supply chain if exchange rates change.


I am always amazed when I read an article from 2004 and find interesting goodies. I’m probably late to the game on a lot of these articles, as I didn’t really dive into programming as a career until 2005, but I just read The Simplest Thing that Could Possibly Work, a conversation with Ward Cunningham by Bill Venners. The article was published on January 19, 2004, but it is truly timeless.

The Shortest Path

Simplicity is the shortest path to a solution.

“Shortest” doesn’t necessarily refer to lines of code or number of characters, but I see it more as the path that requires the least amount of complexity. As he mentions in the article, if someone releases a 20 page proof to a math problem and then later on, someone releases a 10 page proof for the same problem, the 10 page proof is not necessarily more simple.

The 10 page proof could use some form of mathematics that is not widely used in the community and takes some time to comprehend. This means the 10 page version could be less simple, as it requires new learning to understand, whereas the 20 page version uses generally understood concepts.

I think this is a balance that we always fight with as programmers. What is simple? I can usually say simple or not simple when I look at code, but it is hard to define the rules for simplicity.

Work Today Makes You Better Tomorrow

The effort you expend today to understand the code will make you a more powerful programmer tomorrow.

This is one of the concepts that has made the biggest difference in my programming knowledge over the past few years. The first time that I really did this was when I wrote about class and instance variables a few years back. Ever since then, when I come across something that I don't understand, that I feel I should, I spend the time to understand it. I have grown immensely because of this and would recommend that you do the same if you aren't already.

Narrow What You Think About

We had been thinking about too much at once, trying to achieve too complicated a goal, trying to code it too well.

This is something that I have been practicing a lot lately. You know how sometimes you just feel overwhelmed and don’t want to start a feature or project? What I’ve found is that when I feel this way it is because I’m trying to think about too much at once.

Ward encourages over and over in the article to think about the simplest thing that could possibly work. Notice he did not say the simplest thing that would work, but rather what could work.

This is something that I've noticed recently while pairing with Brandon Keepers. Both of us almost apologize for some of the code we first implement, as we are afraid the other will think that is all we are capable of. What is funny is that we both realize that you have to start with something, and thus neither of us ever judges. It is far easier to incrementally work towards a brilliant solution than to think it all out in your head and instantly code it.

Start with a test. Make the test pass. Rinse and repeat. Small, tested changes that solve only the immediate problem at hand always end up with a more simple solution than trying to do it all in one fell swoop. I’ve also found I’m more productive this way as I have less moments of wondering what to do next. The failing test tells me.
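As a minimal sketch of that loop in Ruby (the Score class and its method are made up for illustration):

require 'test/unit'

# Red: write the smallest failing test for the immediate problem.
class ScoreTest < Test::Unit::TestCase
  def test_new_score_starts_at_zero
    assert_equal 0, Score.new.points
  end
end

# Green: write just enough code to make the test pass, then refactor and repeat.
class Score
  def points
    0
  end
end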

Anyway, I thought the article was interesting enough that I would post some of the highlights here and encourage you all to read it. If you know of some oldie, but goodie articles, link them up in the comments below.


So you decide you want a way for a number of people to post short articles to a Web site, and maybe allow for other people to leave comments. What do you do? That’s easy: jump to the white board and start sketching out a vast array of boxes, blobs, and arrows that define a sophisticated content management system with multi-level admin roles, content versioning, threaded discussion boards, syntax highlighting, the works.


Don’t be too quick to judge. It’s easy to fall into the trap of defining the wrong requirements for a project.

Part of the reason is that building software is (or should be) fun, and often bigger projects are more fun. There is also the tendency to think about “what if”, imagining all the things that maybe one day who knows you never can tell might be needed.

People also tend to think in terms of what’s familiar, of how things have been done in the past or by others.

There are many ways to satisfy the needs described in the first paragraph. Some don’t require writing any software at all.

For the Ruby Best Practices blog, the general goals were modest. Allow a core set of people to easily add new content. Allow other people to contribute content as well, but only with the OK from someone in that core group. (BTW, there are eight of us in the RBP blog team. See here for my take on the RBP team logo)

We wanted to allow the use of Textile, and not make people use a browser for editing. Basically, turn simple flat files into a Web site with minimal fuss.

Korma takes an interesting approach to building Web sites. The whole app is ~230 lines of Ruby. Its key function is to take content from a Git repository, run it through some templating, and write out the site files.

Relying on Git for that back end is stunningly perfect. Git provides versioning, access control, and distributed contributions.

It becomes the database layer common to most blogging tools. For free. Right off the bat, no need to write any admin tools or content versioning code.

At the heart of Korma is the grit gem. As the project blurb says, “Grit gives you object oriented read/write access to Git repositories via Ruby.” Very sweet.
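As a rough illustration of what grit gives you (not Korma's actual code; the post file name below is made up), reading committed content out of a repository looks something like this:

require 'rubygems'
require 'grit'

# Open the blog's repository (path borrowed from the post-commit hook example below).
repo = Grit::Repo.new('/home/james/data/vendor/rbp-blog')

# Read a committed file straight from the current tree.
blob = repo.tree / 'posts/hello_world.txt' # hypothetical file name
puts blob.data if blob

# Commit history (author, date, message) comes along for free.
puts repo.commits.first.message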

The korma.rb file takes two arguments. The first is required, and is the path to a git repository. The second argument is optional; it tells Korma what directory to use for the generated content, and defaults to ‘www’, relative to where you invoke the program.

The app uses grit to reach into the current contents of the repo and parse the needed files. Files have to be committed to be accessible to Korma.

There is a configuration file that describes some basic site metadata, such as the base URL, the site title, and the author list. When called, korma.rb grabs this config file, sets some Korma::Blog properties, and then writes out the static files.

An early version of Korma used Sinatra; basically, Korma was a lightweight Web app to serve the blog posts. But as simple as it was, it was overkill, since there was no real dynamic content. It made no sense to have the Web app regenerate the HTML on each request, since it changed so infrequently.

A next version replaced the Web app part with static files, making it a straightforward command-line program. This solved another problem: how to automate the regeneration of files. The answer: use Git’s post-commit hook to invoke the program.

For example:

   # File .git/hooks/post-commit   
   /usr/local/bin/ruby /home/james/data/vendor/korma/korma.rb /home/james/data/vendor/rbp-blog /home/james/data/vendor/korma/www

Early versions also used Haml for site-wide templates. Not being a fan of Haml, I added in a configurable option to use Erb. It was nice and all, but it was a feature without a requirement. No one was asking for configurable templating, so that option was dropped and Erb replaced Haml as the default.

If you are wondering why working code was removed, consider that any code you have is something you have to maintain. As bug-free and robust as you may like to think it is, no code is easier to maintain than no code. Configurable templating was simply not a problem we needed to solve, and a smaller, simpler code base is more valuable than a maybe "nice to have."

There was some discussion about the need or value of allowing comments. In the end they were deemed good, but there was no good argument for hosting them as part of the blog site itself. That meant a 3rd-party solution (in this case, Disqus) was perfectly acceptable. Again, a goal was to have comments, not to write a commenting system (as entertaining as that may be).

Using Git allows for yet another feature for free: easy one-off contributions. In many systems, allowing anyone to contribute means creating an account, granting some sort of access, managing all the new user annoyances. Or, an existing user has to play proxy, accepting content and entering it into the system on behalf of the real author. That’s work! With git, one can clone the repo, add new content, and issue a pull request back to the master branch. Anyone with commit rights to the master can then merge it in (or politely decline). No-code features FTW.

None of the blog design requirements are written in stone, and they may change tomorrow, but by identifying the real needs, addressing what was deemed essential, offloading what we could, and skipping the feature bling, we have a system that is easy to understand, easy to maintain, and easy to change.



1. Long drop downs hate humans

When I do online banking I need to select from a large list of other people's bank accounts to which I might like to transfer money. It is the massive drop down list that I must scroll through that I wish to raise issue with today. The problem of having to give other people money is probably a different discussion.

And take those time-zone selector drop down lists, for example, the massively long list rendered by Rails’ time_zone_select helper. Granted, I am thankful for you letting me choose my timezone in your web app. Though for those of us not living in the USA we must hunt for our closest city in the list. Dozens of locations, ordered by time zone and not the name of the city (see adjacent image). Unfortunately you can’t easily type a few letters of your current city to find it. Rather, you have to scroll. And if you live in the GMT+1000 time zone group (Eastern Australia), you have to scroll all the way to the bottom.

5. Choose from a small list

So I got to thinking I’d like a Greasemonkey (for Firefox) or GreaseKit (for Safari) script that automatically converted all ridiculously long HTML drop down lists into a sexy, autocompletion text field. You could then type in “bris” and be presented with “(GMT+1000) Brisbane”, or given the less amusing banking scenario then I could type “ATO” and get the bank account details for the Australian Tax Office.

I mean, how hard could it be?

This article is two things: an introduction to Ninja Search JS which gives a friendly ninja for every drop down field to solve the above problem. Mostly, the rest of this article shows how I used test-driven development tools and processes on a Greasemonkey script.

Introducing Ninja Search JS

Ninja Search JS banner

Click the banner to learn about and install the awesome Ninja Search JS. It includes free ninjas.

Currently it is a script for Greasemonkey (Firefox) or GreaseKit (Safari). It could be dynamically installed as necessary via a bookmarklet. I just haven't done that yet. It could also be a Firefox extension so it didn't have to fetch remote CSS and JS assets each time.

Ninja Search JS uses liquidmetal and jquery-flexselect projects created by Ryan McGeary.

Most important of all, I think, is that I wrote it all using TDD. That is, tests first. I don't think this is an erroneous statement given the relatively ridiculous and unimportant nature of Ninja Search JS itself.

TDD for Greasemonkey scripts

I love the simple idea of Greasemonkey scripts: run a script on a subset of all websites you visit. You can't easily do this on desktop apps, which is why web apps are so awesome – it's just HTML inside your browser, and with Greasemonkey or browser extensions you can hook into that HTML, add your own DOM, remove DOM, add events, etc.

But what stops me writing more of them is that once you cobble together a script, you push it out into the wild and then bug reports start coming back. Or feature requests, preferably. I'd now have a code base without any test coverage, so each new change is likely to break something else. It's also difficult to isolate bugs across different browsers, or in different environments (running Ninja Search JS in a page that used prototypejs originally failed), without a test suite.

And the best way to get yourself a test suite is to write it before you write the code itself. I believe this to be true because I know it sucks writing tests after I've written the code.

I mostly focused on unit testing this script rather than integration testing. With integration testing I'd need to install the script into Greasemonkey, then display some HTML, then run the tests. I've no idea how I'd do that.

(screenshot: tests running)

But I do know how to unit test JavaScript, and if I can get good coverage of the core libraries, then I should be able to slap the Greasemonkey specific code on top and do manual QA testing after that. The Greasemonkey specific code shouldn’t ever change much (it just loads up CSS and more JS code dynamically) so I feel ok about this approach.

For this project I used Screw.Unit for the first time (via a modified version of the blue-ridge rails plugin) and it was pretty sweet. Especially being able to run single tests or groups of tests in isolation.

Project structure

(screenshot: summary of project structure)

All the JavaScript source – including dependent libraries such as jquery and jquery-flexselect – was put into the public folder. This is because I needed to be able to load the files into the browser without using the file:// protocol (which was failing for me). So, I moved the entire project into my Sites folder, and added the project as a Passenger web app. I'm ahead of myself, but there is a reason I went with public for the JavaScript + assets folder.

In vendor/plugins, the blue-ridge Rails plugin is a composite of several JavaScript libraries, including the test framework Screw.Unit, and a headless rake task to run all the tests without browser windows popping up everywhere. In my code base, blue-ridge is slightly modified since my project doesn't look like a Rails app.

Our tests go in spec. In a Rails app using blue-ridge, they’d go in spec/javascripts, but since JavaScript is all we have in this project I’ve flattened the spec folder structure.

The website folder houses the github pages website (a git submodule to the gh-pages branch) and also the greasemonkey script and its runtime JavaScript, CSS, and ninja image assets.

A simple first test

For the Ninja Search JS I wanted to add the little ninja icon next to every <select> element on every page I ever visited. When the icon is clicked, it would convert the corresponding <select> element into a text field with fantastical autocompletion support.

For Screw.Unit, the first thing we need is a spec/ninja_search_spec.js file for the tests, and an HTML fixture file that will be loaded into the browser. The HTML file's name must match the corresponding test name, so it must be spec/fixtures/ninja_search.html.

For our first test we want the cute ninja icon to appear next to <select> drop downs.

require("../public/ninja_search.js"); // relative to spec folder

  describe("inline activation button", function(){
    it("should display NinjaSearch image button", function(){
      var button = $('a.ninja_search_activation');
      expect(button.size()).to(be_gte, 1);

The Blue Ridge textmate bundle makes it really easy to create the describe (des) and it (it) blocks, and ex expands into a useful expects(...).to(matcher, ...) snippet.

The two ellipses are values that are compared by a matcher. Matchers are available via global names such as equals, be_gte (greater than or equal) etc. See the matchers.js file for the default available matchers.

The HTML fixture file is important in that it includes the sample HTML upon which the tests are executed.

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
  <head>
    <title>Ninja Search | JavaScript Testing Results</title>
    <link rel="stylesheet" href="screw.css" type="text/css" charset="utf-8" />
    <script src="../../vendor/plugins/blue-ridge/lib/blue-ridge.js"></script>
  </head>
  <body>
    <label for="person_user_time_zone_id">Main drop down for tests</label>
    <select name="person[user][time_zone_id]" id="person_user_time_zone_id" style="display: inline;">
      <option value="Hawaii">(GMT-10:00) Hawaii</option>
      <option value="Alaska">(GMT-09:00) Alaska</option>
      <option value="Pacific Time (US & Canada)">(GMT-08:00) Pacific Time (US & Canada)</option>
      <option value="Arizona">(GMT-07:00) Arizona</option>
      <option value="Mountain Time (US & Canada)">(GMT-07:00) Mountain Time (US & Canada)</option>
      <option value="Central Time (US & Canada)">(GMT-06:00) Central Time (US & Canada)</option>
      <option value="Eastern Time (US & Canada)">(GMT-05:00) Eastern Time (US & Canada)</option>
    </select>
  </body>
</html>

In its header it loads the blue-ridge JavaScript library, which in turn loads Screw.Unit and ultimately our spec.js test file (based on corresponding file name), so ninja_search.html will cause a file spec/ninja_search_spec.js to be loaded.

To run our first test just load up the spec/fixtures/ninja_search.html file into your browser.

Your first test will fail. But that’s ok, that’s the point of TDD. Red, green, refactor.

Simple passing code

So now we need some code to make the test pass.

Create a file public/ninja_search.js and something like the following should work:

  $(function() {
    $('select').each(function(index) {
      var id = $(this).attr('id');

      // create the Ninja Search button, with rel attribute referencing the corresponding <select id="...">
      $('<a href="#" class="ninja_search_activation" rel="' + id + '">ninja search</a>').insertAfter(this);
    });
  });

Reload your test fixtures HTML file and the test should pass.

Now rinse and repeat. The final suite of tests and fixture files for Ninja Search JS are on github.

Building a Greasemonkey script

Typically Greasemonkey scripts are all-inclusive affairs. One JavaScript file, named my_script.user.js, typically does the trick.

I decided I wanted a thin Greasemonkey script that would dynamically load my ninja-search.js, and any stylesheets and dependent libraries. This would allow people to install the thin Greasemonkey script once, and I can deploy new versions of the actual code base over time without them having to re-install anything.

Ultimately in production, the stylesheets, images, and JavaScript code would be hosted on the intertubes somewhere. Though during development it would be long-winded and painful to push the code to a remote host just to run tests.

So I have three Greasemonkey scripts:

  • public/ninja_search.dev.user.js – loads each dependent library and asset from the local file system
  • public/ninja_search.local.user.js – loads the compressed library and assets from the local file system
  • public/ninja_search.user.js – loads the compressed library and assets from a remote server

Let’s ignore the optimisation of compressing dependent JavaScript libraries for the moment and just look at the dev.user.js and user.js files.

The two scripts differ in the target host from which they load assets and libraries: ninja_search.dev.user.js loads them from the local machine, and ninja_search.user.js loads them from a remote server. The loader code is otherwise the same in both; only the base URL for the dependencies changes.
In the final version of ninja_search.user.js we load a simple, compressed library containing jquery, our code, and other dependencies, called ninja_search_complete.js.

Using Passenger to serve local libraries

The problem with loading local JavaScript libraries using the file:// protocol, mentioned earlier, is that it doesn't work. So if I can't load libraries using file://, then I must use the http:// protocol. That means I must route the requests through Apache/Nginx.

Fortunately there is a very simple solution: use Phusion Passenger, which serves a "web app's" public folder automatically. That's why all the JavaScript, CSS, and image assets have been placed in a folder named public instead of src or lib or javascript.

On my OS X machine, I moved the repository folder into my Sites folder and wired up the folder as a Passenger web app using PassengerPane. It took 2 minutes and now I had http://ninja-search.local as a valid base URL to serve my JavaScript libraries to my Greasemonkey script.

Testing the Greasemonkey scripts

I can only have one of the three Greasemonkey scripts installed at a time, so during development I install one of the local versions to check that everything is basically working inside a browser on interesting, foreign sites (outside of the unit test HTML pages).

Once I’ve deployed the JavaScript files and assets to the remote server I can then install the ninja-search.user.js file (so can you) and double check that I haven’t screwed anything up.

Deploying via GitHub Pages

The normal, community place to upload and share Greasemonkey scripts is userscripts.org. This is great for one-file scripts, though if your script includes CSS and image assets, let alone additional JavaScript libraries, then I don't think it's as helpful, which is a pity.

So I decided to deploy the ninja-search-js files into the project’s own GitHub Pages site.

After creating the GitHub Pages site using Pages Generator, I then pulled down the gh-pages branch, and then linked (via submodules) that branch into my master branch as website folder.

Something like:

git checkout origin/gh-pages -b gh-pages
git checkout master
git submodule add -b gh-pages <repository-url> website

Now I can access the gh-pages branch from my master branch (where the code is).

Then to deploy our Greasemonkey script we just copy over all the public files into website/dist, and then commit and push the changes to the gh-pages branch.

mkdir -p website/dist
cp -R public/* website/dist/
cd website
git commit -a -m "latest script release"
git push origin gh-pages
cd ..

Then you wait very patiently for GitHub to deploy your latest website, which now contains your Greasemonkey script (dist/ninja-search.user.js) and all the libraries (our actual code), stylesheets and images.


Greasemonkey scripts might seem like small little chunks of code. But all code starts small and grows. At some stage you'll wish you had some test coverage. And later you'll hate yourself for ever having released the bloody thing in the first place.

I wrote all this up to summarise how I’d done TDD for the Ninja Search JS project, which is slightly different from how I added test cases to _why’s the octocat’s pajamas greasemonkey script when I first started hacking with unit testing Greasemonkey scripts. The next one will probably be slightly different again.

I feel good about the current project structure, I liked Screw.Unit and blue-ridge, and I’m amused by my use of GitHub Pages to deploy the application itself.

If anyone has any ideas on how this could be improved, or done radically differently, I’d love to hear them!



Sketches allows you to create and edit Ruby code from the comfort of your editor, while having it safely reloaded in IRB whenever changes to the code are saved.


  • Spawn an editor of your choosing from IRB.
  • Automatically reload your code when it changes.
  • Use a custom editor command.
  • Use a custom temp directory to store sketches in.


Download it here or run:

$ sudo gem install sketches

Then require sketches in your .irbrc file:

require 'sketches'

Sketches can be configured to use a custom editor command:

Sketches.config :editor => 'gvim'

Sketches.config :editor => lambda { |path|
  "xterm -fg gray -bg black -e vim #{path} &"
}


  • Open a new sketch:
    sketch
  • Open a new named sketch:
    sketch :foo
  • Open a sketch from an existing file:
    sketch_from 'path/to/bar.rb'
  • Reopen an existing sketch:
    sketch 2
    sketch :foo
  • List all sketches:
    sketches
  • Name a sketch:
    name_sketch 2, :foo
  • Save a sketch to an alternate location:
    save_sketch :foo, 'path/to/foo.rb'


Last week, in a few hours, I whipped together flightcontrolled, a site for Flight Control, a super fun iPhone game. The site allows users to upload screenshots of their high scores. I thought I would provide a few details here, as some may find it interesting.

It is a pretty straightforward and simple site, but it did need a few permissions. I wanted users to be able to update their own profile, scores and photos, but not anyone else’s. On top of that, I, as an admin, should be able to update anything on the site. I’m sure there is a better way, but this is what I did and it is working just fine.

Add admin to users

I added an admin boolean to the users table. You may or may not know this, but Active Record adds handy boolean methods for all your columns. For example, if the user model has an email column and an admin column, you can do the following.

user = User.new

user.email?  # => false
user.email = 'john@example.com'
user.email?  # => true

user.admin?  # => false
user.admin = true
user.admin?  # => true

Simple permissions module

Next up, I created a module called permissions, that looks something like this:

module Permissions
  def changeable_by?(other_user)
    return false if other_user.nil?
    user == other_user || other_user.admin?
  end
end

I put this in app/concerns/ and added that directory to the load path, but it will work just fine in lib/.
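For reference, here is a sketch of that load path addition for a Rails 2.x environment file (the exact setting differs in later Rails versions, which use config.autoload_paths):

# config/environment.rb (Rails 2.x)
Rails::Initializer.run do |config|
  # Make app/concerns auto-loadable so the Permissions module is picked up.
  config.load_paths += %W(#{RAILS_ROOT}/app/concerns)
end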

Mixin the permission module

Then in the user, score and photo models, I just include that permission module.

class Score < ActiveRecord::Base
  include Permissions
end

class Photo < ActiveRecord::Base
  include Permissions
end

class User < ActiveRecord::Base
  include Permissions
end

Add checks in controllers/views

Now, in the view I can check if a user has permission before showing the edit and delete links.

<%- if score.changeable_by?(current_user) -%>
  <li class="actions">
    <%= link_to 'Edit', edit_score_url(score) %>
    <%= link_to 'Delete', score, :method => :delete %>
  </li>
<%- end -%>

And in the controller, I can do the same.

class ScoresController < ApplicationController
  before_filter :authorize, :only => [:edit, :update, :destroy]

  private

    def authorize
      # assumes @score is loaded before this filter runs
      unless @score.changeable_by?(current_user)
        render :text => 'Unauthorized', :status => :unauthorized
      end
    end
end

Macro for model tests

I didn’t forget about testing either. I created a quick macro for shoulda like this (also uses factory girl and matchy):

class ActiveSupport::TestCase
  def self.should_have_permissions(factory)
    should "know who has permission to change it" do
      object     = Factory(factory)
      admin      = Factory(:admin)
      other_user = Factory(:user)
      object.changeable_by?(other_user).should be(false)
      object.changeable_by?(object.user).should be(true)
      object.changeable_by?(admin).should be(true)
      object.changeable_by?(nil).should be(false)
    end
  end
end
Which I can then call from my various model tests:

class ScoreTest < ActiveSupport::TestCase
  should_have_permissions :score
end

Looking at it now, I probably could just infer the score factory as I’m in the ScoreTest, but for whatever reason, I didn’t go that far.
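If I had gone that far, inferring the factory from the test class name might look something like this (a sketch, not what the site actually does):

class ActiveSupport::TestCase
  # ScoreTest => :score when no factory name is passed in.
  def self.should_have_permissions(factory = nil)
    factory ||= name.sub(/Test$/, '').underscore.to_sym
    should "know who has permission to change it" do
      object = Factory(factory)
      object.changeable_by?(nil).should be(false)
      # ... same assertions as the macro above ...
    end
  end
end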

A sprinkle of controller tests

I also did something like the following to test the controllers:

class ScoresControllerTest < ActionController::TestCase
  context "A regular user" do
    setup do
      @user = Factory(:email_confirmed_user)
      sign_in_as @user
    end

    context "on GET to :edit" do
      context "for own score" do
        setup do
          @score = Factory(:score, :user => @user)
          get :edit, :id => @score.id
        end

        should_respond_with :success
      end

      context "for another user's score" do
        setup do
          @score = Factory(:score)
          get :edit, :id => @score.id
        end

        should_respond_with :unauthorized
      end
    end
  end

  context "An admin user" do
    setup do
      @admin = Factory(:admin)
      sign_in_as @admin
    end

    context "on GET to :edit" do
      context "for own score" do
        setup do
          @score = Factory(:score, :user => @admin)
          get :edit, :id => @score.id
        end

        should_respond_with :success
      end

      context "for another user's score" do
        setup do
          @score = Factory(:score)
          get :edit, :id => @score.id
        end

        should_respond_with :success
      end
    end
  end
end

Summary of Tools

I should call flightcontrolled "the thoughtbot project," as I used several of their awesome tools: clearance for authentication, shoulda and factory girl for testing, and paperclip for file uploads. This was the first project I used factory girl on, and I really like it. Again, I didn't get the fuss until I used it, and then I was like "Oooooh! Sweet!".

One of the cool things about paperclip is you can pass straight-up convert options to ImageMagick. Flight Control is a game that is played horizontally, so I knew all screenshots would need to be rotated 270 degrees. I just added the following convert options (along with strip) to the paperclip call:

has_attached_file :image, 
  :styles => {:thumb => '100>', :full => '480>'},
  :default_style => :full,
  :convert_options => {:all => '-rotate 270 -strip'}


You don’t need some fancy plugin or a lot of code to add some basic permissions into your application. A simple module can go a long way. Also, start using Thoughtbot’s projects. I’m really impressed with the developer tools they have created thus far.

Include vs Extend in Ruby

May 20, 2009


Now that we know the difference between an instance method and a class method, let’s cover the difference between include and extend in regards to modules. Include is for adding methods to an instance of a class and extend is for adding class methods. Let’s take a look at a small example.

module Foo
  def foo
    puts 'heyyyyoooo!'
  end
end

class Bar
  include Foo
end

Bar.new.foo # heyyyyoooo!
Bar.foo     # NoMethodError: undefined method ‘foo’ for Bar:Class

class Baz
  extend Foo
end

Baz.foo     # heyyyyoooo!
Baz.new.foo # NoMethodError: undefined method ‘foo’ for #<Baz:0x1e708>

As you can see, include makes the foo method available to an instance of a class and extend makes the foo method available to the class itself.

Include Example

If you want to see more examples of using include to share methods among models, you can read my article on how I added simple permissions to an app. The permissions module in that article is then included in a few models thus sharing the methods in it. That is all I’ll say here, so if you want to see more check out that article.

Extend Example

I’ve also got a simple example of using extend that I’ve plucked from the Twitter gem. Basically, Twitter supports two methods for authentication—httpauth and oauth. In order to share the maximum amount of code when using these two different authentication methods, I use a lot of delegation. Basically, the Twitter::Base class takes an instance of a “client”. A client is an instance of either Twitter::HTTPAuth or Twitter::OAuth.

Anytime a request is made from the Twitter::Base object, either get or post is called on the client. The Twitter::HTTPAuth client defines the get and post methods, but the Twitter::OAuth client does not. Twitter::OAuth is just a thin wrapper around the OAuth gem and the OAuth gem actually provides get and post methods on the access token, which automatically handles passing the OAuth information around with each request.

The implementation looks something like this (full file on github):

module Twitter
  class OAuth
    extend Forwardable
    def_delegators :access_token, :get, :post

    # a bunch of code removed for clarity
  end
end

Rather than define get and post, I simply delegate the get and post instance methods to the access token, which already has them defined. I do that by extending the Forwardable module onto the Twitter::OAuth class and then using the def_delegators class method that it provides. This may not be the most clear example, but it was the first that came to mind so I hope it is understandable.

A Common Idiom

Even though include is for adding instance methods, a common idiom you’ll see in Ruby is to use include to append both class and instance methods. The reason for this is that include has a self.included hook you can use to modify the class that is including a module and, to my knowledge, extend does not have a hook. It’s highly debatable, but often used so I figured I would mention it. Let’s look at an example.

module Foo
  def self.included(base)
    base.extend(ClassMethods)
  end

  module ClassMethods
    def bar
      puts 'class method'
    end
  end

  def foo
    puts 'instance method'
  end
end

class Baz
  include Foo
end

Baz.bar     # class method
Baz.new.foo # instance method
Baz.foo     # NoMethodError: undefined method ‘foo’ for Baz:Class
Baz.new.bar # NoMethodError: undefined method ‘bar’ for #<Baz:0x1e3d4>

There are a ton of projects that use this idiom, including Rails, DataMapper, HTTParty, and HappyMapper. For example, when you use HTTParty, you do something like this.

class FetchyMcfetcherson
  include HTTParty
end


When you add the include to your class, HTTParty appends class methods, such as get, post, put, delete, base_uri, default_options and format. I think this idiom is what causes a lot of confusion in understanding include versus extend. Because you are using include, it seems like the HTTParty methods would be added to an instance of the FetchyMcfetcherson class, but they are actually added to the class itself.
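For example, a rough sketch of what that looks like in practice (the host and path are made up; base_uri and get are among the class methods HTTParty mixes in):

class FetchyMcfetcherson
  include HTTParty
  base_uri 'api.example.com' # class method added by HTTParty
end

# get is also called on the class, not on an instance.
response = FetchyMcfetcherson.get('/fetches.json')
puts response.code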


Hope this helps those struggling with include versus extend. Use include for instance methods and extend for class methods. Also, it is sometimes ok to use include to add both instance and class methods. Both are really handy and allow for a great amount of code reuse. They also allow you to avoid deep inheritance, and instead just modularize code and include it where needed, which is much more the Ruby way.