A revamped plugin/gem for benchmarking your ruby/rails test::units

Posted by Tim Connor Tue, 13 Jan 2009 19:38:00 GMT

and what I learned about hacking on Test::Unit.

For quite a while I’ve been using Geoffrey Grosenbach’s test_benchmark to see which tests were most egregiously slowing down whole test suites. I (and he as well, actually) was quite dissatisfied with its approach of spamming the console with the full output after each file completed running, as it made the plugin impractical to just leave enabled. Unfortunately, the best way to hack test/unit wasn’t immediately apparent the last time I looked into it.

This time I’ve figured it out: I’ve now reworked the plugin so that it waits until all the tests are done running, and then outputs the slowest 10, while dumping the full list to the log (if you are in Rails, or Loggable is otherwise defined). Other info and options can be found at the new github home of test_benchmark.

The original version just redefined Test::Unit::TestSuite#run to wrap it with some benchmarking output.

# code trimmed down to its functional base
class Test::Unit::TestSuite
  def run(result, &progress_block)
    @tests.each do |test|
      test.run(result, &progress_block)
      # code to store benchmark times here
    end
    # code to output benchmark times here
  end
end

The problem with this is the slightly confusing definition of TestSuite within test/unit (or at least how it ends up working in the reality of most projects’ testing setups). I (and perhaps Geoffrey too) assumed the usual project-wide definition of a ‘test suite’ as the entire collection of tests. As I was putting output statements through the codebase, I noticed that each individual test file was being treated as a separate TestSuite, despite its class inheriting from TestCase. That meant that each time a file completed, the benchmarking code at the end of TestSuite#run spammed the console.

Perhaps there is a way to better organize your tests into suites so that this doesn’t happen, but that is moot, because this is how pretty much all projects are organized in reality. As such, I needed to rework the test_benchmark codebase to handle this better.
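To make that per-file behavior concrete, here is a minimal pure-Ruby simulation (FakeSuite is illustrative, not the real test/unit class): each test file’s TestCase gets wrapped in its own suite, so any benchmark output sitting at the end of the suite’s run method fires once per file.

```ruby
# FakeSuite is a stand-in for Test::Unit::TestSuite, not the real class.
class FakeSuite
  def initialize(name, tests)
    @name = name
    @tests = tests
  end

  def run
    @tests.each(&:call)
    # benchmark output would land here -- once per suite, i.e. per file
    "#{@name}: benchmark dump"
  end
end

# Two "files", each treated as its own separate suite:
suites = [
  FakeSuite.new("user_test.rb", [-> {}, -> {}]),
  FakeSuite.new("post_test.rb", [-> {}]),
]
puts suites.map(&:run)
# One dump per file -- the console spam described above.
```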

Instead I used Test::Unit::UI::Console::TestRunner (which is instantiated when you run tests from the console, shockingly enough), which already has hooks for individual test start and finish, as well as for the entire test run. I just added a bit more functionality to those functions and BAM: easy-peasy benchmarking that only outputs when the full test run is done.

alias started_old started
def started(result)
  @benchmark_times = {}
  started_old(result)
end

alias finished_old finished
def finished(elapsed_time)
  benchmarks = @benchmark_times.sort { |a, b| b[1] <=> a[1] }
  # output the slowest 10 to the console, and the full list to the log
  finished_old(elapsed_time)
end

alias test_started_old test_started
def test_started(name)
  @benchmark_times[name] = Time.now
  test_started_old(name)
end

alias test_finished_old test_finished
def test_finished(name)
  @benchmark_times[name] = Time.now - @benchmark_times[name]
  test_finished_old(name)
end
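The bookkeeping those hooks do can be sketched in a self-contained way, with lambdas standing in for the runner callbacks (the names here are illustrative, not the plugin’s):

```ruby
benchmark_times = {}

# Stand-ins for the test_started/test_finished runner hooks:
test_started  = ->(name) { benchmark_times[name] = Time.now }
test_finished = ->(name) { benchmark_times[name] = Time.now - benchmark_times[name] }

test_started.call("test_fast")
test_finished.call("test_fast")

test_started.call("test_slow")
sleep 0.05 # pretend this test does real work
test_finished.call("test_slow")

# The `finished` hook then sorts descending by elapsed time and reports the top 10:
slowest = benchmark_times.sort { |a, b| b[1] <=> a[1] }
slowest.first(10).each { |name, secs| puts format("%.3fs %s", secs, name) }
```

Each entry in the hash starts out holding a start Time and is overwritten in place with the elapsed duration, which is why the same hash can be sorted by value at the end.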


Comments

  1. Jon Dahl, about 1 month later:

    Hi Tim – thanks for putting this together. This is a great library.

    I made this comment on my slow test article on Rail Spikes, but autotest runs this benchmarking, which isn’t ideal IMO. So you can silence the benchmarking with:

    BENCHMARK=false autotest

    That got me thinking: it might actually be better to not run the benchmarking by default, but only when BENCHMARK=true or BENCHMARK=full is specified. What do you think?
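[The opt-in switch Jon suggests could be checked along these lines -- an illustrative sketch, not the plugin’s actual code:

```ruby
# Benchmark only when BENCHMARK=true or BENCHMARK=full is set (illustrative).
benchmark_enabled = ->(env) { %w[true full].include?(env["BENCHMARK"]) }

puts benchmark_enabled.call(ENV) ? "benchmarking enabled" : "benchmarking disabled"
```

With that default, a plain `autotest` run stays quiet and `BENCHMARK=true rake test` opts in. -Ed.]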

  2. Tim Connor, about 1 month later:

    Following up on your end, Jon.

  3. Myron Marston, 2 months later:

    I wish I had known about this before writing my own library to do this same thing. I took a slightly different approach—see my version.

  4. Tim Connor, 2 months later:

I figured I should use the hooks that were built into T:U, even if they were a bit of a pain in the ass to get working, right? Once I did, it gave nice flexibility without having to worry about the potential conflicts (with things like Rails, various testing libraries, or changes to T:U itself) that come with reopening the core classes.

    You’ve put some more effort into the output, though, which is nice. Also nice, we can now borrow whatever parts we like from each other. :D

  5. Ryan Davis, 5 months later:

    This is built into minitest if you run with -v.

  6. Tim Connor, 5 months later:

    Awesome, Ryan. At work I’m stuck with a large test suite that I am not sure can be migrated over easily, but that’s good to know for future projects.