A revamped plugin/gem for benchmarking your ruby/rails test::units

Posted by Tim Connor Tue, 13 Jan 2009 19:38:00 GMT

and what I learned about hacking on Test::Unit.

For quite a while I’ve been using Geoffrey Grosenbach’s test_benchmark to see which tests were most egregiously slowing down whole test suites. I (and he as well, actually) was quite dissatisfied with its approach of spamming the console with the full output after each file completed running, which made the plugin too noisy to just leave enabled. Unfortunately, the best way to hack test/unit wasn’t immediately apparent the last time I looked into it.

This time I’ve figured it out: I’ve reworked the plugin so that it waits until all the tests are done running, then outputs the slowest 10 to the console while dumping the full list to the log (if you are in Rails, or Loggable is otherwise defined). Other info and options can be found at the new github home of test_benchmark.
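
To make the reporting step concrete, here is a minimal sketch of that idea. This is not the plugin’s actual code: it assumes the times have been collected into a hash of test name to elapsed seconds, and it uses the Rails 2.x RAILS_DEFAULT_LOGGER constant as the log.

# a sketch, not the plugin's real method: sort the collected times
# descending, print the ten slowest, and log the full list
def output_benchmarks(benchmark_times)
  sorted = benchmark_times.sort_by { |_name, secs| -secs }
  sorted.first(10).each do |name, secs|
    puts format('%.3fs %s', secs, name)
  end
  # RAILS_DEFAULT_LOGGER was the global logger constant in Rails 2.x
  if defined?(RAILS_DEFAULT_LOGGER)
    sorted.each do |name, secs|
      RAILS_DEFAULT_LOGGER.info(format('%.3fs %s', secs, name))
    end
  end
end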

The original version just redefined Test::Unit::TestSuite#run to wrap it with some benchmarking output.

# code trimmed down to functional base
class Test::Unit::TestSuite
  def run(result, &progress_block)
    @tests.each do |test|
      test.run(result, &progress_block)
      # code to store benchmark times here
    end
    # code to output benchmark times here
  end
end

The problem with this is the slightly confusing definition of TestSuite within test/unit (or at least how it ends up working in the reality of most projects’ testing setups). I (and perhaps Geoffrey too) assumed the usual project-wide definition of a ‘test suite’ as the entire collection of tests. As I put output statements throughout the codebase, I noticed that each individual test file was being treated as a separate TestSuite, even though its classes inherit from TestCase. That meant each time a file completed, the benchmarking code at the end of TestSuite#run spammed the console.
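
You can see this for yourself in irb: test/unit builds one TestSuite per TestCase subclass (FooTest here is just a stand-in), so a project with many test files produces many suites rather than one.

require 'test/unit'

class FooTest < Test::Unit::TestCase
  def test_truth
    assert true
  end
end

# each TestCase subclass gets wrapped in its own suite
suite = FooTest.suite
puts suite.class  # => Test::Unit::TestSuite
puts suite.name   # => "FooTest"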

Perhaps there is a way to better organize your tests into suites so that this doesn’t happen, but that is moot, because this is how pretty much all projects are organized in reality. As such, I needed to rework the test_benchmark codebase to handle this better.

Instead I used Test::Unit::UI::Console::TestRunner (which is instantiated when you run tests from the console, shockingly enough), which already has hooks for individual test start and finish, as well as for the start and finish of the entire test run. I just added a bit more functionality to those hooks and BAM, easy-peasy benchmarking that only outputs when the full test run is done.


require 'test/unit/ui/console/testrunner'

# reopen the console runner and wrap its start/finish hooks
class Test::Unit::UI::Console::TestRunner
  alias started_old started
  def started(result)
    started_old(result)
    @benchmark_times = {}
  end

  alias finished_old finished
  def finished(elapsed_time)
    finished_old(elapsed_time)
    # sort the name => time pairs by elapsed time, slowest first
    benchmarks = @benchmark_times.sort { |a, b| b[1] <=> a[1] }
    output_benchmarks(benchmarks)
  end

  alias test_started_old test_started
  def test_started(name)
    test_started_old(name)
    @benchmark_times[name] = Time.now
  end

  alias test_finished_old test_finished
  def test_finished(name)
    test_finished_old(name)
    # replace the stored start time with the elapsed time
    @benchmark_times[name] = Time.now - @benchmark_times[name]
  end
end
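
For reference, running a case through the console runner exercises all four hooks in order: started, then test_started/test_finished for each test, then finished. Something like this (using the FooTest stand-in from above) is enough to watch it happen:

require 'test/unit/ui/console/testrunner'

# runs FooTest through the patched runner; the benchmark summary
# prints once, after the whole run finishes
Test::Unit::UI::Console::TestRunner.run(FooTest)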