I have been a bit concerned about the performance of our component that connects ActiveRecord with JDBC. Since ActiveRecord demands that every result of a SELECT be turned into a big array of hashes mapping strings to strings, I suspected we would be quite inefficient at this, and I wasn't sure I could put all my faith in JDBC either.
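To make that cost concrete, this is roughly the shape select_all has to hand back to ActiveRecord (a hypothetical two-row result, just for illustration):

rows = conn.select_all("SELECT * FROM some_table")
# => [{"one" => "one", "two" => "two"},
#     {"one" => "one", "two" => "two"}]
# Every row becomes a fresh Hash, and every column name and value a fresh
# String, which is exactly the unmarshalling cost I was worried about.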
So, as a good developer, I decided to test this with a very small microbenchmark to see how bad the situation actually was.
Since I really wanted to measure the raw database and unmarshalling performance, I decided not to use ActiveRecord classes, but to execute the statements directly against the connection. The inner part of my benchmark looks like this:
# create a throwaway table with two string columns
conn.create_table :test_perf, :force => true do |t|
  t.column :one, :string
  t.column :two, :string
end

# seed 100 identical rows
100.times do
  conn.insert("INSERT INTO test_perf(one, two) VALUES('one','two')")
end

# the measured part: fetch and unmarshal all rows 1000 times
1000.times do
  conn.select_all("SELECT * FROM test_perf")
end

conn.drop_table :test_perf
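The timings below come from wrapping that inner part in Ruby's standard Benchmark library; a minimal sketch of the harness, assuming conn is an already-established ActiveRecord connection, would look something like this:

require 'benchmark'

Benchmark.bm do |bm|
  bm.report do
    # the create/insert/select/drop sequence shown above goes here
  end
end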
It is executed against a recent MySQL Community Edition 5 server running locally, with matching JDBC drivers. The MRI test is run with Ruby 1.8.6, and both use ActiveRecord 1.15.3. ActiveRecord-JDBC is a prerelease of 0.2.4, available from trunk. My machine is an IBM Thinkpad T43p running Debian, 32-bit, with Java 6.
The results were highly interesting. First, the baseline, plain MRI:
      user     system      total        real
  7.730000   0.020000   7.750000 (  8.531013)
Frankly, I wasn't that impressed with these numbers. I thought Ruby database performance was better. Oh well. The interesting part is the JRuby AR-JDBC results:
      user     system      total        real
  6.948000   0.000000   6.948000 (  6.948000)
WOW! We're actually faster in this test. Not what I had expected at all, but very welcome news indeed. Note that JRuby's interpreter still carries significant block overhead, so the results are skewed a little in MRI's favour by this, too.