I wish this could be a happy story. But it's really not. If I have made any factual errors in this little rant, don't hesitate to correct me - I would love to be proved wrong about this.
Actually, I wrote this introduction before I went out to celebrate New Year's. Now I'm back to finish the story, and the picture has changed a bit. Not enough yet, but we'll see.
Let's tell it from the start. I have this project I've just started working on. It seemed like a fun and quite large thing that I can tinker with in my own time. It also seemed like a perfect match to implement in Scala. I haven't done anything real in Scala yet, and I wanted a chance to do so. I like everything I've seen about the language itself. I've said so before and I'll say it again. So I decided to use it.
As you all probably know, the first step in a new project is to set up your basic structure and get all the simple stuff working together. Right, for me that means a simple Ant script that can compile Java and Scala, package it into a jar file, and run unit tests on the code. This was simple. ... Well, except for the testing bit, that is.
It seems there are a few options for testing in Scala. The ones I found were SUnit (included in the Scala distribution), ScUnit, Rehersal and specs (which is based on ScalaCheck, another framework). So these are our contestants.
First, take SUnit -- this is a very small project, no real support for anything spectacular. The syntax kinda stinks. One class for each test? No way. Also, no integration with Ant. I haven't even tried to run this. Testing should be painless, and in this case I feel that using Java would have been an improvement.
ScUnit looked really, really promising. Quite nice syntax, lots of smarts in the framework. I liked what the documentation showed me. It had a custom Ant task and so on. Very nice. It even worked for a simple Hello, World test case. I thought that this was it. So I started writing the starting points for the first test. For some reason I needed 20 random numbers for this test. Scala has so many ways of achieving this... I think I've tried almost all of them. But they all failed with a nice class loading exception just saying InstantiationException on an anonymous function. Lovely. Through some trial and error, I found out that ScUnit fails to run basically any code where Scala generates extra classes. I have no idea why.
So I gave up and started on the next framework. Rehersal. I have no idea what the misspelling is about. Anyway, this was a no show quite quickly since the Ant test task didn't even load (it referenced scala.CaseClass, which doesn't seem to be in the distribution anymore). Well then.
Finally I found specs and ScalaCheck. Now, these frameworks look mighty good, but they need better Google numbers. Specs also has the problem of being the plural of a quite common word. Not a good recipe for success. So I tried to get it working. Specs is built on top of ScalaCheck, and I much preferred the specs way of doing things (being an RSpec fanboy and all). Now, specs doesn't have Ant integration at all, but it does have a JUnit compatibility layer. So I followed the documentation exactly and tried to run it with the Ant JUnit task. KABOOM. "No runnable methods". This is an error message from JUnit4. But as far as I know, I have been able to run JUnit3 classes as well as JUnit4 classes with the same classpath. Hell, JRuby uses JUnit3 syntax. So obviously I have JUnit4 somewhere on my classpath. For the life of me, I cannot find it though.
It doesn't really matter. At that point I had spent several hours getting simple unit testing working. I gave up and integrated JtestR. Lovely. Half of my project will now not be Scala. I imagine I would have learned more Scala by writing tests in it than by writing the implementation, but apparently that's not happening. JtestR took less than a minute to get up and working.
I am not saying anything about Scala the language here. What I am saying is that things like this need to work. The integration points need to be there, especially with Ant. Testing is the most important thing a software developer does. I mean, seriously, no matter what code you write, how do you know it works correctly unless you test it in a repeatable way? It's the only responsible way of coding.
I'm not saying I'm the world's best coder in any way. I know the Java and Ruby worlds quite well, and I've seen lots of other stuff. But the fact that I can't get any sane testing framework in Scala up and running with Ant in several hours tells me that the Scala ecosystem might not be ready for some time.
Now, we'll see what happens with specs. If I get it working I'll use it to test my code. I would love that to happen. I would love to help make it happen - except I haven't learned enough Scala to actually do it yet. One way or another, I'm not giving up on using Scala for this project. I will see where this leads me. And you can probably expect a series of these first-impression posts from me about Scala, since I have a tendency to rant or rave about my experiences.
Happy New Years, people!
Monday, December 31, 2007
Friday, December 28, 2007
JtestR 0.1 released
If people have been wondering, this is what I've been working on in my spare time for the last few weeks. But now it's finally released: the first version of JtestR.
So what is it? A library that allows you to easily test your Java code with Ruby libraries.
Homepage: http://jtestr.codehaus.org
Download: http://dist.codehaus.org/jtestr
JtestR 0.1 is the first public release of the JtestR testing tool. JtestR integrates JRuby with several Ruby frameworks to allow painless testing of Java code, using RSpec, Test/Unit, dust and Mocha.
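To give a flavor of it, a spec in a JtestR project is more or less just RSpec running on JRuby, so exercising a Java class could look something like this (a minimal, hypothetical sketch, not an excerpt from the distribution):

require 'java'

# Hypothetical example spec - any Java class can be driven from RSpec this way
describe java.util.ArrayList do
  it "should start out empty" do
    java.util.ArrayList.new.size.should == 0
  end

  it "should grow when elements are added" do
    list = java.util.ArrayList.new
    list.add "foo"
    list.size.should == 1
  end
end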
Features:
- Integrates with Ant and Maven
- Includes JRuby 1.1, Test/Unit, RSpec, dust, Mocha and ActiveSupport
- Customizes Mocha so that mocking of any Java class is possible
- Background testing server for quick startup of tests
- Automatically runs your JUnit codebase as part of the build
Team:
Ola Bini - ola.bini@gmail.com
Anda Abramovici - anda.abramovici@gmail.com
Monday, December 24, 2007
Code size and dynamic languages
I've had a fun time the last week noting the reactions to Steve Yegge's latest post (Code's Worst Enemy). Now, Yegge always manages to write stuff that generates interesting - and in some cases insane - comments. This time, the reactions are actually quite a bit more aligned. I'm seeing several trends, the largest being that generating a 500K LOC code base in the first place is a sin against mankind. The second is that you should never have one code base that's so large; it should be modularized into several hundreds of smaller projects/modules. The third reaction is that Yegge should be using Scala for the rewrite.
Now, from my perspective I don't really care that he managed to generate that large a code base. I think any programmer could fall into the same tar pit, especially over a long period of time. Secondly, you don't need to be a single programmer to get this problem. I would wager that there are millions of heinous code bases like this, all over the place. So my reaction is rather the pragmatic one: how do you actually handle the situation if you find yourself in it? Provided you understand the whole project and have the time to rewrite it, how should it be done? The first step, in my opinion, would probably be to not do it alone. The second step would be to do it in small steps, replacing small parts of the system and writing unit tests as you go.
But at the end of the day, maybe a totally new approach is needed. So that's where Yegge chooses to go with Rhino as the implementation language. Now, if I had tackled the same problem, I would never have reimplemented the whole application in Rhino - rather, it would be more interesting to find the obvious place where the system needs to be dynamic and split it there, keep the stable parts in Java and then implement the new functionality on top of that stable Java layer. Emacs comes to mind as a typical example, where the base parts are implemented in C, but most of the actual functionality is implemented in Emacs Lisp.
The choice of language is something that Stevey gets a lot of comments about. People just can't seem to understand why it has to be a dynamic language. (This is another rant, but people who comment on Stevey's blog seem to have a real hard time distinguishing between static typing and strong typing. Interesting, that.) So, one reason is obviously that Stevey prefers dynamic typing. Another is that hotswapping code is one of those intrinsic features of dynamic languages that is really useful, especially in a game. The compilation stage just gets in the way at that level, especially if we're talking about something that's going to live for a long time, and hopefully not have any downtime. I understand why Scala doesn't cut it in this case. As good as Scala is, it's good exactly because it has a fair amount of static features. These are things that are extremely nice for certain applications, but they don't fit the top level of a system that needs to be malleable. In fact, I'm getting more and more certain that Scala needs to replace Java as the semi-stable layer beneath a dynamic language, but that's yet another rant. At the end of it, something like Java needs to be there - so why not make that thing a better Java?
I didn't see too many comments about Stevey's ideas on refactoring and design patterns. Now, refactoring is a highly useful technique in dynamic languages too. And I believe Stevey is wrong in saying that refactorings almost always increase the code size. The standard refactorings tend to cause that in a language like Java, but that's mostly down to the language. Refactoring in itself is really just a systematic way of making small, safe changes to a code base. The end result of refactoring is usually a cleaner code base, a better understanding of that code base, and code that is easier to read. As such, refactorings are as applicable to dynamic languages as to static ones.
Design patterns are another matter. I believe they serve two purposes - the first and more important being communication. Patterns make it easier to understand and communicate high-level features of a code base. But the second purpose is to make up for deficiencies in the language, and that's mostly what people see when talking about design patterns. When you're working in a language like Lisp, where most design patterns are already in the language, you tend not to need them for communication as much either. Since the language itself provides ways of creating new abstractions, you can use those directly, instead of using design patterns to create "artificial abstractions".
As a typical example of a case where a design pattern is totally invisible due to language design, take a look at Factory. Now, Ruby has factories. In fact, they are all over the place. Let's take a very typical example: the Class.new method that you use to create new instances of a class. New is just a factory method. In fact, you can reimplement new yourself:
class Class
  def new(*args)
    object = self.allocate
    object.send :initialize, *args
    object
  end
end

You could drop this code into any Ruby project, and everything would continue to work like before. That's because the new-method is just a regular method. The behavior of it can be changed. You can create a custom new method that returns different objects based on something:
class Werewolf;end
class Wolf;end
class Man;end

class << Werewolf
  def new(*args)
    object = if $phase_of_the_moon == :full
      Wolf.allocate
    else
      Man.allocate
    end
    object.send :initialize, *args
    object
  end
end

$phase_of_the_moon = :half
p Werewolf.new

$phase_of_the_moon = :full
p Werewolf.new

Here, creating a new Werewolf will give you either an instance of Man or Wolf depending on the phase of the moon. So in this case we are actually creating and returning something from new that isn't even a subclass of Werewolf. So new is just a factory method. Of course, the one lesson we should all take from Factory is that if you can, you should name your things better than "new". And since there is no difference between new and other methods in Ruby, you should definitely make sure that creating objects uses the right name.
Labels:
code size,
design patterns,
refactoring,
ruby,
yegge
Friday, December 21, 2007
Ruby closures and memory usage
You might have seen the trend - I've been spending time looking at memory usage in situations with larger applications. Specifically, the things I've been looking at are mostly deployments where a large number of JRuby runtimes is needed - but don't let that scare you. This information is exactly as applicable to regular Ruby as to JRuby.
One of the things that can really cause unintended high memory usage in Ruby programs is long lived blocks that close over things you might not intend. Remember, a closure actually has to close over all local variables, the surrounding blocks and also the living self at that moment.
Say that you have an object of some kind that has a method that returns a Proc. This proc will get saved somewhere and live for a long time - maybe even becoming a method with define_method:
class Factory
  def create_something
    proc { puts "Hello World" }
  end
end

block = Factory.new.create_something

Notice that this block doesn't even care about the actual environment it's created in. But as long as the variable block is still live, or something else points to the same Proc instance, the Factory instance will also stay alive. Think about a situation where you have an ActiveRecord instance of some kind that returns a Proc. Not an uncommon situation in medium to large applications. But the side effect will be that all the instance variables (and ActiveRecord objects usually have a few) and local variables will never disappear. No matter what you do in the block. Now, as I see it, there are really three different kinds of blocks in Ruby code:
- Blocks that process something without needing access to variables outside. (Stuff like [1,2,3,4,5].select {|n| n%2 == 0} doesn't need a closure at all.)
- Blocks that process or do something based on living variables.
- Blocks that need to change variables on the outside. (A short sketch of the last two kinds follows below.)
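A minimal sketch of the second and third kinds (the values here are made up purely for illustration):

limit = 3
counter = 0

# Second kind: the block only reads a living variable (limit) from the enclosing scope
small = [1, 2, 3, 4, 5].select { |n| n < limit }

# Third kind: the block changes a variable (counter) on the outside
[1, 2, 3, 4, 5].each { |n| counter += n }

p small    # => [1, 2]
p counter  # => 15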
So, if you're seeing yourself using long-lived blocks that might leak memory, consider isolating their creation in as small a scope as possible. The best way to do that is something like this:
o = Object.new

class << o
  def create_something
    proc { puts "Hello World" }
  end
end

block = o.create_something

Obviously, this is overkill if you don't know that the block needs to be long lived and that it will capture things it shouldn't. The way it works is simple - just define a new clean Object instance, define a singleton method on that instance, and use that singleton method to create the block. The only thing that will be captured is the "o" instance. Since "o" doesn't have any instance variables that's fine, and the only local variables captured will be the ones in the scope of the create_something method - which in this case doesn't have any.
Of course, if you actually need values from the outside, you can be selective and only scope in the values you actually need - unless you have to change them, of course:
o = Object.new

class << o
  def create_something(v, v2)
    proc { puts "#{v} #{v2}" }
  end
end

v = "hello"
v2 = "world"
v3 = "foobar" # will not be captured by the block

block = o.create_something(v, v2)

In this case, only "v" and "v2" will be available to the block, through the usage of regular method arguments.
This way of defining blocks is a bit heavyweight, but absolutely necessary in some cases. It's also the best way to get a blank slate binding, if you need that. Actually, to get a real blank slate, you also need to remove all the Object methods from the "o" instance - ActiveSupport has a library for blank slates. But this is the idea behind it.
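A rough sketch of that removal step, assuming Ruby 1.8 semantics (where instance_methods returns strings) and keeping the internal __ methods plus instance_eval:

o = Object.new
class << o
  # Undefine everything inherited from Object except the internals we still need
  instance_methods.each do |m|
    undef_method(m) unless m =~ /^__|^instance_eval$/
  end
end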
It might seem stupid to care about memory at all these days, but higher memory usage is one of the prices we pay for higher language abstractions. It's wasteful to take it too far, though.
Wednesday, December 19, 2007
ThoughtWorks is looking at Sweden
I am not sure how well it comes across in my blog posts, but joining ThoughtWorks has been the best move of my life. I can't really describe what a wonderful place this is to be (for me at least). I sometimes try - in person, after a few beers - but I always end up not being able to capture the real feeling of working for a company that is more than a company.
I'm happy being with ThoughtWorks. It's as simple as that - I feel like I've found my home.
So imagine how happy I am to tell you that ThoughtWorks is exploring opportunities for an office in Sweden!
Now, I am one of the people involved in this effort, and we have been talking about it for a while (actually, we started talking about it for real not long after I joined). But now it's reality. The first trips to Sweden will be in January. ThoughtWorks will be sponsoring JFokus (which is shaping up to be a really good conference, by the way - I'm happy to have presented there its first year). We will have a few representatives at JFokus, of course. I will be there, for example. =)
Of course, exploring Sweden is not the same thing as saying that an office will actually happen. But we think there are good reasons to at least consider it. I personally think it would be a perfect fit, but I am a bit biased about it.
So what are we doing for exploration? Well, of course we have started to look into business opportunities and possible clients. We are looking at partnerships and collaboration. We are looking at potential recruits. But really, the most important thing at this stage is to talk to people, get a feeling for the lay of the land, get to know interesting folks that can give us advice and so on. And that is what our travels in January will be about.
So. Do you feel you might fit any of the categories of people above? We'd love to meet you and talk - very informally. So get in touch.
These are exciting times for us!
Your Ruby tests are memory leaks
The title says it all. The only reason you haven't noticed is that you probably aren't working on a large enough application, or don't have enough tests. But the fact is, Test::Unit leaks memory. Of course, that's to be expected if it's going to be able to report results. But that leak should be more or less linear.
That is not the case. So, to make this concrete, let's take a look at a test that exhibits the problem:
class LargeTest < Test::Unit::TestCase
  def setup
    @large1 = ["foobar"] * 1000
    @large2 = ["fruxy"] * 1000
  end

  1000_000.times do |n|
    define_method :"test_abc#{n}" do
      assert true
    end
  end
end

This is obviously fabricated. The important details are these: the setup method will create two semi-large objects and assign them to instance variables. This is a common pattern in many test suites - you want the same objects created for each test, so you assign them to instance variables. In most cases there is some teardown associated, but I rarely see a teardown that assigns nil to the instance variables. Now, this will run one million tests, with one million setup calls. Not only that - the way Test::Unit works, it will actually create one million LargeTest instances. Each of those instances will have those two instance variables defined. Now, if you take a look at your test suites, you probably have fewer than one million tests in total. You also probably don't have objects that large all over the place. But remember, it's the object graph that counts. If you have a small object that refers to something else, the whole referral chain will be kept from garbage collection.
... Or God forbid - if you have a closure somewhere inside of that stuff. Closures are a good way to leak lots of memory, if they aren't collected. The way the structures work, they refer to many things all over the place. Leaking closures will kill your application.
What's the solution? Well, the good one would be for Test::Unit to change its implementation of TestCase.run to remove all instance variables after teardown. Lacking that, something like this will do it:
class Test::Unit::TestCase
  NEEDED_INSTANCE_VARIABLES = %w(@loaded_fixtures @_assertion_wrapped @fixture_cache @test_passed @method_name)

  def teardown_instance_variables
    teardown_real
    instance_variables.each do |name|
      unless NEEDED_INSTANCE_VARIABLES.include?(name)
        instance_variable_set name, nil
      end
    end
  end

  def teardown_real; end
  alias teardown teardown_instance_variables

  def self.method_added(name)
    if name == :teardown && !@__inside
      alias_method :teardown_real, :teardown
      @__inside = true
      alias_method :teardown, :teardown_instance_variables
      @__inside = false
    end
  end
end

This code will make sure that all instance variables except for those that Test::Unit needs will be removed at teardown time. That means the instances will still be there, but no memory will be leaked for the things you're using. Much better, but at the end of the day, I feel that the approach Test::Unit uses is dangerous. At some point, this probably needs to be fixed for real.
Tuesday, December 18, 2007
Joda Time
I spent a few hours this weekend converting RubyTime in JRuby to use Joda Time instead of Calendar or Date. That was a very nice experience actually. I'm incredibly impressed by Joda, and overall I think it was worth adding a new dependency to JRuby for this. The API is very nice, and immutability in these classes make things so much easier.
There were a few things I got a bit annoyed at, though. First, that Joda is ISO 8601 compliant is a really good thing, but I missed the ability to tune a few things. Being able to say which weekday a week should start on, for the calculation of the current week, would be very nice. As it is right now, that functionality has to use Calendar. It might be in Joda, but I couldn't find it.
The other thing I had a problem with - and this actually made me a bit annoyed - was how Joda handles GMT and UTC. Now, it says clearly in the documentation that Joda works with the UTC concept, and that GMT is not exactly the same thing. So why is it that this code passes (assuming an assertNotEquals helper):
public void testJodaStrangeNess() {
    assertEquals(DateTimeZone.UTC, DateTimeZone.forID("UTC"));
    assertEquals(DateTimeZone.UTC, DateTimeZone.forID("GMT"));
    assertEquals(DateTimeZone.UTC, DateTimeZone.forOffsetHours(0));
    assertNotEquals(DateTimeZone.UTC, DateTimeZone.forID("Etc/GMT"));
    assertNotEquals(DateTimeZone.forID("GMT"), DateTimeZone.forID("Etc/GMT"));
}
Yeah, you're reading it right - UTC and GMT are the same time zone, and +00:00 is the same as UTC too. But Etc/GMT is not the same as UTC or GMT or +00:00. Isn't that a bit strange?
Friday, December 14, 2007
JavaPolis report
Earlier today I attended the last sessions of this year's JavaPolis. This was the first time I attended, and I've been incredibly impressed by it. The whole conference has been very good.
I arrived on Monday, sneaking in on Brian Leonard and Charles's JRuby tutorial. I didn't see much of it though, and after that Charles and I had to prepare our session a bit, so no BOFs.
Tuesday I slept late (being sick and all), and then saw Jim Weaver's JavaFX tutorial, which was very adept. I feel I have a fairly good grasp of the capabilities of Java FX Script now, at least. There were a few BOFs I wanted to go to that evening, but since the speaker dinner/open bar was that night, I obviously chose that. Cue getting to bed at 3am, after getting home to the hotel from... uhm, somewhere in or around Antwerpen.
On Wednesday, the real conference started. My first session was the Groovy Update. I always enjoy seeing presentations of other language implementations, partly because I'm a language geek, but also because everyone has a very different presentation style, and I like to contrast them with each other. One thing I noticed about the Groovy presentation was that much of it was spent comparing Groovy to "other" languages.
Right, after that I saw two quickies - the first one about IntelliJ's new support for JRuby. And yes, this is support for JRuby, not just Ruby. You can use IntelliJ to navigate from Ruby code to Java code, where you have used that Java code in your Ruby. It looks really promising actually, and I spent some time showing the presenter a few more things that could be included. I don't know of any other IDE that supports JRuby-specific things like that, actually.
After that I saw Dick Wall's presentation on GWT. Since I have actually managed to avoid any knowledge about GWT, it was kinda interesting.
The next sessions didn't seem too interesting, so I worked a bit more on my presentation, and walked around talking to people.
Our presentation (Charles's and mine) went quite well, even though I managed to tank all the demonstrations quite heavily. For some reason I actually locked JIRB in comment mode and couldn't get out of it, and then I fell upon the block coercion bug that happens when you call a Java method that is overloaded so that one version takes no arguments and another takes the interface you want to coerce into. Charles didn't stop me until afterwards... =)
But yes, it went well. Lots of people in the audience, and lots of interest.
The final session of the day was the future of computing panel, with Gosling, Bloch, Gafter and Odersky. To be honest, I found it boring - Quinn was the moderator, but didn't really manage to get the panel enthused about anything.
After that, it was BOF time, I sat in on the Adobe one to pass the time, but didn't learn anything spectacular. The Groovy BOF was nice - it's always fun to see lots of code.
I started Thursday with the Scala presentation. Now, I didn't learn anything I didn't know here, but it was still a very good presentation. And oh, I found out that there is a Scala book on the way. (It's actually available as a Rough Cut from Artima. Very nice.)
The next session was supposed to be Bloch's Effective Java, but he used the spot to rant about the BGGA closures proposal instead. Of course, Joshua Bloch always rants in a very entertaining way, and he had chosen insidiously good examples for his point of view - but I'm still not convinced.
The Java Posse live show was good fun. After that I managed to see Bob Lee's Web Beans presentation, and then the one on JAX-RS. I don't really have much to say about those two. Except... am I the only one who is starting to get bored by annotations all over the place?
The day was nearly over, and then it was time for BOFs again - the main difference being that this time it was the JRuby BOF. All went well, except that Charles didn't show up, I didn't have a projector for the first half of the BOF, Tom introduced a bug on Wednesday that made all my examples fail, and so on. A huge thanks to Damian Steer, who saved me by keeping the audience entertained while I fixed the bug in front of everyone.
I sat through Chet Haase's talk about Update N, but didn't pay that much attention since I was hacking on JRuby.
Finally, it was time for the BOF on other new language features in Java, with Gafter and Bloch. This was actually very interesting stuff. It ended up being almost two hours, but I think most people got their fill of new language syntax. The question is, which parts are good? I particularly didn't like method extensions. All the proposals seem to lose the runtime component, and in that case it just stops being interesting. I would much rather see the language add real categories or something like that.
Friday was a lazy day. I sat in on the OSGi presentation and the TDD one, but nothing really exciting there either.
So that's my JavaPolis week. It's been a good time. And now I think it's time to have some more beers with JRuby people before moving out from here.
JDBC and DDL
It really is time for JDBC to add support for database-agnostic DDL. This is still one of the grosser areas of many database libraries (just look at dialects in Hibernate), and most of it is actually caused by DDL. But at the end of the day, most of the operations supported are exactly the same across databases. Am I the only one who thinks it would be nice to have programmatic access to DDL operations?
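For a feel of what I mean, the Ruby world already has something close in ActiveRecord migrations - this is plain ActiveRecord rather than anything JDBC offers today, but it's roughly the level of abstraction I'd like to see:

# Plain ActiveRecord migration, shown only as an illustration of database-agnostic DDL
class CreatePeople < ActiveRecord::Migration
  def self.up
    create_table :people do |t|
      t.column :name, :string
      t.column :born_on, :date
    end
    add_index :people, :name
  end

  def self.down
    drop_table :people
  end
end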
Sunday, December 09, 2007
JavaPolis
Tomorrow I'm going to JavaPolis - Charles and I are presenting on JRuby on Rails on Wednesday. We'll be there the whole week, so if you wanna get in touch, don't hesitate. We're aiming to taste many nice Belgian beers.
Except for that, it's actually quite quiet right now. There are some things in the works which will soon be announced, though.
Thursday, December 06, 2007
AspectJ and JRuby?
This is one of those idea-posts. There is no implementation and no code. But if someone wants to take the idea and do something with it, go ahead.
The gist of it is this: what if you could implement the actions for AspectJ in Ruby? You could define an on-load pointcut that matches everything and dispatches to Ruby. From there on you could do basically anything from the Ruby side - including dynamically changing the stuff happening. Of course, there would be a performance cost, but it could be incredibly useful for debugging, when you don't really want to restart your application and recompile every time you want to change the implementation of the aspect code.
Tuesday, November 27, 2007
ThoughtWorks calling Ruby developers in San Francisco
Friends! ThoughtWorks is hiring Ruby developers all over the world, but right now San Francisco is the hottest place to be. So if you're located in the Bay Area and want to work with Ruby and Rails, don't hesitate to make contact.
The time since I joined ThoughtWorks about 6 months ago has been the best of my life, and of all our offices around the world, I like the San Francisco one best. ThoughtWorks is really the home for people passionate about development and people who love Ruby.
If you have been following my blog, you know about Oracle Mix and other interesting things we have been doing out of the SF office. And there's more to come - exciting times!
Could ThoughtWorks in San Francisco be your home? Get in touch with recruiting here: http://www.thoughtworks.com/work-for-us/apply-online.html, or email me directly and I'll see to it that your information gets to the right place!
Joni merged to JRuby trunk
This is a glorious day! Joni (Marcin's incredible Java port of the Oniguruma regexp engine) has been merged to JRuby trunk. It seems to work really well right now.
I did some initial testing, and the Petstore numbers are more or less the same as before, actually. This is explained by the fact that I did the integration quite quickly and tried to get stuff working without concern for performance. We will go through the implementations and tune them for Joni soon, and this will absolutely give JRuby a valuable boost.
Marcin is also continuing to improve Joni performance, so overall this is a very nice approach.
Happy merge day!
Labels:
joni,
jruby,
oniguruma,
regular expressions
Sunday, November 25, 2007
JRuby regular expression update
It's been some time since I wrote about what's happening on JRuby trunk right now and what we're working on. The reason is that I've been really boring: I've spent all my time on regular expressions and the REJ implementation. Well, that's ended now. After Marcin got the Oniguruma port close enough, we are both focusing on that instead. REJ's implementation had some fundamental problems that would make it really hard to get better performance; in this regard, Joni is a better implementation. Also, Marcin is incredible at optimization, so if everything goes as planned, we're looking at better general regular expression performance, better compatibility and a much more competent implementation.
And boy am I bored by this now. =) I'd really like to get back to fixing bugs and getting JRuby ready for the next release. That might happen soon, though - I've spent the weekend getting Joni integrated with JRuby inside a branch, and today I reached the goal of getting everything to compile. Simpler programs, like jirb, run too. Our test suite fails, though, so there are still things to do. But getting everything compiling is a major milestone on the way to replacing JRegex in JRuby core. It shouldn't be too far off, and I think it's fair to say we will have Joni in JRuby 1.1. Actually, 1.1 is really going to be an awesome release.
Labels:
joni,
jruby,
oniguruma,
regular expressions
Thursday, November 22, 2007
The development of Oracle Mix
Rich Manalang just posted a very nice entry on the Oracle AppsLab about the technology behind Oracle Mix, how we developed it and so on. Read it here.
Labels:
jruby on rails,
mix,
oracle,
thoughtworks
Tuesday, November 20, 2007
Accumulators in Ruby
So, Ben Butler-Cole and I discussed the fact that accumulators in Ruby aren't really done in the obvious way. This is due to the somewhat annoying fact that nested method definitions with the def keyword aren't lexically scoped in Ruby, so you can't implement an internal accumulator the way you would in Python, Lisp, Haskell or other languages like that.
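A tiny sketch of the problem, just to show what the nested def actually does - it opens a brand new scope instead of closing over the outer one:

def outer
  x = 42
  def inner
    x          # raises NameError: inner does not see outer's local variables
  end
  inner
end

outer rescue puts $!   # => undefined local variable or method `x' ...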
I've seen three different ways to handle this in Ruby code. To illustrate, let's take the classic example of reversing a list. The functional way of doing this is to define an internal accumulator; this takes care of making the implementation tail recursive, and very efficient on linked lists.
So, the task is to reverse a list in a functional, recursive style. First version, using optional arguments:
class Reverser
  def reverse(list, index = list.length-1, result = [])
    return result if index == -1
    result << list[index]
    reverse(list, index - 1, result)
  end
end

So, this one uses two default arguments, which makes it very easy to reuse the same method in the recursive case. The problem here is that the optional arguments expose an implementation detail which the caller really has no need of knowing. The implementation is simple but it puts more burden on the caller. This is also the pattern I see in most places in Ruby code. From a design perspective it's not really that great.
So, the next solution is to just define a private accumulator method:
class Reverser
  def reverse(list)
    reverse_accumulator(list, list.length-1, [])
  end

  private

  def reverse_accumulator(list, index, result)
    return result if index == -1
    result << list[index]
    reverse_accumulator(list, index - 1, result)
  end
end

This is probably in many cases the preferable solution. It makes the interface easier, but adds to the class namespace. Ideally, the responsibility for the implementation of an algorithm should belong in one place; with this solution you might have it spread out all over. Which brings us to the original problem - you can't define lexically scoped methods within another method. So, in the third solution I make use of the fact that you can actually have recursive block invocations:
class Reverser
  def reverse(list)
    (rec = lambda do |index, result|
      return result if index == -1
      result << list[index]
      rec[index - 1, result]
    end)[list.length-1, []]
  end
end

The good thing about this implementation is that we avoid the added burden of both a divided implementation and an exposed implementation. It might seem a bit more complex to read if you're not familiar with the pattern. Remember that [] is an alias for the call method on Procs. Also, since the assignment to rec happens in a static scope we can actually refer to it from inside the block and get the right value. Finally, all assignments return the assigned value which means that we can just enclose everything in parens and apply it directly. Another neat aspect of this is that since the block is a closure, we don't need to pass the list variable around anymore.
Does anyone have a better solution for how to handle this? Accumulators aren't really that common in Ruby - is this a result of Ruby making functional programming unneat, or are they just not needed?
Monday, November 19, 2007
A new language
So, it's that time of the year again. The restlessness flows over me. I feel cold and numb. And no, it's not because I live in London - it's because I need the warmth of learning a new language.
Now, I want something I can actually get into and learn. I've tried to get into OCaml, but I gotta admit I hate the type system. I have no problem with bondage static typed languages (Haskell's type system is really nice, for example) but OCaml's really feels like half of it exists just to cover up holes in the other half. There seems to be a large overlap in functionality, and lots of workarounds for handling things that should be simple.
I'm half way into Erlang, but for several reasons the language feels very primitive.
I've kinda thought about maybe getting serious with Scala. I like many of the language features, it's a nicely designed language and so on. But - hear this, people - I would love to get away from the JVM for a while, just for the sake of it. I can do Scala later. I actually have a medium sized project lined up for my Scala learning. But not right now.
So, what do I want? Something I haven't touched before. I would love something that involves radically new language features, if there are any left to discover. I have no need for it to be static or dynamic specifically - it doesn't really matter. It would be fun if it's new, but if it's old, good and still in use in some sectors, that would be fun too. Specifically, something that's not mainly run on the JVM or CLR. And of course, not any of the "mainstream" languages, which I actually tend to know fairly well (and yeah, to my sorrow that includes the whole W-family...).
Please help me! Give this December new meaning for me. I promise, if someone comes up with a nice language to try out, I'll be very fair to it when I evaluate and learn it. =)
Ruby memory leaks
They aren't really common, but they do exist. As with any other garbage collected language, you can still be susceptible to memory leaks. In many cases they can also be very insidious. Say that you have a really large Rails application. After some time it grinds to a halt, CPU bound in GC. It may not even be a leak; it could just be something that creates so much garbage that the collector cannot keep up with it.
I gotta admit, I'm not sure how to find such a problem. After getting histograms of objects, trying to profile it, and maybe running it with ruby-debug, I would be out of options. Maybe some kind of shotgun technique - shutting down parts of the application, trying to pinpoint the location of the problem.
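For what it's worth, the "histograms of objects" part can be done in plain Ruby with ObjectSpace. This is only a minimal sketch of the idea, not a full leak hunter:
# Count live objects per class. Walking the whole heap is slow on a big
# application, but it gives a quick picture of what is growing.
counts = Hash.new(0)
ObjectSpace.each_object { |obj| counts[obj.class] += 1 }

# Print the twenty most common classes
counts.sort_by { |klass, count| -count }.first(20).each do |klass, count|
  puts "#{count}\t#{klass}"
end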
Now, ordinarily, that would have been the end of my search. A failure. Or maybe several weeks of trying to read through the sources.
The alternative? Run the application in JRuby. See if the same memory leak shows up (remember, it might be a bad interaction with MRI's runtime system that gives you grief, or maybe even a bug in the MRI garbage collector). But if it doesn't go away, you're in luck. Wait until the CPU starts chugging for real, and then take a heap dump using the jmap JDK tool. Once that's done, you'll be sitting with a large honking binary file that you can't do much with. The standard way of reading it is through jhat, but that doesn't give you much to go on.
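For reference, the jmap invocation looks something like this - this is the Java 6 form, the exact flags depend on your JDK version, and the pid is of course whatever your JRuby process happens to be:
jmap -dump:format=b,file=heap.bin <pid>
# and the standard (but not very helpful) way of browsing the result:
jhat heap.bin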
But then I found this wonderful tool called SAP Memory Analyzer. Google it and download it. It's marvelous. Easily the best heap analyzer I've run across in a long time. Its only flaw is that it runs in Eclipse... But well, it can't have everything, right?
Once you've opened up the file in SAP Memory Analyzer, you can do pretty much everything. It's quite self-explanatory. The way I usually go about things is to use the core option, and then choose "find_leak". That almost always gives me some good suspects that I can continue investigating. From there on it's just a matter of drilling down and finding out exactly what's going on.
Tell me if you can do that anywhere near as easily with MRI. I would love to know. But right now, JRuby is kicking butt in this regard.
Labels:
jruby,
memory leak,
rails,
ruby on rails
Sunday, November 18, 2007
Oracle developers on OSX unite!
All my ranting aside, Oracle RDBMS is pretty good. It's got good performance, and lots of features you really need in a database. I shan't proclaim it my favorite database, but it's definitely something I have no problem working with. Except for that one small detail...
Yeah, you guessed it. Oracle support on Mac OS X is kinda... nonexistent. The best solution I've come up with is to run Parallels with a Windows or Linux instance and run Oracle XE inside of that. But that only works if I want to use the JDBC thin driver. OCI development? You're screwed. And the Parallels route isn't exactly painless either, especially from a performance point of view.
So what do we need? OCI8 precompiled binaries would be a good start. But in the end, the only workable solution for all the developers on OS X who want to be able to use Oracle is a compatible Oracle XE for Intel OS X. It shouldn't really be too hard, right? It's just a BSD beneath the covers...
Anyway, it's kinda interesting. If you're a consultant or a developer, OS X is definitely the superior platform. That's a fact (well, except for Java 6...). The lack of Oracle support forces people to develop their application against Postgres and then let continuous integration - you are using CI, right? - tell you if you made any Oracle-unfriendly mistakes. That doesn't really sound too professional.
So, go on and vote for this in Oracle Mix. The links are here: https://mix.oracle.com/ideas/we-need-the-oracle-clients-oci-jdbc-for-the-apple-intel-osx-platform,
https://mix.oracle.com/ideas/compile-oracle-xe-for-intel-os-x
Monday, November 12, 2007
Oracle Mix has launched
For the last 5 weeks, a team consisting of me, Alexey Verkhovsky, Matt Wastrodowski and Toby Tripp from ThoughtWorks, and Rich Manalang from Oracle has created a new application based on an internal Oracle application. This site is called Oracle Mix, and is aimed to be the way Oracle's customers communicate with Oracle and each other, suggesting ideas, answering each other's questions and generally networking.
Why is this a huge deal? Well, for me personally it's really kinda cool... It's the first public JRuby on Rails site in existence. It's deployed on the "red stack": Oracle Enterprise Linux, Oracle Application Server, Oracle Database, Oracle SSO, Oracle Internet Directory. And JRuby on Rails.
It's cool. Go check it out: http://mix.oracle.com.
Labels:
jruby,
jruby on rails,
mix,
oracle,
thoughtworks
QCon San Francisco recap
Last week I attended QCon San Francisco, a conference organized by InfoQ and Trifork (the company behind JAOO). I must admit that I was very positively surprised. I had expected it to be good, but I was blown away by the quality of most presentations. The conference had a system where you rated sessions by handing in a green, yellow or red card - I think I handed in two yellow cards, and the rest were green.
Everything started out with tutorials. I didn't go to the first tutorial day, but the second-day tutorial was my colleagues Martin Fowler and Neal Ford talking about Domain Specific Languages, so I decided to attend that. All in all it was lots of very interesting material. Sadly, I managed to get slightly food-poisoned by the lunch, so I didn't stay for the whole day.
On Wednesday, Kent Beck started the conference proper with a really good keynote on why Agile development really isn't anything other than the way the world expects software development to happen nowadays. It's clear to see that the Agile way provides many of the ilities that we have a responsibility to deliver. A very good talk.
After that Richard Gabriel delivered an extremely interesting presentation on how to think about ultra-large, self-sustaining systems, and how we must shift the way we think about software to be able to handle large challenges like this.
The afternoon sessions were dominated by Brian Goetz's extremely accomplished presentation on concurrency. I really liked seeing most of the knowledge available right now distilled into a 45-minute presentation, discussing most of the things we as programmers need to think about regarding concurrency. I am so glad other people are concentrating on these hard problems, though - concurrency scares me.
The panel on the future of Java was interesting, although I didn't really agree with some of the conclusions Rod Johnson and Josh Bloch arrived at.
The day was capped by Richard Gabriel doing a keynote called 50 in 50. I'm not sure keynote is the right word. A poem, maybe? Or just a performance. It was very memorable, though. And beautiful. It's interesting that you can apply that word to something that discusses different programming languages, but there you have it.
On Thursday I was lazy and didn't attend as many sessions as I did on Wednesday. I saw Charles doing the JRuby presentation, Neal Ford discussing DSLs again, and my coworker Jim Webber ranting about REST, SOA and WSDL. (Highly amusing, but beneath the hilarious surface Jim definitely had something very important to say about how we build Internet applications. I totally agree. Read his blog for more info.)
Friday was also very good, but I missed the session about the Second Life architecture, which seemed very interesting. Justin Gehtland talked about CAS and OpenID in Rails, both solutions that I think are really important and have their place in basically any organization. Something he said that rang especially true with me is that a Single Sign-On architecture isn't just about security - it's a way to make it easier to refactor your applications, giving you the possibility to combine or separate applications at will. Very good. Although it was scary to see the code the Ruby CAS server uses to generate token IDs. (Hint: it's very easy to attack that part of the server.)
Just to strike a balance I had to satisfy my language geekery by attending Erik Meijer's presentation on C#. It was really good fun, and Erik didn't get annoyed at the fact that Josh Graham and I interrupted him after more or less every sentence with new questions.
Finally, I saw half of Obie's talk about the new REST support in Rails 2.0 (and he gave me a preview copy of his book - review forthcoming). There is lots of stuff there that can really make your application so much easier to code. Nice.
The day ended with two panels: first me, Charles, Josh Susser, Obie and James Cox talking about Rails, the future of the framework, and a bit about the FUD that inevitably happens.
The final panel was Martin Fowler moderating me, Erik Meijer, Aino Vonge Corry and Dan Pritchett, talking about the things we had seen at the conference. The discussion ranged from large scale architecture down to concurrency implementations. Hopefully the audience were satisfied.
All in all, an incredibly good time.
Thursday, November 01, 2007
JRuby 1.0.2 released
The JRuby community is pleased to announce the release of JRuby 1.0.2.
Homepage: http://www.jruby.org/
Download: http://dist.codehaus.org/jruby/
JRuby 1.0.2 is a minor release of our stable 1.0 branch. The fixes in this
release include primarily obvious compatibility issues that we felt were
low risk. We periodically push out point releases to continue supporting
production users of JRuby 1.0.x.
Highlights:
- Fixed several nasty issues for users on Windows
- Fixed a number of network compatibility issues
- Includes support for Rails 1.2.5
- Reduced memory footprint
- Improved File IO performance
- trap() fix
- 99 total issues resolved since JRuby 1.0.1
Special thanks to the new JRuby contributors who rose to Charlie's challenge
to write patches for some outstanding bugs: Riley Lynch, Mathias Biilmann
Christensen, Peter Brant, and Niels Bech Nielsen. Welcome aboard...
Wednesday, October 31, 2007
An interesting memory leak in JRuby
The last two days I had lots of fun with the interesting task of finding a major memory leak in JRuby. The only way I could reliably reproduce it was by running Mingle's test suite and watching memory being eaten. I tried several approaches, the first being using jhat to analyze the heap dumps. That didn't really help me much, since all the interesting queries I tried to run with OQL had a tendency to just cause out-of-memory errors. Not nice.
Next step was to install SAP Memory Analyzer, which actually worked really well, even though it's built on top of Eclipse. After several false starts, including one where I thought we had found the memory leak, I finally got somewhere. Actually, I did find a memory leak in our method cache implementation. But alas, after fixing that it was obvious there was another leak in there.
I finally got SAP Memory Analyzer to tell me that RubyClass instances were being retained. But when I tried to find the root chain to see how that happened, I couldn't see anything strange. In fact, what I saw was the normal chaining of frames, blocks, classes and other interesting parts. And this is really the problem when debugging this kind of issue in JRuby: since a leak will almost always be leaking several different kinds of objects, it can be hard to pinpoint the exact problem. In this case my guess was that the problem was in a large branch that Bill merged a few weeks back, so I tried going back to it and checking. Alas, the branch was good. In fact, since I went back 200 revisions, I finally knew within which range the problem had to be. Since I couldn't find anything more from the heap dumps, I resorted to the venerable tradition of binary search: going through the revisions and finding the faulty one. According to log2, I would find the bad revision in at most 8 tries (ceil(log2 200) = 8), so I started out.
After a while I actually found the problem. Let me show it to you here:
def __jtrap(*args, &block)
  sig = args.first
  sig = SIGNALS[sig] if sig.kind_of?(Fixnum)
  sig = sig.to_s.sub(/^SIG(.+)/,'\1')
  signal_class = Java::sun.misc.Signal
  signal_class.send :attr_accessor, :prev_handler
  signal_object = signal_class.new(sig) rescue nil
  return unless signal_object
  signal_handler = Java::sun.misc.SignalHandler.impl do
    begin
      block.call
    rescue Exception => e
      Thread.main.raise(e) rescue nil
    ensure
      # re-register the handler
      signal_class.handle(signal_object, signal_handler)
    end
  end
  signal_object.prev_handler = signal_class.handle(signal_object, signal_handler)
end
This is part of our signal handling code. Interestingly enough, I was nonplussed. How could trap leak? I mean, no one actually calls trap enough times to make it leak, right?
Well, wrong. Actually, it seems that ActiveRecord traps abort in transactions and then restores the original handler. So each transaction created new trap handlers. That would have been fine, except for the last line. In effect, in the current signal handler we save a reference to the previous signal handler. After a few iterations we will have a long chain of signal handlers, all pointing back, all held by a hard reference from one of the static root sets in the JVM (namely, the list of all signal handlers). That on its own isn't so bad. Except a saved block has references to dynamic scopes (which reference variables). It has a reference to the Frame, and the Frame has references to RubyClass. RubyClass has references to method objects, and method objects in some cases have references to RubyProcs, which in turn have more references to Blocks. At the end, we have a massive leak.
The solution? To simply remove the saving of the previous handler and simplify the signal handler.
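Just to illustrate the shape of the fix - this is a simplified sketch, not the exact code that went into JRuby - the handler ends up not keeping any reference to the previous handler at all:
def __jtrap(*args, &block)
  sig = args.first
  sig = SIGNALS[sig] if sig.kind_of?(Fixnum)
  sig = sig.to_s.sub(/^SIG(.+)/,'\1')
  signal_object = Java::sun.misc.Signal.new(sig) rescue nil
  return unless signal_object
  signal_handler = Java::sun.misc.SignalHandler.impl do
    # Just run the block; nothing here closes over the old handler,
    # so no chain of handlers (and Blocks, Frames, RubyClasses) builds up.
    block.call rescue nil
  end
  Java::sun.misc.Signal.handle(signal_object, signal_handler)
end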
QCon and OpenWorld
As mentioned before, I will be in San Francisco next week for QCon, and the week after that for Oracle OpenWorld. I will be part of a panel debate at QCon and man a booth at Oracle OpenWorld. In fact, if you're attending OpenWorld you should visit the ThoughtWorks booth at 343 Moscone South. Looking forward to seeing you there.
Sunday, October 28, 2007
Michael Gira and Tim Bray
Don't Tim Bray and Michael Gira look very much alike? See for yourself:
I'm not sure if Tim has the voice of Michael, though. Anyway, I gotta say, warming up at a preparty with Bill Hicks and then a 90-minute Michael Gira show was awesome.
Is this the wrong blog for this? Yeah, probably, but the Tim Bray inclusion warrants it. =)
Friday, October 26, 2007
Current state of Regular Expressions
As I've made clear earlier, the current regular expression situation has once again become impractical. To reiterate the history: We began with regular Java regex support. This started to cave in when we found out that the algorithm used is actually recursive, and fails for some common regexps used inside Rails, among others. To fix that, we integrated JRegex instead. That's the engine 1.0 was released with and is still the engine in use. It works fairly well, and is fast for a Java engine. But not fast enough. In particular, there is no support for searching for exact strings and failing fast, and the engine requires us to transform our byte[]-strings to char[] or String. Not exactly optimal. Another problem is that compatibility with MRI suffers, especially in the multibyte support.
There are two solutions currently on the way. Core developer Marcin is working on a port of the 1.9 regexp engine Oniguruma. This port still has some way to go, and is not integrated with JRuby yet. The other effort is called REJ, and is a port of the MRI engine I did a few months back. I've freshened up the work and integrated it with JRuby in a branch. At the moment this work actually seems to be going quite well, but there are some snags.
First of all, let me point out that this approach gives us more or less total multibyte compatibility for 1.8, which is quite nice.
When doing benchmarking, I'm generally using Rails as the bar. I have a series of regular expressions that Petstore uses for each request, and I'm using these to check performance. As a first data point, JRuby+REJ is faster at parsing regexps than JRuby trunk for basically all regexps. This ranges from slightly faster to twice as fast.
Most of the Rails regexen are actually faster in REJ than in JRuby trunk, but the problem is that some of them are actually quite a bit slower. 4 of the 22 Rails regexps are slower, by between 20 and 250 percent. There is also this one: /.*_f/ =~ "_fxxxxxxxxxxxxxxxxxxxxxxx", which basically runs about 10x slower than JRuby trunk. Not nice at all.
In the end, the problem is backtracking. Since REJ is a straight port of the MRI code, the backtracking is also ported. But it seems that Java is unusually bad at handling that specific algorithm, and it performs quite badly. At the moment I'm continuing to look at it and trying to improve performance in all ways possible, so we'll see what happens. Charles Nutter has also started to look at it.
But what's really interesting is that I reran my Petstore benchmarks with the current REJ code. To rehash, my last results with JRuby trunk looked like this:
controller : 1.804000 0.000000 1.804000 ( 1.804000)
view : 5.510000 0.000000 5.510000 ( 5.510000)
full action: 13.876000 0.000000 13.876000 ( 13.876000)
But the results from rerunning with REJ were interesting, to say the least. I expected bad results because of the bad backtracking performance, but it seems the other speed improvements weigh up:
controller : 1.782000 0.000000 1.782000 ( 1.782000)
view : 4.735000 0.000000 4.735000 ( 4.735000)
full action: 12.727000 0.000000 12.727000 ( 12.727000)
As you can see, the improvement is quite large in the view numbers. It is also almost there compared to MRI, which had 4.57. Finally, the full action is better by a full second too. Again, MRI is 9.57s and JRuby 12.72. It's getting closer. I am quite optimistic right now; provided that we manage to fix the remaining problems with backtracking, our regexp engine might well be a great boon to performance.
Labels:
jruby,
performance,
regular expressions
Interesting times in JRuby land
The last week or two have been quite interesting. We are finally starting to see performance numbers that seem good enough. Take a look at Nick's posts here and here for more information on this. The second post contains some valuable tips. In particular, make sure to turn off ObjectSpace and run with -server. Both of these will improve your performance and scalability quite a lot.
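Concretely, that just means starting JRuby with flags along these lines (the script name is only a stand-in for whatever you are running):
# -J-server passes -server straight through to the JVM, and -O turns off ObjectSpace
jruby -J-server -O your_script.rb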
Secondly, JRuby on Rails on Oracle Application Server. Nice, huh? I would imagine more interesting things coming out of all this.
But the end message seems to be that JRuby is really ready now. The 1.1 release looks like it's going to be something really amazing. I can't wait!
Tuesday, October 16, 2007
Updated JRuby on Rails performance numbers
So, after my last post, several of us have spent time looking at different parts of Rails and JRuby performance. We have managed to improve things quite nicely since my last performance note. Some of the things changed have been JRuby's each_line implementation, JRuby's split implementation, a few other improvements, and some small fixes to AR-JDBC. After that, here are some new numbers for Petstore. Remember, the MRI numbers are for MRI 1.8.6 with native C MySQL. The JRuby numbers are with ActiveRecord-JDBC trunk and MySQL. (I'm only showing the best numbers for each.) The number in bold is the most important one for comparison.
MRI controller : 1.000000 0.070000 1.070000 ( 1.430260)
JRuby controller : 1.804000 0.000000 1.804000 ( 1.804000)
MRI view : 4.410000 0.150000 4.560000 ( 4.575399)
JRuby view : 5.510000 0.000000 5.510000 ( 5.510000)
MRI full action: 8.260000 0.410000 8.670000 ( 9.574460)
JRuby full action: 13.876000 0.000000 13.876000 ( 13.876000)
As you can see, we are talking about 9.5s MRI to 13.8s for JRuby, which I find is quite a nice achievement if you look at the numbers from Friday. We are inching closer and closer. Both the view and the controller numbers are looking very nice. This is actually indicative of a nice trend - since general JRuby primitive performance is really good, the slowness in our regular expression engine is weighed up by much faster execution speed.
Once the port of Oniguruma lands, this story will almost certainly look very different. But even so, this is looking good.
Labels:
jruby,
jruby on rails,
performance,
petstore
Sunday, October 14, 2007
JRuby discovery number one
After my last entry I've spent lots of time checking different parts of JRuby, trying to find the one true bottleneck for Rails. Of course, I still haven't found it (otherwise I would have said YAY in the subject for this blog). But I have found a few things - for example, symbols are slow right now, but Bill's work will make them better. And it doesn't affect Rails performance at all.
But the discovery I made was when I looked at the performance of the regular expressions used in Rails. There are exactly 50 of them for each request, so I wrote a script that checked the performance of each of them against MRI. And I found that there was one in particular that had really interesting performance when comparing MRI to JRuby. In fact, it was between 200 and 1000 times slower. What's worse, the performance wasn't linear.
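The script itself was nothing fancy. A stripped-down sketch of the idea looks something like the following - the regexps and the sample string here are placeholders, not the actual 50 Rails ones - and you simply run it once under MRI and once under JRuby and compare:
require 'benchmark'

# Placeholder data: the real script used the regexps Rails applies per
# request and a representative request string.
REGEXPS = [/.*?\n/m, /^content-type:/i, /[\w\-]+/]
SAMPLE  = "Content-Type: text/html; charset=utf-8\r\n" + ("x" * 1000)

REGEXPS.each do |re|
  time = Benchmark.realtime { 10_000.times { re =~ SAMPLE } }
  puts "#{re.inspect}\t#{time}"
end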
So which regular expression was the culprit? Well, /.*?\n/m. That doesn't look too bad. And in fact, this expression displayed not one, but two problems with JRuby. The first one is that any regular expression engine should be able to fail fast on something like this, simply because there is a literal string (the newline) that always needs to be part of the input for this expression to match. In MRI, that part of the engine is called bm_search, and is a very fast way to fail. JRuby doesn't have that. Marcin is working on a port of Oniguruma though, so that will fix that part of the problem.
But wait, if you grep for this regexp in the Rails sources you won't find it. So where was it actually used? Here is the kicker: it was used in JRuby's implementation of String#each_line. So, let's take some time to look at a quick benchmark for each_line:
require 'benchmark'
str = "Content-Type: text/html; charset=utf-8\r\nSet-Cookie: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa "
TIMES=100_000
puts "each_line on small string with several lines"
10.times do
puts(Benchmark.measure{TIMES.times { str.each_line{} }})
end
str = "abc" * 15
puts "each_line on short string with no line divisions"
10.times do
puts(Benchmark.measure{TIMES.times { str.each_line{} }})
end
str = "abc" * 4000
puts "each_line on large string with no line divisions"
10.times do
puts(Benchmark.measure{TIMES.times { str.each_line{} }})
end
As you can see, we simply measure the performance of doing 100,000 each_line calls on three different strings. The first one is a short string with several newlines, the second is a short string with no newlines, and the last is a long string with no newlines. How does MRI run this benchmark?
each_line on small string with several lines
0.160000 0.000000 0.160000 ( 0.157664)
0.150000 0.000000 0.150000 ( 0.160450)
0.160000 0.000000 0.160000 ( 0.171563)
0.150000 0.000000 0.150000 ( 0.157854)
0.150000 0.000000 0.150000 ( 0.154578)
0.150000 0.000000 0.150000 ( 0.154547)
0.160000 0.000000 0.160000 ( 0.158894)
0.150000 0.000000 0.150000 ( 0.158064)
0.150000 0.010000 0.160000 ( 0.156975)
0.160000 0.000000 0.160000 ( 0.156857)
each_line on short string with no line divisions
0.080000 0.000000 0.080000 ( 0.086789)
0.090000 0.000000 0.090000 ( 0.084559)
0.080000 0.000000 0.080000 ( 0.093477)
0.090000 0.000000 0.090000 ( 0.084700)
0.080000 0.000000 0.080000 ( 0.089917)
0.090000 0.000000 0.090000 ( 0.084176)
0.080000 0.000000 0.080000 ( 0.086735)
0.090000 0.000000 0.090000 ( 0.085536)
0.080000 0.000000 0.080000 ( 0.084668)
0.090000 0.000000 0.090000 ( 0.090176)
each_line on large string with no line divisions
3.350000 0.020000 3.370000 ( 3.404514)
3.330000 0.020000 3.350000 ( 3.690576)
3.320000 0.040000 3.360000 ( 3.851804)
3.320000 0.020000 3.340000 ( 3.651748)
3.340000 0.020000 3.360000 ( 3.478186)
3.340000 0.020000 3.360000 ( 3.447704)
3.330000 0.020000 3.350000 ( 3.448651)
3.350000 0.010000 3.360000 ( 3.489842)
3.350000 0.020000 3.370000 ( 3.429135)
3.350000 0.010000 3.360000 ( 3.372925)
OK, this looks reasonable. The large string is obviously taking more time to search, but not incredibly much time. What about trunk JRuby?
each_line on small string with several lines
32.668000 0.000000 32.668000 ( 32.668000)
30.785000 0.000000 30.785000 ( 30.785000)
30.824000 0.000000 30.824000 ( 30.824000)
30.878000 0.000000 30.878000 ( 30.877000)
30.904000 0.000000 30.904000 ( 30.904000)
30.826000 0.000000 30.826000 ( 30.826000)
30.550000 0.000000 30.550000 ( 30.550000)
32.331000 0.000000 32.331000 ( 32.331000)
30.971000 0.000000 30.971000 ( 30.971000)
30.537000 0.000000 30.537000 ( 30.537000)
each_line on short string with no line divisions
7.472000 0.000000 7.472000 ( 7.472000)
7.350000 0.000000 7.350000 ( 7.350000)
7.516000 0.000000 7.516000 ( 7.516000)
7.252000 0.000000 7.252000 ( 7.252000)
7.313000 0.000000 7.313000 ( 7.313000)
7.262000 0.000000 7.262000 ( 7.262000)
7.383000 0.000000 7.383000 ( 7.383000)
7.786000 0.000000 7.786000 ( 7.786000)
7.583000 0.000000 7.583000 ( 7.583000)
7.529000 0.000000 7.529000 ( 7.529000)
each_line on large string with no line divisions
Oops. That doesn't look so good... And also, where are the last ten lines? Eh... it's still running. It's been running for two hours trying to produce the first line. That means it's taking at least 7200 seconds, which is more than 2400 times slower than MRI. But in fact, since the matching of the regular expression above is not linear, but exponential in performance, I don't expect this to ever finish.
There are a few interesting lessons to take away from this exercise:
- There may still be implementation problems like this in many parts of JRuby - performance will improve quite a lot every time we find something like this. I haven't measured Rails performance after this is fixed, and I don't expect it to actually fix the whole problem, but I think I'll see better numbers.
- Understand regular expressions. Why is /.*?\n/ so incredibly bad for strings over a certain length? In this case it's the combination of .* and ?. What would be a better implementation in almost all cases? /[^\n]*\n/. Notice that there is no backtracking in this implementation, and because of that, this regexp will have performance O(n) while the earlier one was O(n^2). Learn and know these things. They are the difference between usage and expertise. (A quick benchmark sketch follows below.)
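To play with this yourself, here is a minimal sketch that times both expressions against a long string with no newlines at all - the worst case discussed above. How dramatic the gap is depends entirely on the engine you run it on:
require 'benchmark'

str = "abc" * 4000   # a long string with no newline, so neither regexp can match

[/.*?\n/m, /[^\n]*\n/].each do |re|
  # Keep the iteration count modest; on a slow engine the lazy version
  # can take a very long time.
  time = Benchmark.realtime { 100.times { re =~ str } }
  puts "#{re.inspect}\t#{time}"
end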
Labels:
jruby,
performance,
regular expressions
Friday, October 12, 2007
Mystery: An exposé on JRuby performance
So, after Charlie's awesome work yesterday (documented here), I felt it was time I put the story straight on general JRuby performance. It's something of a mystery. I'll begin by showing you the results of lots of different benchmarks and comparisons to MRI. All these benchmarks have been run on my MacBook Pro, dual 2.33GHz cores with 2GB memory. JRuby revision is 4578, Java is Apple 1.6 beta, and Ruby version is 1.8.6 (2007-03-13 patchlevel 0). JRuby was run with -J-server and -O.
Now, let's begin with the YARV benchmarks. The first value is always MRI, the second is JRuby. I've marked a benchmark red when MRI is faster, and green when JRuby is faster.
So, this is the baseline against MRI. Let's take a look at a few other benchmarks. These can all be found in JRuby, in the test/bench directory. When you run them, all of them generate 5 or 10 runs of all measures, but I've simply taken the best one for each. The repetition is to allow Java to warm up. Here are a few different benchmarks, same convention and running parameters as above:
But here is the mystery. General Rails performance sucks. This test case can be run by checking out petstore from tw-commons at RubyForge. There is a file called script/console_bench that will run the benchmarks. The two commands run were: ruby script/console_bench production, and jruby -J-server -O script/console_bench ar_jdbc. Without further ado, here are the numbers:
bm_app_answer.rb:
0.480000 0.010000 0.490000 ( 0.511641)
0.479000 0.000000 0.479000 ( 0.478000)
bm_app_factorial.rb:
ERROR stack level too deep
2.687000 0.000000 2.687000 ( 2.687000)
bm_app_fib.rb:
5.880000 0.030000 5.910000 ( 6.151324)
2.687000 0.000000 2.687000 ( 2.687000)
bm_app_mandelbrot.rb:
1.930000 0.010000 1.940000 ( 1.992984)
2.752000 0.000000 2.752000 ( 2.876000)
bm_app_pentomino.rb:
84.160000 0.450000 84.610000 ( 88.205519)
77.117000 0.000000 77.117000 ( 77.117000)
bm_app_raise.rb:
2.600000 0.430000 3.030000 ( 3.169156)
2.162000 0.000000 2.162000 ( 2.162000)
bm_app_strconcat.rb:
1.390000 0.010000 1.400000 ( 1.427766)
1.003000 0.000000 1.003000 ( 1.003000)
bm_app_tak.rb:
7.570000 0.060000 7.630000 ( 7.888381)
2.676000 0.000000 2.676000 ( 2.676000)
bm_app_tarai.rb:
6.020000 0.030000 6.050000 ( 6.186971)
2.236000 0.000000 2.236000 ( 2.236000)
bm_loop_times.rb:
4.240000 0.020000 4.260000 ( 4.404826)
3.354000 0.000000 3.354000 ( 3.354000)
bm_loop_whileloop.rb:
9.450000 0.050000 9.500000 ( 9.678552)
5.037000 0.000000 5.037000 ( 5.037000)
bm_loop_whileloop2.rb:
1.890000 0.010000 1.900000 ( 1.936502)
1.039000 0.000000 1.039000 ( 1.039000)
bm_so_ackermann.rb:
ERROR stack level too deep
4.928000 0.000000 4.928000 ( 4.927000)
bm_so_array.rb:
5.580000 0.020000 5.600000 ( 5.709101)
5.552000 0.000000 5.552000 ( 5.552000)
bm_so_concatenate.rb:
1.580000 0.040000 1.620000 ( 1.647592)
1.602000 0.000000 1.602000 ( 1.602000)
bm_so_exception.rb:
4.170000 0.390000 4.560000 ( 4.597234)
4.683000 0.000000 4.683000 ( 4.683000)
bm_so_lists.rb:
0.970000 0.030000 1.000000 ( 1.036678)
0.814000 0.000000 0.814000 ( 0.814000)
bm_so_matrix.rb:
1.700000 0.010000 1.710000 ( 1.765739)
1.878000 0.000000 1.878000 ( 1.879000)
bm_so_nested_loop.rb:
5.130000 0.020000 5.150000 ( 5.258066)
4.661000 0.000000 4.661000 ( 4.661000)
bm_so_object.rb:
5.480000 0.030000 5.510000 ( 5.615154)
3.095000 0.000000 3.095000 ( 3.095000)
bm_so_random.rb:
1.760000 0.010000 1.770000 ( 1.806116)
1.495000 0.000000 1.495000 ( 1.495000)
bm_so_sieve.rb:
0.680000 0.010000 0.690000 ( 0.705296)
0.853000 0.000000 0.853000 ( 0.853000)
bm_vm1_block.rb:
19.920000 0.110000 20.030000 ( 20.547236)
12.698000 0.000000 12.698000 ( 12.698000)
bm_vm1_const.rb:
15.720000 0.080000 15.800000 ( 16.426734)
7.654000 0.000000 7.654000 ( 7.654000)
bm_vm1_ensure.rb:
14.530000 0.070000 14.600000 ( 15.137106)
7.588000 0.000000 7.588000 ( 7.589000)
bm_vm1_length.rb:
17.230000 0.090000 17.320000 ( 20.406438)
6.415000 0.000000 6.415000 ( 6.416000)
bm_vm1_rescue.rb:
11.520000 0.040000 11.560000 ( 11.736435)
5.604000 0.000000 5.604000 ( 5.603000)
bm_vm1_simplereturn.rb:
17.560000 0.080000 17.640000 ( 18.178065)
5.413000 0.000000 5.413000 ( 5.413000)
bm_vm1_swap.rb:
22.160000 0.110000 22.270000 ( 22.698746)
6.836000 0.000000 6.836000 ( 6.835000)
bm_vm2_array.rb:
5.600000 0.020000 5.620000 ( 5.675354)
1.844000 0.000000 1.844000 ( 1.844000)
bm_vm2_method.rb:
9.800000 0.030000 9.830000 ( 9.918884)
5.152000 0.000000 5.152000 ( 5.151000)
bm_vm2_poly_method.rb:
13.570000 0.050000 13.620000 ( 13.803066)
10.289000 0.000000 10.289000 ( 10.289000)
bm_vm2_poly_method_ov.rb:
3.990000 0.010000 4.000000 ( 4.071277)
1.750000 0.000000 1.750000 ( 1.749000)
bm_vm2_proc.rb:
5.670000 0.020000 5.690000 ( 5.723124)
3.267000 0.000000 3.267000 ( 3.267000)
bm_vm2_regexp.rb:
0.380000 0.000000 0.380000 ( 0.387671)
0.961000 0.000000 0.961000 ( 0.961000)
bm_vm2_send.rb:
3.720000 0.010000 3.730000 ( 3.748266)
2.135000 0.000000 2.135000 ( 2.136000)
bm_vm2_super.rb:
4.100000 0.010000 4.110000 ( 4.138355)
1.781000 0.000000 1.781000 ( 1.781000)
bm_vm2_unif1.rb:
3.320000 0.010000 3.330000 ( 3.348069)
1.385000 0.000000 1.385000 ( 1.385000)
bm_vm2_zsuper.rb:
4.810000 0.020000 4.830000 ( 4.856368)
2.920000 0.000000 2.920000 ( 2.921000)
bm_vm3_thread_create_join.rb:
0.000000 0.000000 0.000000 ( 0.006621)
0.368000 0.000000 0.368000 ( 0.368000)
What is interesting about these numbers is that almost all of them are way faster, and the ones that are slower are so by quite a narrow margin (except for the regexp and thread tests).
bench_fib_recursive.rb
----------------------
1.390000 0.000000 1.390000 ( 1.412710)
0.532000 0.000000 0.532000 ( 0.532000)
bench_method_dispatch.rb
------------------------
Control: 1m loops accessing a local variable 100 times
2.830000 0.010000 2.840000 ( 2.864822)
0.105000 0.000000 0.105000 ( 0.105000)
Test STI: 1m loops accessing a fixnum var and calling to_i 100 times
10.000000 0.030000 10.030000 ( 10.100846)
2.111000 0.000000 2.111000 ( 2.111000)
Test ruby method: 1m loops calling self's foo 100 times
16.130000 0.060000 16.190000 ( 16.359876)
7.971000 0.000000 7.971000 ( 7.971000)
bench_method_dispatch_only.rb
-----------------------------
Test ruby method: 100k loops calling self's foo 100 times
1.570000 0.000000 1.570000 ( 1.588715)
0.587000 0.000000 0.587000 ( 0.587000)
bench_block_invocation.rb
-------------------------
1m loops yielding a fixnum 10 times to a block that just retrieves dvar
2.800000 0.010000 2.810000 ( 2.822425)
1.194000 0.000000 1.194000 ( 1.194000)
1m loops yielding two fixnums 10 times to block accessing one
6.550000 0.030000 6.580000 ( 6.623452)
2.064000 0.000000 2.064000 ( 2.064000)
1m loops yielding three fixnums 10 times to block accessing one
7.390000 0.020000 7.410000 ( 7.467841)
2.120000 0.000000 2.120000 ( 2.120000)
1m loops yielding three fixnums 10 times to block splatting and accessing them
9.250000 0.040000 9.290000 ( 9.339131)
2.451000 0.000000 2.451000 ( 2.451000)
1m loops yielding a fixnums 10 times to block with just a fixnum (no vars)
1.890000 0.000000 1.890000 ( 1.908501)
1.278000 0.000000 1.278000 ( 1.277000)
1m loops calling a method with a fixnum that just returns it
2.740000 0.010000 2.750000 ( 2.766255)
1.426000 0.000000 1.426000 ( 1.426000)
bench_string_ops.rb
----
Measure string array sort time
5.950000 0.060000 6.010000 ( 6.055483)
8.061000 0.000000 8.061000 ( 8.061000)
Measure hash put time
0.390000 0.010000 0.400000 ( 0.398593)
0.208000 0.000000 0.208000 ( 0.209000)
Measure hash get time (note: not same scale as put test)
1.620000 0.000000 1.620000 ( 1.646155)
0.740000 0.000000 0.740000 ( 0.740000)
Measure string == comparison time
2.340000 0.010000 2.350000 ( 2.368579)
0.812000 0.000000 0.812000 ( 0.812000)
Measure string == comparison time, different last pos
2.690000 0.000000 2.690000 ( 2.724772)
0.860000 0.000000 0.860000 ( 0.860000)
Measure string <=> comparison time
2.340000 0.010000 2.350000 ( 2.369915)
0.824000 0.000000 0.824000 ( 0.824000)
Measure 'string'.index(fixnum) time
0.790000 0.010000 0.800000 ( 0.808189)
1.113000 0.000000 1.113000 ( 1.113000)
Measure 'string'.index(str) time
2.860000 0.010000 2.870000 ( 2.892730)
0.956000 0.000000 0.956000 ( 0.956000)
Measure 'string'.rindex(fixnum) time
0.800000 0.000000 0.800000 ( 0.817300)
0.631000 0.000000 0.631000 ( 0.631000)
Measure 'string'.rindex(str) time
12.190000 0.040000 12.230000 ( 12.310492)
1.247000 0.000000 1.247000 ( 1.247000)
bench_ivar_access.rb
----
100k * 100 ivar gets, 1 ivar
0.500000 0.000000 0.500000 ( 0.582682)
0.340000 0.000000 0.340000 ( 0.340000)
100k * 100 ivar sets, 1 ivar
0.700000 0.010000 0.710000 ( 0.816724)
0.402000 0.000000 0.402000 ( 0.401000)
100k * 100 attr gets, 1 ivar
0.970000 0.000000 0.970000 ( 0.988212)
0.875000 0.000000 0.875000 ( 0.874000)
100k * 100 attr sets, 1 ivar
1.390000 0.010000 1.400000 ( 1.406535)
1.114000 0.000000 1.114000 ( 1.114000)
100k * 100 ivar gets, 2 ivars
0.490000 0.000000 0.490000 ( 0.506206)
0.344000 0.000000 0.344000 ( 0.344000)
100k * 100 ivar sets, 2 ivars
0.680000 0.000000 0.680000 ( 0.693064)
0.388000 0.000000 0.388000 ( 0.388000)
100k * 100 attr gets, 2 ivars
0.970000 0.000000 0.970000 ( 0.989313)
0.878000 0.000000 0.878000 ( 0.878000)
100k * 100 attr sets, 2 ivars
1.400000 0.000000 1.400000 ( 1.434206)
1.129000 0.000000 1.129000 ( 1.128000)
100k * 100 ivar gets, 4 ivars
0.490000 0.000000 0.490000 ( 0.502097)
0.340000 0.000000 0.340000 ( 0.340000)
100k * 100 ivar sets, 4 ivars
0.690000 0.000000 0.690000 ( 0.696852)
0.389000 0.000000 0.389000 ( 0.389000)
100k * 100 attr gets, 4 ivars
0.970000 0.010000 0.980000 ( 0.986163)
0.872000 0.000000 0.872000 ( 0.872000)
100k * 100 attr sets, 4 ivars
1.370000 0.010000 1.380000 ( 1.394921)
1.128000 0.000000 1.128000 ( 1.128000)
100k * 100 ivar gets, 8 ivars
0.500000 0.000000 0.500000 ( 0.519511)
0.344000 0.000000 0.344000 ( 0.344000)
100k * 100 ivar sets, 8 ivars
0.690000 0.000000 0.690000 ( 0.710896)
0.389000 0.000000 0.389000 ( 0.389000)
100k * 100 attr gets, 8 ivars
0.970000 0.000000 0.970000 ( 0.987582)
0.870000 0.000000 0.870000 ( 0.870000)
100k * 100 attr sets, 8 ivars
1.380000 0.000000 1.380000 ( 1.400542)
1.132000 0.000000 1.132000 ( 1.132000)
100k * 100 ivar gets, 16 ivars
0.500000 0.000000 0.500000 ( 0.523690)
0.342000 0.000000 0.342000 ( 0.343000)
100k * 100 ivar sets, 16 ivars
0.680000 0.000000 0.680000 ( 0.707385)
0.391000 0.000000 0.391000 ( 0.391000)
100k * 100 attr gets, 16 ivars
0.970000 0.010000 0.980000 ( 1.017880)
0.879000 0.000000 0.879000 ( 0.879000)
100k * 100 attr sets, 16 ivars
1.370000 0.010000 1.380000 ( 1.387713)
1.128000 0.000000 1.128000 ( 1.128000)
bench_for_loop.rb
----
100k calls to a method containing 5x a for loop over a 10-element range
0.890000 0.000000 0.890000 ( 0.917563)
0.654000 0.000000 0.654000 ( 0.654000)
All of this is really good, of course. Lots of green. And the red parts are not that bad either.
controller : 1.000000 0.070000 1.070000 ( 1.472701)
controller : 2.144000 0.000000 2.144000 ( 2.144000)
view : 4.470000 0.160000 4.630000 ( 4.794336)
view : 6.335000 0.000000 6.335000 ( 6.335000)
full action: 8.300000 0.400000 8.700000 ( 9.597351)
full action: 16.055000 0.000000 16.055000 ( 16.055000)
These numbers plainly stink. And we can't find the reason for it. As you can see, most benchmarks are much better, and there is nothing we can think of that would add up to a general degradation like this. It can't be IO, since these tests use app.get directly. So what is it? There are very likely one or two bottlenecks causing this problem, and when we have found them, Rails will most likely blaze along. But profiling hasn't helped yet. We found a few things, but there is still stuff missing. It's an interesting problem. My current thesis is that either symbols or regexps are responsible. I'll spend the day checking that.
Recent interviews
I have been doing several interviews recently, mostly in connection with my book. Here are the ones I know of right now:
Labels:
interview,
jruby,
practical jruby on rails
Thursday, October 11, 2007
Grails eXchange, QCon SF and OpenWorld
It seems conference season is just coming up for me. I'll actually not be presenting at any of these events - just attending, doing marketing and so on. First off, I'll run around at Grails eXchange in London next week as an official ThoughtWorks representative. That is bound to be interesting.
I'll also attend QCon in San Francisco from the 5th to the 9th of November. Directly following that I'll also be at Oracle's OpenWorld event from the 11th to the 15th of November. I'll have a fair amount of spare time in the evenings during the two weeks in San Francisco, so if anyone wants to have a beer and discuss geeky things, feel free to mail me. Also, if anyone is interested in JRuby, I'm more or less always available to do talks on things like that.
Later on, I'll also be at JavaPolis in Antwerp, Belgium, in December.
See you out there.
Labels:
grails,
javapolis,
openworld,
qcon,
san francisco
Wednesday, October 10, 2007
Announcing JRuby/LDAP
I have just released JRuby/LDAP, a new project within JRuby-extras that aims to be interface compatible with Ruby/LDAP. That is, if you have a Ruby project written for Ruby/LDAP, you should be able to run it on JRuby without any changes.
This is a first release and some of the more obscure functionalities haven't been added yet.
Installation is easy:
jruby -S gem install jruby-ldap
If you're interested in the source, it's available on RubyForge, within the JRuby-extras project.
JRuby/LDAP uses JNDI to achieve nice LDAP access. You can also use standard Java ways (i.e. jndi.properties) to change the LDAP access implementation.
The code is BSD licensed.
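Since the whole point is interface compatibility, code written against Ruby/LDAP should look exactly the same on JRuby/LDAP. Here is a minimal sketch of Ruby/LDAP-style usage - the host, credentials and base DN are made up, and given the missing obscure functionality, exact method coverage may vary:
require 'ldap'

conn = LDAP::Conn.new('ldap.example.com', LDAP::LDAP_PORT)
conn.set_option(LDAP::LDAP_OPT_PROTOCOL_VERSION, 3)
conn.bind('cn=admin,dc=example,dc=com', 'secret')
# Search the whole subtree for person entries and print their DNs
conn.search('dc=example,dc=com', LDAP::LDAP_SCOPE_SUBTREE,
            '(objectClass=person)', ['cn', 'mail']) do |entry|
  puts entry.dn
end
conn.unbind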
Tuesday, October 09, 2007
In favor of Ruby
Chad Wathington recently posted a piece called Many Facets of Ruby on the official ThoughtWorks Studios blog. I would like to expand on some of its points, and how I see them. To be sure, what is posted on the TW Studios blog is the "official" ThoughtWorks view - whereas what I write in this blog is purely my own opinion, with no relationship to ThoughtWorks at all.
The point Chad writes about is that ThoughtWorks has lately been talking a lot about JRuby, in such a way that it's easy to get the impression that we as a company have chosen one implementation over the others. As Chad writes, that's not correct.
I've probably done the same thing in my blog. Obviously, I really like JRuby and hope it will work out well. I really like the Rubinius effort, and I predicted a while back that Rubinius may take over after MRI as the standard C Ruby implementation. But that doesn't mean I'm not interested in the other approaches around. MRI and YARV definitely have strong points going for them (MRI and JRuby are still the only fully working implementations of Ruby). But when IronRuby, XRuby, Rubinius, YARV, Gardens Point and Cardinal are more complete, the Ruby environment will be that much richer for it.
I'm not in this game for a specific implementation. I would use Ruby no matter if there was a JRuby or not. It's just that JRuby solves some of my problems, and allows me to hack on something that I know a segment of the Ruby user group will find useful. I'm in this for the language. I have chosen Ruby as my language, but the language is the same over the implementations. And it's going to be really exciting in the Ruby space the next few months.