Friday, December 21, 2007

Ruby closures and memory usage

You might have seen the trend - I've been spending time looking at memory usage in situations with larger applications. Specifically, the things I've been looking at are mostly deployments where a large number of JRuby runtimes are needed - but don't let that scare you. This information is exactly as applicable to regular Ruby as to JRuby.

One of the things that can really cause unintentionally high memory usage in Ruby programs is long-lived blocks that close over things you might not intend. Remember, a closure actually has to close over all local variables, the surrounding blocks and also the self that is live at that moment.
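
A tiny illustration of how much a block can see (the variable names here are made up for the example):
x = 42
blk = proc { }            # textually empty...
eval("x", blk.binding)    # => 42 - the local is still reachable through the binding
eval("self", blk.binding) # => the top-level self ("main") that was live at creation time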

Say that you have an object of some kind that has a method that returns a Proc. This proc will get saved somewhere and live for a long time - maybe even becoming a method with define_method:
class Factory
  def create_something
    proc { puts "Hello World" }
  end
end

block = Factory.new.create_something
Notice that this block doesn't even care about the actual environment it's created in. But as long as the variable block is still alive, or something else points to the same Proc instance, the Factory instance will also stay alive. Think about a situation where you have an ActiveRecord instance of some kind that returns a Proc. That's not an uncommon situation in medium to large applications. But the side effect will be that all the instance variables (and ActiveRecord objects usually have a few) and the local variables will never disappear, no matter what you do in the block.

Now, as I see it, there are really three different kinds of blocks in Ruby code:
  1. Blocks that process something without needing access to outside variables. (Stuff like [1,2,3,4,5].select {|n| n%2 == 0} doesn't need a closure at all.)
  2. Blocks that process or do something based on live variables.
  3. Blocks that need to change variables on the outside.
What's interesting is that 1 and 2 are much more common than 3. I would imagine that this is because number 3 is really bad design in many cases. There are situations where it's really useful, but you can get really far with the first two alternatives.
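
To make the distinction concrete, here is roughly what each kind looks like:
# 1. Needs no closure at all - everything comes in through the block arguments:
[1, 2, 3, 4, 5].select { |n| n % 2 == 0 }

# 2. Reads variables from the surrounding scope, but never assigns to them:
threshold = 3
[1, 2, 3, 4, 5].select { |n| n > threshold }

# 3. Changes variables on the outside - the only kind that truly needs
#    write access to the enclosing scope:
sum = 0
[1, 2, 3, 4, 5].each { |n| sum += n }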

So, if you find yourself using long-lived blocks that might leak memory, consider isolating their creation in as small a scope as possible. The best way to do that is something like this:
o = Object.new
class << o
  def create_something
    proc { puts "Hello World" }
  end
end
block = o.create_something
Obviously, this is overkill unless you know that the block needs to be long-lived and that it would otherwise capture things it shouldn't. The way it works is simple - just create a new, clean Object instance, define a singleton method on that instance, and use that singleton method to create the block. The only thing that will be captured is the "o" instance. Since "o" doesn't have any instance variables, that's fine, and the only local variables captured will be the ones in the scope of the create_something method - which in this case has none.

Of course, if you actually need values from the outside, you can be selective and only bring in the values you actually need - unless you have to change them, of course:
o = Object.new
class << o
  def create_something(v, v2)
    proc { puts "#{v} #{v2}" }
  end
end
v = "hello"
v2 = "world"
v3 = "foobar" # will not be captured by the block
block = o.create_something(v, v2)
In this case, only "v" and "v2" will be available to the block, through the use of regular method arguments.

This way of defining blocks is a bit heavyweight, but absolutely necessary in some cases. It's also the best way to get a blank slate binding, if you need that. Actually, to get a real blank slate, you also need to remove all the Object methods from the "o" instance, and ActiveSupport has a library for blank slates. But this is the idea behind it.
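
A rough sketch of that idea (the class name is made up, and ActiveSupport's blank slate implementation is more careful about which methods to keep):
class BareContext
  # Undefine almost everything inherited from Object, keeping only what Ruby
  # itself needs. This approximates a blank slate.
  instance_methods.each do |m|
    undef_method(m) unless m.to_s =~ /\A(__|object_id|instance_eval)/
  end

  def clean_binding
    binding # a binding whose self has (almost) no methods and no instance variables
  end
end

clean = BareContext.new.clean_binding
eval("1 + 1", clean) # => 2, evaluated against the nearly blank slate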

It might seem stupid to care about memory at all these days, but higher memory usage is one of the prices we pay for higher language abstractions. It's wasteful to take it too far, though.

14 comments:

flgr said...

Hm. Would it make sense to define a built-in separate { ... } construct that would establish a separate scope? One case where this would be quite useful is finalizers.

I'm not sure how important it is to be able to selectively pull new variables into the scope. It's hard to come up with an intuitive and simple syntax for that. separate(a, b) { puts a; puts b } has an odd feeling to it in my opinion...

Tomas said...

Many languages have compilers that perform analysis on closures and capture only the variables they actually close over; this includes the "self" of the instance. Is this not currently the case with Ruby? Is there any reason why the compiler could not be implemented in such a way?

Ola Bini said...

Tomas: yes, there is a marvelous reason for this: eval. There is no way for the parser to know which variables will be used at any point. The price you pay for a VERY dynamic language.
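
A tiny example of the problem (the names are made up):
secret = 42
blk = proc { |code| eval(code) } # mentions no outer local by name
blk.call("secret")               # => 42
# The string isn't known until runtime, so the parser can't prove that
# 'secret' (or any other local) is unused - it has to keep them all around.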

shisohan said...

But eval is a very rare case. My suggestion would be to make eval a syntax construct (which makes it impossible to use e.g. send(:eval, code) - something I've never seen and probably nobody uses anyway) and optimize the 99.9% of procs/blocks that don't use eval. Those with eval would just enclose everything as they do now.
Or am I missing something?

Anonymous said...

@ola bini

You are right. Even if we developed a complete typing system that could detect whether or not the block would actually use eval in any way, there are 100 other kinds of reflection that could mess it up.

Imagine a block that tries to list all the properties in its own scope. It doesn't explicitly refer to any variables.
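
For example, something like this (the names are mine):
x = 1
y = 2
blk = proc { local_variables } # names no variable explicitly...
blk.call                       # => [:x, :y, :blk] (roughly; strings on older Rubies)
                               # reflection still reaches every local in the surrounding scope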

Any type checker that could figure out, without a doubt, whether it would use reflection anywhere would definitely hit the halting problem.

But it should be possible to make a pretty good guess on the safe side. That is: we can write something that says 'it doesn't use any reflection, it doesn't refer to variable X, so we don't need to close over variable X'.

Johannes said...

Or you could constrain the ways reflection/eval can access variables from the lexical scope - which would mean changing the language, of course - so that you would have a better chance of finding all the variables to close over.

Tomas said...

ola: Naturally... *slaps forehead*

Myself, I tend to use eval very rarely in those languages that support it. Seeing as it seems costly to include it in a language, I presume it must be valuable. Is there any eye-opening information on the web on the use of eval?

Anonymous said...

I agree this calls for some discussion of how the language could best be adapted to improve performance. Maybe if binding became a keyword, and eval strictly a method of it (that is, no Kernel.eval)? There could also be an "eval" keyword that would be short for binding.eval, but personally I don't think even that would be necessary. That way, I believe, it would be clear whenever the full binding would be involved in a closure.

Anonymous said...

About creating a minimal binding, I just realized: wouldn't the following suffice?

block = class << Object.new; proc{ ... } end

A bit shorter than defining a method, and fits (somewhat) in one line :) Or am I missing something here?

tea42 said...

flgr, I've been saying that for years now. Many blocks do not need the closure. I bet the performance gain would be fairly substantial too. I wonder if matz hasn't embraced this simply because of implementation difficulties?

rocky said...

There seems to be a tension between optimizer and dynamic language. But also a tension between optimized code at the expense of programmer ability to understand what's going on, let alone affect what goes on there or introspect at run time, on the other side.

I like dynamic languages; I like optimizers and fast code too. And I don't see why I should have to give up either.

Rather, I'd like to see an ongoing dialog between the programmer and the transformation system.

So rather than resigning ourselves to the fact that a transformation can't optimize closures in Ruby, why couldn't that system indicate the consequences of the code and ask: would it be alright to remove the outer scope in this closure here? The choices might be:

1. okay, do the transformation at the source level, or

2. do it in the optimization phase without changing the source code, possibly adding some annotation comments indicating the transformation.

In other words, too often optimization tends to be a static, overly conservative process that doesn't interact with the programmer. This is sad because there's lots of information the programmer might be able to offer that could be used to great benefit.

Similarly there's lots of information that a code improver can offer to inform a programmer regarding how to speed up the code.

In some cases the programmer may decide to change the source code. In other cases maybe the programmer would rather have the compiler system make the changes on its end.

Here's an example of the former. Suppose I write "([1] * 10).size == 0". Unless size() is redefined on Arrays, which probably won't be the case, a compiler could just replace this with "false", which will probably be more efficient if not clearer. Maybe this seems like a silly thing to write, but it could fall out of propagating constants. I know that I've written things like "x.size > 0" where "!x.empty?" might be more efficient. And I've written "x.class == String" instead of "x.is_a?(String)" because I didn't know about "is_a?".

In these cases an optimizing system could educate and inform the programmer and possibly make the person a better Ruby programmer. In these kinds of cases, a concerned programmer might change the source code to those other forms.

Here is another situation, also from real life. However, in the following I think a user would not change the code but ask the transformation system to do it internally.

I once worked on an optimizing compiler that had a case statement similar to Ruby's. You could write "case ; when 1 ; when 2; when 3; when 4; when 1000; when 2000; ... end". If there are lots of "when" clauses, a good way to implement this is to sort the "when" clauses and do a binary search on them. So first test if >= 4; if so, try the appropriate set of when clauses based on that outcome. For example, if the value were less than 4 then we'd test if >= 2, and so on. In theory a person could write the code an optimizer could transform the code into, but in practice this is cumbersome, ugly, prone to mistakes, and not robust if "when" clauses are added or removed.
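
In Ruby terms the idea looks roughly like this (a hand-written sketch of the shape such a transformation might produce; the method and symbol names are made up):
def dispatch(x)
  case x
  when 1    then :one
  when 2    then :two
  when 3    then :three
  when 4    then :four
  when 1000 then :thousand
  when 2000 then :two_thousand
  end
end

# Roughly what a binary-search transformation would produce: split the sorted
# values in the middle, keep halving, and leave the original equality tests
# (the "when" clauses) at the leaves.
def dispatch_binary(x)
  if x >= 4
    if x >= 1000
      x == 2000 ? :two_thousand : (:thousand if x == 1000)
    else
      :four if x == 4
    end
  else
    if x >= 2
      x == 3 ? :three : (:two if x == 2)
    else
      :one if x == 1
    end
  end
end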

I'm not suggesting the above is really that new, if new at all. There currently are "lint" systems that critique one's code. However, most of the ones I've seen aren't that interactive, are separate from the optimizer, and won't change code.

rocky said...

As I reread my comment, I see a number of small typos.

The first couple of sentences may be a little awkward. Particularly clunky is the second sentence, which might be better written "But also a tension between optimized code and programmer ability..."

After option 1. and 2. there is of course option 3. - do nothing. And the example with a case statement should have a variable name in both the case and corresponding if expressions. That is "case x" and "if x >= 4".

Alex said...

To keep the syntax compact you can also define a 'lambda2' method.
That will transform your Proc to a string and then re-evaluate it in a minimal closure.

Alex said...

I mean transform it to a string using, for example, the ruby2ruby gem.