Friday, June 27, 2008

JtestR, RubyGems, and external code

One question I've gotten a few times now that people are starting to use JtestR is how to make it work with external libraries. This is actually two different questions masquerading as one. The first one regards the libraries that are already included with JtestR, such as JRuby, RSpec or ActiveSupport. There is an open bug in JIRA for this, called JTESTR-57, but the reason I've been a bit hesitant to add this functionality until now is that JtestR actually does some pretty hairy things in places. The JRuby integration in particular does ClassLoader magic that can potentially be quite version dependent. The RSpec and Mocha integrations are the same. I don't actually modify these libraries, but the code using them is a bit brittle at the moment. I've worked on fixing this by providing patches to the framework maintainers to include the hook functionality I need. This has worked with great success for both Expectations and RSpec.

That said, I will provide something that allows you to use local versions of these libraries, at your own risk. It will probably be part of 0.4, and if you're interested, JTESTR-57 is the one to follow.

The second problem is a bit more complicated. You will have seen this problem if you have tried to do "require 'rubygems'": JtestR does not include RubyGems. There are both technical and non-technical reasons for this. Put simply, the technical problem is that RubyGems is coded in such a way that it doesn't interact well with loading things from JAR-packaged files. That means I couldn't distribute the full JtestR in one JAR file if it included RubyGems, and that's just unacceptable. I need to be able to bundle everything in a way that makes it easy to use.

The non-technical reason is a bit more subtle. If RubyGems can be used in your tests, it encourages locally installed gems. It's a bit less pain to do it that way initially, but remember that as soon as you check the tests in to version control (you are using version control, right?) it will break in unexpected ways if other people using the code don't have the same gems installed, with the same versions.

Luckily, it's quite simple to provide this functionality to JtestR yourself, even though RubyGems itself isn't used. The first step is to create a directory that contains all the third party code. I will call it test_lib and place it in the root of the project. After you have done that, unpack your gems into it:
mkdir test_lib
cd test_lib
jruby -S gem unpack activerecord
When you have the gems you want unpacked in this directory, you can add something like this to your jtestr_config.rb:
Dir["test_lib/*/lib"].each do |dir|
$LOAD_PATH << dir
end
And finally you can load the libraries you need:
require 'active_record'
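Putting the pieces together, a minimal jtestr_config.rb could look something like this (assuming you unpacked ActiveRecord into test_lib as above):

# Make every unpacked gem's lib directory available on the load path,
# then require the libraries the tests need.
Dir["test_lib/*/lib"].each do |dir|
  $LOAD_PATH << dir
end

require 'active_record'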

Saturday, June 21, 2008

TheServerSide Java Symposium Europe is over

Well, I'm home from Prague, from another edition of TheServerSide Java Symposium. This year was definitely a few notches up from last year in Barcelona in my opinion. And being in beautiful Prague didn't really cause any trouble either. =)

I landed on Tuesday and worked quite heavily on my talks. Due to the ThoughtWorks AwayDay I was really down to the last second with my two slide decks. But I still got to see parts of the city in the evening. Very nice.

I managed to sleep through the opening keynote, but dragged myself down to the main room to watch the session on Spring Dynamic Modules. This ended up being more about OSGi-style things than really dynamic things, so I felt a bit cheated and kept working on my slides instead. Before lunch I sat in on Alex Popescu's talk about scripting databases with Groovy. Overall a very good overview of the database landscape from a Groovy point of view, ranging from just using the language to make the JDBC APIs more flexible, through a builder-style DSL for working with SQL, to the full-blown GORM framework. All in all quite nice. But the funniest part was definitely people's reaction to the SQL DSL, where most in the room preferred the real SQL to the Groovy version.

After lunch I had planned to see the session that compared different dependency injection frameworks, but the speaker never showed up, so I found myself listening to info about JSR-275, which provides support for units of measurement. Quite useful if you're working in that domain, but at the same time it felt like this would look so much cleaner in Ruby. Of course, that's how I react to most Java code nowadays.

Holly Cummins gave a very good talk about Java performance tooling. Of course it came with a slight IBM slant, but that's fair. The tools built around their JVM are actually really good for identifying several kinds of performance problems. So I'm actually of a mind to try JRuby on the IBM JVM and see if we can glean some more interesting information from that.

Geert gave his Terracotta talk about JVM clustering, and it's really interesting if you haven't seen it before. In this case I took the opportunity to listen while working on my slides.

And that was the end of day one.

Day two I was a good boy and was actually up in time for the keynote. This might have something to do with the fact that it was Neal Ford giving it, and he talked about Language-Oriented Programming. This is one of my favorite topics, and I'd only seen his slides for this talk before, not heard him give it. If you've been following the discussions about polyglot programming, the content made lots of sense. If you don't believe in polyglot programming, you might have been convinced.

After the keynote it was time for breakfast, so I didn't see the sessions in that slot. After breakfast I sat in on Guillaume's Groovy in the Enterprise: Case Studies. While the presentation was good, he spent more than half of it just giving an introduction to Groovy. I'm not one to throw stones in glass houses, though, so I have to admit that this is something I can be found guilty of too. I'm trying to improve on this, though. It does a disservice to the audience if they have to sit through the same kind of intro they might already have seen to get to the actual meat. That's one of the reasons I tried to minimize introductory material in my testing session.

It was also in this session that a slide with the words "Groovy is the fastest dynamic language on the JVM" showed up. That's based only on the Alioth benchmarks, and it doesn't actually matter if it's true or not. It's a disservice to the audience, especially in this case, where even if Groovy actually is faster than JRuby on average, we are talking about 1-2% at most. The speed differences aren't really why you would be interested in using such a language, and in my opinion Groovy has lots of other interesting features you can use to sell and market it. In summary, it felt a bit unnecessary.

Directly after that session, Ted Neward, Guillaume and I were featured in a panel on the languages of the next generation. Eugene Ciurana, who was supposed to moderate, didn't really show up, so John Davies and Kirk Pepperdine had to jump in instead. It ended up being quite fun, but there was no real heat in the discussion. In something like this, I think it would be useful to have someone with different views to spice it up. Ted, Guillaume and I just agree about these things way too much. But we got some nice Czech vodka. That was good. =)

After lunch I spent more time prepping my talk, and then it was finally time to give it. This was the JRuby on Rails introduction, and it ended up being quite nice. I had a good turnout, and interestingly enough, many in the audience had actually tried Ruby already.

After my session was done, I could relax, so I went to Kirk's talk about Concurrency and High Performance, which included many things to think about while working on the performance of an enterprise-scale application. Very useful material.

Finally, at the end of the day, it was time for the fireside chats, which is basically another word for BOFs. I sat in on the Zero Turnaround in Java Development session, which ended up being less about discussion than I had expected, and more about the three principals' different approaches (RIFE, Grails and JavaRebel).

The Fireside Performance Clinic was good fun, with some useful material. In particular, knowing whether JRuby startup time is CPU or IO bound is something I have never thought about, and it might yield some interesting insights.

Day three felt a bit slower, as the last day usually does. The first session for me was Ted's Scala talk. I've seen it a few times before, but the most interesting part is actually the audience questions. As usual, I wasn't disappointed. And Ted did his regular thing and weaved me into the examples. One of the funnier bits was when he was explaining the difference between var and val in Scala, and decided that it might be good to be able to switch my surname. Then came the killer, where he said something like this: "well, and you might want to change the surname of Ola. Since Ola was just married, congratulations by the way, and he's from Sweden where the husband generally takes the surname of the wife, so we need to change his surname". At that point I had a hard time keeping it together.

The session on what's new and exciting in JPA 2 ended up not exciting me at all, so frankly I don't remember anything at all about that. I have vague blurry images of many at-signs.

Shashank Tiwari gave a presentation on how to choose your web framework, and this generated some quite interesting discussion. At this point I still wasn't finished with the examples for my testing session, though, so I had to work on them. And I finally managed to finish them. Because lo, at that time I did the presentation on testing with JRuby. I spent some time on the different Ruby testing frameworks, first showing off how you can test Ruby code with them. Then I switched the model to a Java class and used basically the same tests again. The cutest example is probably my story about a Stack. Not a literary masterpiece, but it's still prose.

People seemed to like the session and get something out of it, and that feels great since this was the first time I showed JtestR to a larger group of people. My mocking domain, consisting of Primates, Food and Factories, also seemed to go over well. I got the expected laughs at the source code line where a Chimpanzee tries to eat Tuna and "throw new Up();".

The Typesafe Embedded Java DSLs session basically talked about how you can use the standard generic builder patterns to create DSLs that your IDE can help you quite a lot with. Sadly, my computer decided to give me a heart attack during this presentation, so I had to run out and give it CPR instead of sitting in on the rest of the session.

And that was TSSJS-E. For me, the first day was quite weak, but the content of the other two days was definitely extremely good. I can recommend it to anyone next year.

Wednesday, June 18, 2008

Testing programming language implementations

While writing the post yesterday about testing regular expressions, I realized that this problem is not really specific to regular expressions. I got a very good comment noting that testing any place that uses some kind of DSL is definitely prudent. SQL is another example.

But these examples are both about actually testing the usage of them, and the problem becomes that you have two languages but you're mostly only testing the code written in the outer language. This is due to several reasons. One of the most obvious ones is that our tools really don't make it that easy to do.

Thinking about these issues made me start thinking about how we generally test languages. Having worked on several language implementations, both new languages and implementations of existing languages, I've come to the conclusion that the whole area of testing languages is actually quite complicated, and that there are no real best practices for doing it.

First, there is a problem of terminology. Many language implementations have test suites that are really executable specifications of how the language should work. What's the difference? Well, testing the language according to such a spec, you are really only doing functional, black-box testing. I've looked at several of the open source language implementations, and I don't really see much usage of anything other than such language spec tests. This means that some parts of the implementation can be implemented wrongly and by some freak chance still work correctly in all the cases you have tests for, but fail in other ways.

Unit tests for the actual implementation would help with this - it helps since you will be doing TDD on the unit level, and it helps because you make a conscious decision about the implementation and what it should be doing in these cases. It still doesn't make everything clear-cut and simple, but it absolutely would help. So why don't most implementations do unit testing of the internals? I don't really know. Maybe it's because implementations can be extremely complicated. But that should be a reason for testing more, not testing less. One reason I do feel the weight of is that unit tests can make larger changes quite hard. Large refactorings are one of the ways JRuby has achieved incredible performance improvements and new subsystems, but unit tests can sometimes act as inertia against these.
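To make the contrast concrete, here is a minimal sketch of what a unit test of an implementation internal could look like, as opposed to a black-box language spec test. The Lexer class and its API are entirely hypothetical:

require 'test/unit'

# A language spec test would only assert that, say, "1 + 2" evaluates to 3.
# A unit test instead pins down a conscious decision about one internal
# component in isolation:
class LexerTest < Test::Unit::TestCase
  def test_numeric_literal_produces_a_single_token
    assert_equal [[:integer, 42]], Lexer.new("42").tokens
  end

  def test_surrounding_whitespace_is_not_tokenized
    assert_equal [[:integer, 42]], Lexer.new(" 42 ").tokens
  end
end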

I'm totally disregarding the academic approaches here. Yes, in some cases, for simple languages, you can actually prove that the language does what you want it to do, and for small enough implementations using a suitable language, you can prove the same things about the implementation. The problem is that this approach doesn't scale.

And since a language almost always is Turing complete, that means you can't exhaustively test it. There is no way of testing all permutations - either manually or automatically. So what should a language spec do? The first thing many languages do is specify that whole areas of functionality result in undefined behavior. That makes it easier. But the real problems appear when you start combining different features that can interact in different ways.

At the end of the day, I have no idea how to actually do this well. I would like to know, though - how should I test the implementation, and how should I write an executable language specification? And these questions don't even touch on the question of testing the core libraries. Many of the same problems apply there, but it gets even more complicated.

Local things in Emacs

This is just a small note, since this has bugged me for a while. Basically, I have lots of extra key bindings running around in my Emacs configuration. Now, I use local-set-key for many of these. The problem is that I hadn't actually read the documentation for local-set-key closely enough.

One example that annoyed me was this: I had some local key bindings for RSpec buffers that differed from the regular Ruby buffers. My RSpec minor mode still uses the ruby-mode-map, though. My assumption was that local-set-key did things exactly like all other things with "local" in their name, namely making a buffer-local modification only. I finally found out that this wasn't the case. Instead, when the RSpec minor mode was loaded for the first time, it ended up modifying the ruby-mode-map with its key bindings, which were then visible in all other Ruby buffers. Ouch.

So, if you use local-set-key, make sure you actually want to set that key in the current mode map, instead of only for the current buffer.

As far as I know, there is no way to set a real buffer-local key binding without some acrobatics that unset and reset the keys manually. I ended up solving my problem with the RSpec minor mode by having it clone the Ruby mode map and use its own mode map. Not an ideal solution, but it works for now.
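For reference, a minimal sketch of the cloning approach in Emacs Lisp - the rspec-verify command and its binding are hypothetical placeholders:

;; Clone ruby-mode-map so the minor mode's bindings don't leak
;; into regular Ruby buffers.
(defvar rspec-mode-map
  (let ((map (copy-keymap ruby-mode-map)))
    (define-key map (kbd "C-c ,") 'rspec-verify) ; hypothetical binding
    map)
  "Keymap for the RSpec minor mode, cloned from ruby-mode-map.")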

Tuesday, June 17, 2008

Testing Regular Expressions

Something has been worrying me a bit lately. Being test infected and all, and working for ThoughtWorks, where testing is part of the life blood, I think more and more about these issues. And one thing I've started noticing is that regular expressions seem to be a total blind spot in many cases. I first started thinking about it when I changed a quite complicated regular expression in RSpec. Now, RSpec has coverage tests as part of its build, and if the test coverage is less than 100%, the build will fail. Since I had changed something to add new functionality but hadn't added any tests for it, I instinctively assumed that it would be caught by the coverage tool.

Guess what? It wasn't. Of course, if I had changed the regexp to do something that the surrounding code couldn't support, one of the tests for the surrounding lines of code would have caught it, but I got no mention from the coverage tool that I needed more tests to fully handle the regular expression. This is logical if you think about it. There is no way a coverage tool could find all the regular expressions in your source code and then make sure that all branches and alternatives of each particular regular expression were exercised. So that means the coverage tool doesn't do anything with them at all.

OK, I can live with that, but it's still one of those points that would be very good to keep in mind. Every time you write a regular expression in your code, you need to take special care to actually exercise that part of the code with many inputs. What is many in this case? That's another part of the problem - it depends on the regular expression. It depends on how complicated it is, how long it is, how many special operators are used, and so on. There is no real way around it. To test a regular expression, you really need to understand how regular expressions work. The corollary is obvious - to use a regular expression in your code, you need to know how to test it. Conclusion - you need to understand regular expressions.

In many code bases I haven't seen any tests for regular expressions at all. In most cases they have been crafted by writing them outside the code, testing them by hand, and then putting them in the code. This is brittle, to say the least. In the cases where there are tests, it's much more common that they only test positives, and not negatives. And I've seldom heard of code bases with enough tests for regular expressions. One of the problems is that in a language like Ruby they are so easy to use that you stick them in all over the place. A standard refactoring could help here: extracting all literal regular expressions to constants. But then the problem becomes another one - as soon as you use regular expressions to extract values from a string, it's a pain not to have the regular expression at the same place as the extracted groups are used. Example:
PhoneRegexp = /(\d{3})-?(\d{4})-?(\d{4})/
# 200 lines of code
if phone_number =~ PhoneRegexp
  puts "phone number is: #$1-#$2-#$3"
end
If the regular expression had been at the same place as the usage of $1, $2 and $3, it would have been easy to tie them to the parts of the string. In this case it would be easy anyway, but in more complicated cases it's a real problem. The solution to this is easy - the dollar numbers are evil: don't use them. Instead use an idiom like this:
area, number, extension = PhoneRegexp.match(phone_number).captures
In Ruby 1.9 you will be able to use named captures, which will make it even easier to make readable use of the extracted parts of a string. But the fact is, the distance between the usage point and the definition point can still cause trouble. A way of getting around this would be to take any complicated regular expression and put it inside a specific class for only that purpose. The class would then encapsulate the usage, and would also allow you to test the regular expression more or less in isolation. In the example above, maybe creating a PhoneNumberParser would be a good idea.
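Something like this minimal sketch, perhaps - the class name and its API are just one hypothetical way to slice it:

class PhoneNumberParser
  PhoneRegexp = /(\d{3})-?(\d{4})-?(\d{4})/

  # Returns [area, number, extension], or nil if the string doesn't match.
  def self.parse(phone_number)
    match = PhoneRegexp.match(phone_number)
    match && match.captures
  end
end

area, number, extension = PhoneNumberParser.parse("123-4567-8901")

With the regular expression hidden behind a single method, both positive and negative cases can be tested directly against PhoneNumberParser.parse, in isolation from the rest of the code.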

At the end of the day, regular expressions are an extremely complicated feature, and in general we don't test our usage of them enough. So you should start. Begin by creating both positive and negative tests for them. Figure out the boundaries, and see where they can go wrong. Know regular expressions well enough to know what happens in these strange circumstances. Think about Unicode characters. Think about whitespace. Think about greedy and lazy matching. As an example of something that took a long time to cause trouble: what's wrong with this regexp, which tries to discern whether a string is a SELECT statement or not?
/^\s*\(*\s*SELECT\W+/i
And this example actually covers most of the ground already. It matches case-insensitively. It checks for whitespace before any optional parentheses, and for any whitespace after. It makes sure that the word SELECT isn't continued, by checking for at least one non-word character. So what's wrong with it? Well... It's the caret. Imagine we had a string like this:
"INSERT INTO foo(a,b,c)\nSELECT * FROM bar"
The regular expression will in fact match this, even though it's not a SELECT statement. Why? Well, it just so happens that the caret matches the beginning of lines, not the beginning of the string. The dollar sign works the same way, matching the end of lines. How do you solve it? Change the caret to \A and the dollar sign to \Z, and it will work as expected. A similar problem can show up with the "." that matches any character. Depending on which language you are using, the dot might or might not match a newline. Always make sure you know which one you want, and what you don't want.
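As a sketch of what tests for this could look like - both positive and negative cases, using the anchored version of the pattern - here's a hypothetical Test::Unit example:

require 'test/unit'

SELECT_REGEXP = /\A\s*\(*\s*SELECT\W+/i

class SelectRegexpTest < Test::Unit::TestCase
  def test_matches_select_statements
    assert_match SELECT_REGEXP, "SELECT * FROM bar"
    assert_match SELECT_REGEXP, "  (select id FROM foo)"
  end

  def test_rejects_statements_that_merely_contain_select
    # the caret version would incorrectly match this one
    assert_no_match SELECT_REGEXP, "INSERT INTO foo(a,b,c)\nSELECT * FROM bar"
    assert_no_match SELECT_REGEXP, "UPDATE foo SET a = 1"
  end
end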

Finally, these are just some thoughts I had while writing this. There is much more advice to give, but it can be condensed to this: understand regular expressions, and test them. The dot isn't as simple as it seems. Regular expressions are a full-blown language, even though they're not Turing complete (in most implementations). That means you can't test them completely, in the general case. This doesn't mean you shouldn't try to cover all eventualities.

How are you testing your regular expressions? How much?

Applications and libraries

In a recent discussion around one of Steve Yegge's blog posts, an incidental remark was that it's OK for a language to make things harder for a library creator than for an application developer. This point was made by David Pollak and Martin Odersky in relation to some of the complications you need to handle when creating a Scala library that people can intuitively use without a full understanding of the Scala type system. Make no mistake, I have lots of respect for both Martin and David, it's just that in this case I think it's actually a quite damaging assumption to make. And they are not the only ones who reason like that either. Joshua Bloch's book Effective Java includes this assumption too, in many places.

So what's wrong with it then? Isn't there a difference between developing an application and a library? Yes, there is a difference, but it's definitely not as large as people make it out to be. And even more importantly: it _shouldn't_ be that much of a difference. The argument from David was that when creating a library in Scala, he needs to focus on and work with quite complicated parts of the type system so that the consumer gets a nice API to use the library through. This process is much harder than just using the library would be.

Effective Java contains much good advice, but most of it is from the perspective of someone who creates libraries for a living, and there are a few places where Josh explicitly says that his advice isn't necessarily applicable when writing an application, since he doesn't have that point of view.

Let's take a look at a fundamental question then. What is actually a library, and what is an application? In my opinion, a library is a module providing functionality of some kind, restricted to a specific domain. This can be a horizontal or vertical domain, that doesn't matter, but it's usually something that is usable in more than one circumstance. It's not uncommon for libraries to use other libraries to implement their functionality. An application is usually a collection of libraries that provide functionality to an end user. That end user can be a person, a program or another computer - that doesn't matter. But wait, aren't libraries usually also created to provide functionality to other pieces of code? And even though libraries tend to contain more specific code, and make less use of other libraries, the line is extremely fuzzy.

The way most applications seem to be built now, most of the work goes into collecting libraries, providing the missing functionality and gluing them together in some way. But that doesn't mean the code you write in the application won't be used as a library by another consumer. In fact, it's more and more common to try to reuse as much as possible, and especially when you extend an existing application, it's extremely important that you can consume the existing functionality in a sane way.

So why make the distinction? Doing that seems to me to be an excuse for writing bad code as long as it's in an application. Why won't we as programmers admit that we don't know whether someone else will need to consume the code later, and write the best code we can, including creating usable and well-thought-out public APIs? Yes, the cost and time will be higher, but that's true for writing tests too. I don't see any value in arguing that libraries should be designed with more care than application code. In fact, I think that attitude is actively detrimental to the industry. And adding a complicated feature to a language and then arguing that only "library developers" will need to understand it is definitely not the right way to go. A responsible developer using a language needs to understand how that language works. Otherwise that developer will sooner or later cause a great mess. It's just a matter of time.

Sunday, June 15, 2008

JtestR 0.3 Released

JtestR allows you to test your Java code with Ruby frameworks.

Homepage: http://jtestr.codehaus.org
Download: http://dist.codehaus.org/jtestr

JtestR 0.3 is the current release of the JtestR testing tool. JtestR integrates JRuby with several Ruby frameworks to allow painless testing of Java code, using RSpec, Test/Unit, Expectations, dust and Mocha.

Features:
- Integrates with Ant, Maven and JUnit
- Includes JRuby 1.1, Test/Unit, RSpec, Expectations, dust, Mocha and ActiveSupport
- Customizes Mocha so that mocking of any Java class is possible
- Background testing server for quick startup of tests
- Automatically runs your JUnit and TestNG codebase as part of the build

Getting started: http://jtestr.codehaus.org/Getting+Started

The 0.3 release has focused on stabilizing Maven support, and adding new capabilities for JUnit integration.

New and fixed in this release:
JTESTR-47 Maven with subprojects should work intuitively
JTESTR-42 Maven dependencies should be automatically picked up by the test run
JTESTR-41 Driver jtestr from junit
JTESTR-37 Can't expect a specific Java exception correctly
JTESTR-36 IDE integration, possibility to run single tests
JTESTR-35 Support XML output of test reports

Team:
Ola Bini - ola.bini@gmail.com
Anda Abramovici - anda.abramovici@gmail.com

Tuesday, June 10, 2008

Ruby can't be good since I won't bother learning it...

Best quote of the whole day, found in http://www.codinghorror.com/blog/archives/001131.html#comments.

If Ruby offered something new I would have learned it fine tbh... its just difficult enough to not be able to "pick up and run with" like almost everything else out there... but honestly, it wouldn't let me do anything I can't already do.

My brain almost exploded reading that.

Wednesday, June 04, 2008

Git completion in tcsh

So I've been a bit envious of the lovely git completion bash users have - but obviously I can't just switch to bash. Anyone who is in the same kind of situation might like the fact that I've started a project to provide this functionality for tcsh.

The first thing you need to do is download the source for tcsh 6.15 and apply the patch you can find here: http://bugs.gw.com/bug_view_advanced_page.php?bug_id=60. Without it, this won't work. Compile and install the new tcsh version. The next step is to check out the project from GitHub, at http://github.com/olabini/git_complete_tcsh. Make sure that git_complete is executable and on your path. You need to have Ruby installed for this, btw.

The final step is to modify your .cshrc to add something like this: complete git{,-*} 'p/*/`git_complete`/'.

Now git completion should work, although most of the commands aren't implemented yet. I'll get to them in time. The whole project is a port of the bash completion for git.

Tuesday, June 03, 2008

Fractal Programming

This is a continuation of my previous posts describing layers of code written in different programming languages. I have thought about the things involved for a while, and had several discussions with people about it. There were some parts that I didn't describe as well as I thought in my posts, and I will try to do better in this one.

The core of these ideas is based on polyglot programming, the idea that you should use several different languages in a project, based on which languages are better suited for different parts of it. Another term for this concept is language-oriented programming. So how do you organize a polyglot system? The most natural way for me is to divide it into layers. In most cases you will find that different categories of languages are better suited to different layers of the application.

In my original post I identified three layers that can be used to organize polyglot systems. These layers are the stable layer, the dynamic layer, and the domain layer. There are several reasons for organizing them this way, and I'll take a harder look at each of the layers further down. But first let me note that these layers are usually depicted in the form of a pyramid, with the stable layer being the base. That is definitely not how I think about it. In fact, I see it as an inverted pyramid, where the stable layer is the tip of the pyramid, providing the base. The dynamic layer is the middle part. The domain layer should be the largest part and will very often include more than one dynamic language. So in my mind I represent the different domain languages as smaller pyramids standing upside down, covering the base area. Now, the dynamic layer can also be divided into smaller parts like this, based on language or functionality. This is a bounded fractal representation, which is the reason for the title of this blog post.

This diagram shows how I think about it. Of course, the smaller pyramids can all be the same language and system, or several different ones. It all depends on the application or system you are building. You can for example use a combination of Ruby, Java and external or internal DSLs, or you could use Clojure, Scala and JavaScript - or any other combination you can imagine. As long as the combination is what's best suited for the problem.

Let's take a look at the definitions of the different layers. There have been some discussion about the names I've chosen for them, so let me describe a little more what the responsibility of each part is, and why it's in that part of the system.

The Domain Layer
This layer is the simplest. This is where all the actual domain rules are defined. In general that means one or more domain specific languages. It doesn't really matter if they are internal or external; this model sees them as the same layer. This part of the system needs to be malleable enough that it's possible to change rules in production, to let domain experts do things with it, or just to handle very complicated configuration. The languages used in this layer are mostly external DSLs, but they can also include extremely DSL-friendly languages like Ruby, Python or Groovy.
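As a tiny sketch of what an internal DSL in this layer could look like - the discount-rule domain here is made up purely for illustration:

# The machinery below would live in the dynamic layer; the define block
# at the bottom is the kind of thing a domain expert could read and tweak.
class RuleSet
  def self.define(&block)
    set = new
    set.instance_eval(&block)
    set
  end

  def initialize
    @rules = []
  end

  # Each rule is a name plus a condition block over some domain object.
  def rule(name, &condition)
    @rules << [name, condition]
  end

  def applicable(order)
    @rules.select { |name, condition| condition.call(order) }.map { |name, _| name }
  end
end

rules = RuleSet.define do
  rule("bulk discount")  { |order| order.quantity >= 100 }
  rule("loyal customer") { |order| order.customer.years_active > 2 }
end
# rules.applicable(order) would return the names of all matching rules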

The Dynamic Layer
Neal Ford argues that this layer isn't so much about being dynamic as it is about essence. That was never my intention. The problem is that even if you take a language like Scala, which is usually classified as an essential language, Scala requires compilation. To me, compilation is ceremony, which means it's one extra thing you don't want to care about when writing most of your application code. That's why this layer needs to be dynamic. This is where languages like Ruby, Groovy, Python, JavaScript, Clojure and others live.

The Stable Layer
I view the stable layer as the core set of axioms, the hard kernel or the thin foundation that you build the rest of your system on. There are definite advantages to having this layer written in an expressive language, but performance and static type checking are what matter most here. There is always a tradeoff in giving up static typing, and the point of having this layer is to make that tradeoff smaller. The dynamic layer runs on top of the stable layer, utilizing the resources and services it provides.

Another important feature of this layer is that this is where all interfaces are defined. By interfaces I mean external APIs. They need to be hard, so that other clients are able to trust them. But the implementations of them live in the dynamic layer, not in the stable one. By doing it this way you can take advantage of static type information for your APIs while still retaining full flexibility in their implementation. Languages in the stable layer can be Java, Scala or F#. It should be fairly small compared to the rest of the application, and just provide the basic services needed for everything to function.
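A minimal sketch of how that split could look with Java in the stable layer and JRuby in the dynamic layer - the ReportService interface and its package are hypothetical:

# Suppose the stable layer defines, in Java:
#
#   package com.example;
#   public interface ReportService {
#     String render(String reportName);
#   }
#
# The dynamic layer can then implement it in JRuby:
require 'java'

class RubyReportService
  include com.example.ReportService

  def render(report_name)
    "rendered: #{report_name}"
  end
end

Java clients only ever see the statically typed ReportService interface, while the Ruby class behind it can change freely.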

The most common objection I hear from people about this strategy is the same as for the general polyglot programming idea: if we have a proliferation of languages in a system, it will be harder to find skilled programmers who can work with it.

This objection is true to a degree, but there are several ways around it. First, I have to say that I don't believe this is as big a problem as many others think. Programmers nowadays depend on their tool chains quite heavily, and all of them include many advanced features that take lots of time to learn. But most programmers don't even view their languages as tools. In my mind, the programming language is the most important tool. And once we start using better languages for systems, many of the things we need other tools for will disappear or become less of a problem.

I tend to believe that programming languages are quite easy to learn once you understand their fundamental building blocks. And if you don't have a fair understanding of those building blocks, I would say you probably aren't using your current language as well as you should either. I see this as part of being responsible programmers.

I also believe quite strongly that if we used better languages for our code, many code bases would be smaller, easier to understand, easier to maintain and cost less - which means you could afford to find a more skilled programmer to do the work for you. Both parties win - the programmer gets more interesting work and better code, while the client gets more value for his money in less time.

RailsConf 2008

I've landed and gotten mostly back into the right timezone without too many incidents (except running through SFO to board a very badly scheduled connection).

After allowing the impressions from the last 6-7 days to sink in a little, it's time to summarize RailsConf. I'll go through the sessions I saw and then make some concluding remarks.

The first day was tutorials. I had a good time in Neal Ford's and Pat Farley's tutorial on metaprogramming. I can't say I learned much from the session, but it was very good content, extremely well presented, and I got the impression that many in the room learned lots of crucial things. The kind of knowledge about internals you get from a talk like this allows you to understand how metaprogramming in Ruby actually works, which makes it easier to achieve the effects you want.

After that I sat around hacking in the Community Code Drive for the rest of the day, with lots of other people. I wasn't involved in gitjour (which by the way is incredibly cool), but I did manage to find a memory leak in iTerm's Bonjour handling thanks to gitjour. Neat. David Chelimsky and I paired on getting support for multiline plain-text story arguments into RSpec, and by the end of the afternoon it was in.

Finally, we headed out to the JRuby hackfest, which ended up being overfull with people. That's a good problem to have. We had a great time, hacking on different things, helping people get started and debugging various problems. All in all it was a very productive day.

I began the Friday with Joel Spolsky's keynote. In contrast to many other people, I didn't like it. There wasn't really any content at all, just some humor and lots of jokes about naked women. I expect something a bit more profound from the first keynote of a conference, since it has a tendency to set the standard for the rest of the days.

After the keynote, John Lam showed off IronRuby running a few simple Rails requests. This is a great achievement, and I'm very impressed with their results. I have argued that IronRuby would probably never reach this point, and I'm very happy to admit I was wrong and offer my apologies to John Lam and the IronRuby team. That said, the fact that IronRuby runs a few different Rails requests is not the same thing as saying that IronRuby runs Rails. My personal definition of running Rails is more about having the Rails test suite run at a high percentage of success (something like 96-98% would be good enough for almost all Rails apps to work, provided they are the right 98%). (ED: Evan Phoenix just told me that MRI doesn't run the Rails test suite totally clean either, because of the way the Rails development process works. So 100% is probably not a good measure of Rails compatibility.) I assume that this is going to be the next goal for the IronRuby team, and I wish them good luck.

I saw the Hosting talk after that, but I have to admit I was wrapped up in a seriously annoying JRuby bug at the time, so I didn't really pay attention.

The DataMapper talk was very full and gave a good overview of why DataMapper might be a better choice than AR in many cases. The presentation style could possibly have been a bit less dry, but the content was definitely delicious.

If the next two days were the JRuby days, the Friday was the day for all other alternative implementations. I sat in on the Rubinius talk by Evan Phoenix and friends, and then the much talked about MagLev presentation.

I first want to congratulate Rubinius on running several different Rails requests. It's very cool and a great milestone. The same caveats as for IronRuby apply, of course. But wow, the debugging features are awesome. First-class meta objects are extremely powerful and will provide many capabilities to the platform. The presentation was also extremely entertaining - one of the best presentations for the sheer fun everyone seemed to have. Props to Evan, Brian and Wilson for this.

So, the MagLev talk. First, there seem to be some misunderstandings about what MagLev actually is. It is not a hosting service. Gemstone might offer a hosting service around MagLev in the future, but that's not what is going on here. MagLev is a new virtual machine for Ruby, based on Gemstone/S. Basing it on a Smalltalk machine makes it very easy for Gemstone to implement a large subset of Ruby and have it running cleanly and with good performance. Exactly how much has been implemented at this point is not really clear, since no major applications run and the RubySpecs have not been run on it yet. I assume that the implementation doesn't handle enough Ruby features yet to be able to run the mspec runner and other important machinery.

Was this presentation important? Yeah, sure, to a degree. It was a cool presentation, whetting people's appetite by showing something that might some day become a real Ruby platform with built-in support for an incredible OODB. But it's still early days.

The Saturday began with Jeremy's keynote. He talked about the new things in Rails 2.1 and also showed the same app running on Ruby 1.8, 1.9, Rubinius and JRuby. Very cool.

I ended up in Nathaniel Talbott's 23 Hacks session, which was fun. Good stuff.

After that the JRuby day began in earnest with Nick's talk about deploying JRuby on Rails. This was mostly the same talk as given at JavaOne, but more geared towards Ruby programmers. Useful information.

Dan Manges and Zak Tamsen gave an extremely useful talk about how to test Rails applications correctly. Very good material. Exactly the strong kind of deep technical knowledge, gained by experience, that people go to conferences to get.

My talk about JRuby on Rails was generally well received. I had a fun time, and of course I managed to run out of time as usual. I wonder why I'm always afraid of running out of material. That has never happened when I'm talking about JRuby.

The final technical session of the day ended up being a walk around all the different presentations going on, taking a peek at each, before ending up hacking in the speakers' room.

The evening keynote was by Kent Beck, and as usual he was fantastic to listen to.

The Sunday started with the CS nerds anonymous session, held by Evan Phoenix. It ended up being a kind of lightning talk session, and had some nice points.

After that, Ezra gave his talk - which had nothing to do with the session title. He presented Vertebra, a cloud computing control system based on XMPP, Erlang and the actor model. Very cool stuff, although it might not be that useful for people who aren't in charge of quite a large number of computers. But if you have your own botnet, this might be the best way to control them all. =)

The final session of the day was the JRuby Q&A session, which basically flew by. The first ten minutes went in normal time, and then suddenly the session was over. I think we had good attendance, and the right level of questions. You can see all the points covered in Nick's blog, here.

And then it was over.

So, what was good? The technical level was definitely deeper and more rooted in experience. I have to say that this was probably the best Ruby conference I've been to, based on the depth and level of the presentations. Kudos to the scheduling people.

And what was bad? A little bit too much hype about MagLev, and everyone's tendency to use dark colors on black backgrounds in their presentations. Hey, they may look good on your computer screen, but they're really not readable!