OK. It's time to get rid of this terminology problem. Ruby does NOT have meta classes. You can define them yourself, but that's not the same thing as what is commonly called the meta class. That is more correctly called the eigen class. The term singleton class is also better than meta class, but eigen class is definitely the most correct term.
So what is a meta class then? Well, it's a class that defines the behavior of other classes. You can define meta classes in Ruby if you want to, by defining a subclass of Class. Those classes would be meta classes.
Edit: Of course, if you actually try to define a subclass of Class you will find that Ruby doesn't allow you to do that, which means that you don't have any meta classes in Ruby. Period.
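A quick sketch shows both halves of this: getting at an object's eigen class always works, while subclassing Class does not:

o = Object.new

eigenclass = class << o; self; end
puts eigenclass.inspect   # => #<Class:#<Object:0x...>>

begin
  Class.new(Class)        # attempting to create an actual meta class
rescue TypeError => e
  puts e.message          # Ruby refuses to make a subclass of Class
end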
Thursday, May 29, 2008
Ruby closures addendum - yield
This should probably have been part of one of my posts on closures or defining methods, but I'm just going to write this separately, because it's a very common mistake.
So, say that I want to have a class, and I want to send a block when creating the instance of this class, and then be able to invoke that block later, by calling a method. The idiomatic way of doing this would be to use the ampersand and save the block away in an instance variable (sketched at the end of this post). But maybe you don't want to do this for some reason. An alternative would be to create a new singleton method that yields to the block. A first implementation might look like this:
class DoSomething
  def initialize
    def self.call
      yield
    end
  end
end

d = DoSomething.new do
  puts "hello world"
end

d.call
d.call

But this code will not work. Why not? Because, as I mentioned in my post about defining methods, "def" never creates a closure. Why is this important? Well, because the current block is actually part of the closure. The yield keyword will use the current frame's block, and since the method is not defined as a closure, the block invoked by the yield keyword will actually be the block sent to the "call" method. Since we don't provide a block to "call", things will fail.
Fixing this is quite simple. Use define_method instead:
class DoSomething
  def initialize
    (class << self; self; end).send :define_method, :call do
      yield
    end
  end
end

d = DoSomething.new do
  puts "hello world"
end

d.call
d.call

As usual with define_method, we need to open the singleton class and use send. This will work, since the block sent to define_method is a real closure that closes over the block sent to initialize.
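For completeness, here is a sketch of the idiomatic version mentioned at the top - grabbing the block with an ampersand parameter and stashing it in an instance variable:

class DoSomething
  def initialize(&block)
    @block = block    # the ampersand reifies the block as a Proc
  end

  def call
    @block.call
  end
end

d = DoSomething.new do
  puts "hello world"
end

d.call
d.call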
Labels:
closures,
methods,
ruby,
singleton class
Thursday, May 22, 2008
JRuby RailsConf hackfest next Thursday
LinkedIn, Joyent and Sun Microsystems are sponsoring a JRuby hackfest in conjunction with RailsConf. It will happen next Thursday from 6:30 PM in Portland; there will be some food and beer and so on. Oh, and Charles, Nick, Tom and I will be there - bring your laptops and any and all questions/patches/bugs/ideas with regards to JRuby.
Read more on Charles's blog, here: http://headius.blogspot.com/2008/05/jruby-pre-railsconf-hackfest-on.html
Remember to RSVP to Charles if you're coming. Space is limited so RSVP as soon as possible.
Wednesday, May 21, 2008
How large is your .emacs?
I've been reading lots of blogs and opinions about Emacs over the last few days. What strikes me is all of these people who brag about how large their .emacs files have become. So let me make this very clear:
If your .emacs file is longer than a page YOU ARE DOING IT WRONG.
Why? Well. Unless you are a casual Emacs user, your .emacs should not be regarded as a configuration file. Rather, the .emacs file is actually the entry point to the source repository of your own version of Emacs. In effect, when you configure Emacs you create a fork, which has its own source that you need to maintain. This is not configuration. This is programming, and you should approach it like you do all programming. What does that mean? Modularization. Clean code. Code comments. Source control. Tests. But modularization and source control are the ones that are most important for my Emacs configuration. I have loads of files in ~/emacs, and every kind of extension I do has its own kind of file or directory to put it in. The ~/emacs directory is checked out from source control, and has customizations for different platforms. That's why my .emacs file is 4-5 lines long: two for setting customizations that are specific to this computer, and the rest to load the stuff inside of ~/emacs. And that's all.
So how do you handle modularization of Emacs Lisp code? This won't be a tutorial - just a few pieces of advice that might make things easier.
In no specific order:
- (load "file.el") will allow you to just load another file.
- (require 'cl) will give you lots of nice functionality from Common Lisp
- I recommend you have one place where you add all your load paths. Mine look something like this:
(labels ((add-path (p)
           (add-to-list 'load-path
                        (concat emacs-root p))))
  (add-path "emacs/jde/lisp")
  (add-path "emacs/nxml")
  (add-path "emacs/own")  ;; Personal elisp code
  (add-path "emacs/lisp") ;; Various elisp code, just dumped here
  )

Why do it like this? Well, it gives you an easier way to add full paths to your load path without repeating lots of stuff. This depends on you defining emacs-root somewhere - do define it, it can be highly useful.
- Set custom-file. (The custom-file is the file where Emacs saves customizations. If you don't set a specific file for this, you will end up getting all customizations saved into .emacs which you really don't want.) The code for this is simple. Just do (setq custom-file "the-file-name.el")
- Use hooks and advice liberally. They allow you to attach new functionality without monkey patching.
- If you ever edit XML, NEVER use Emacs' built-in XML editor. Instead, download the excellent nXML package.
- Learn how to use Info and customizations
- Use Ido mode
Addendum: As Phil just pointed out (and as was part of my plan from the beginning), autoloads should be used as much as possible. Also, make sure to byte-compile as much as possible.
Monday, May 19, 2008
Break Java!
As some of you might have noticed, I am not extremely fond of everything about the Java language. I have spent some time lately trying to figure out how I would change the language if I could. These changes are of course breaking, and would never be included in regular Java. I've had several names for it, but my current favorite is unJava. You can call it Java .314 or minijava if you want. Anyway, here's a quick breakdown of what I'd like to see done to make a better language out of Java without straying too far away from the current language:
- No primitives. No ints, bytes, chars, shorts, floats, booleans, doubles or longs. They are all evil and should not be in the language.
- No primitive arrays. Java's primitive arrays are not type safe and are evil. With generics there is no real point in having them, especially since they interact so badly with generic collections. This point would mean that certain primitive collection types can't be implemented in the language itself. This is a price I'm willing to pay.
- Scala style generics, with usage-defined contra/co-variance and more flexible type bounds.
- No anonymous inner classes. There is no need for them with the next points.
- First class methods.
- Anonymous methods (these obviously need to be closures).
- Interfaces that carry optional implementation.
- No abstract classes - since you don't need them with the above.
- Limited type inference, to avoid some typing. Scala or C# style is fine.
- Annotations for many of the current keywords - accessibility specifically, but also things like transient, volatile and synchronized.
- No checked exceptions.
- No angle brackets for generics (they really hurt my eyes - are there XML-induced illnesses? XII?). Square brackets look so much better.
- Explicit separation of nullable from non-nullable values.
Labels:
java,
language design,
programming languages,
unJava
Saturday, May 17, 2008
ThoughtWorks comes to Sweden
A few months back I blogged about the possibility that ThoughtWorks would come to Sweden. Well, this is now reality. I have the extreme honor to be a part of this initiative together with Marcus Ahnve (who blogged about it here). If you read that blog you will know that Marcus will head the Swedish operation. I am immensely happy about having him as my new colleague and also boss. =)
People might ask what my role in this new office will be. That's a valid question. My main goal is to stay out of trouble - and to try my very best not to scare potential customers away. Marcus is extremely capable and will handle all challenges, which means that I'll do my best to bask in the glory of opening a new office. I might also have a hand in any billable work we do, and help out with recruitment and possibly even (shudder) marketing.
Not sure if you caught the meaning of that last sentence, but let me spell two points out. We will be selling work in Sweden from day one. I will be one of the consultants sold, and that means that if you have a Ruby or JRuby project you want to start up, this might be an excellent time to call your local ThoughtWorks office... =)
The recruitment point is simply this. We plan on accumulating the best people we can find - as we aim to do in every country we enter. If you feel like you could fit this bill, mail me and we can talk.
It's important to note that this operation will initially be very low profile. Don't expect center folds in DN's Economy pages. We will work mostly with word of mouth. So if you hear of someone that might need our help, don't hesitate to mention our name. And even though we are low profile, we will still have the resources of the whole company to draw on. A thousand ThoughtWorkers. That feels rather good, and it should feel even better for any prospective clients.
One thing I have been a bit worried about is my commitment to JRuby, and other open source projects. I assure everyone I'll do my best to live up to these commitments. Sleep be damned!
These are exciting times. I for one am looking forward to it very much. Marcus and I will officially start on this in June this year. Get in touch if you have any questions. It's my name separated with dots at gmail, or obini at the official thoughtworks domain.
Thursday, May 15, 2008
Dynamically created methods in Ruby
There seems to be some confusion with regards to dynamically defining methods in Ruby. I thought I'd take a look at the three available methods for doing this and just quickly note why you'd use one method in favor of another.
Let's begin with a quick enumeration of the available ways of defining a method after the fact:
- Using a def
- Using define_method
- Using def inside of an eval
class Obj
  def something
    puts "calling simple"
    @abc = 3*42
    def something
      puts "calling memoized"
      @abc
    end
    something
  end
end

o = Obj.new
o.something
o.something
o.something
As you can see, we can use the def keyword inside of any context. Something that bites most Ruby programmers at least once - and more than once if they used to be Scheme programmers - is that the second def of "something" will not create a lexically scoped definition inside the scope of the first "something" method. Instead it will define a new "something" method in the enclosing class. This means that in the example, the first call to "something" on "o" will first calculate the value and then replace Obj's "something" with the memoized version, for all instances. This pattern can be highly useful.
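A quick sketch makes the effect visible (using the Obj class above):

o  = Obj.new
o2 = Obj.new

o.something   # prints "calling simple", then "calling memoized"
o2.something  # prints "calling memoized" - the redefinition replaced
              # Obj#something for every instance, not just for o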
Another variation is quite common. In this case you define a new method on a specific object, without that object being the self. The syntax is simple:
def o.something
  puts "singleton method"
end

This is deceptively simple, but also powerful. It will define a new method on the metaclass of the "o" local variable, constant, or result of a method call. You can use the same syntax for defining class methods:

def String.something
  puts "also singleton method"
end

And in fact, this does exactly the same thing: since String is an instance of the Class class, this will define a method "something" on the metaclass of the String object. There are two other idioms you will see. The first one:

class << o
  def something
    puts "another singleton method"
  end
end

does exactly the same thing as

def o.something
  puts "another singleton method"
end
This idiom is generally preferred in two cases. First, when defining on the metaclass of self: in that case, using this syntax makes what is happening much more explicit. The other common usage is when you're defining more than one singleton method; in that case this syntax provides a nice grouping.
The final way of defining methods with def is using module_eval. The main difference here is that module_eval allows you to define new instance methods for a module-like object:
String.module_eval do
  def something
    puts "instance method something"
  end
end

"foo".something

This syntax is more or less equivalent to using the module or class keyword, but the difference is that you can send in a block, which gives you some more flexibility. For example, say that you want to define the same method on three different classes. The idiomatic way of doing it would be to define a new module and include that in all the classes. But another alternative would be doing it like this:

block = proc do
  def something
    puts "Shared something definition"
  end
end

String.module_eval &block
Hash.module_eval &block
Binding.module_eval &block

The method class_eval is an alias for module_eval - it does exactly the same thing.
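For comparison, here is a sketch of the module-based idiom mentioned above:

module SharedSomething
  def something
    puts "Shared something definition"
  end
end

[String, Hash, Binding].each do |cls|
  cls.send :include, SharedSomething  # include is private, hence send
end

"foo".something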
OK, so now you know when the def method can be used. Some important notes to remember about it: def does _not_ use any enclosing scope. The method defined by def will not be a lexical closure, which means that you can only use instance variables from the enclosing running environment, and even those will be the instance variables of the object executing the method, not the object defining the method. My main rule is this: use def whenever you can. If you don't need lexical closures or a dynamically defined name, def should be your default option. The reason: performance. All the other versions are much harder - and in some cases impossible - for the runtimes to optimize. In JRuby, using def instead of define_method will give you a large performance boost. The difference isn't that large with MRI, but that is because MRI doesn't really optimize the performance of general def either, so you get bad performance for both.
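If you want to see the difference for yourself, here is a rough benchmark sketch (the Perf class and the numbers are made up for illustration; results will vary between implementations):

require 'benchmark'

class Perf
  def with_def
    1 + 1
  end

  define_method(:with_define_method) { 1 + 1 }
end

perf = Perf.new
n = 1_000_000

Benchmark.bm(16) do |bm|
  bm.report("def")           { n.times { perf.with_def } }
  bm.report("define_method") { n.times { perf.with_define_method } }
end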
Use def unless you can't.
The next version is define_method. It's just a regular method that takes a block that defines the implementation of the method. There are some drawbacks to using define_method - the largest is probably that the defined method can't take blocks, although this is fixed in 1.9. But define_method gives you two important benefits. You can use a name that you only know at runtime, and since the method definition is a block, it is a closure. That means you can do something like this:
class Obj
  def something
    puts "calling simple"
    abc = 3*42
    (class << self; self; end).send :define_method, :something do
      puts "calling memoized"
      abc
    end
    something
  end
end

o = Obj.new
o.something
o.something
o.something

OK, let this code sample sink in for a while. It's actually several things rolled into one, and they are all necessary. First, note that abc is no longer an instance variable; it's a local variable in the first "something" method. Second, the funky-looking (class << self; self; end) is the easiest way to get the metaclass of the current object. Unlike def, define_method will not implicitly define something on the metaclass if you don't specify where to put it. Instead you need to do it manually, so the syntax to get the metaclass is necessary. Third, define_method happens to be a private method on Module, so we need to use send to get around this. But wait, why don't we just open up the metaclass and call define_method inside of it? Like this:
class Obj
  def something
    puts "calling simple"
    abc = 3*42
    class << self
      define_method :something do
        puts "calling memoized"
        abc
      end
    end
    something
  end
end

o = Obj.new
o.something
o.something
o.something
Well, it's a good thought. The problem is that it won't work. See, there are a few keywords in Ruby that kill lexical closure - class, module and def are the most obvious ones. So the reference to abc inside of the define_method block will not be a lexical closure to the abc defined outside, but will instead cause a runtime error, since there is no such local variable in scope. This means that using define_method this way is a bit cumbersome in places, but there are situations where you really need it.
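A minimal sketch of these scope gates in isolation:

abc = 123

class ScopeGateExample
  # Uncommenting the next line raises NameError: the class keyword
  # opens a brand new scope, so the outer abc is not visible here.
  # puts abc
end

[1].each do
  puts abc  # blocks are real closures, so this prints 123
end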
The second feature of define_method is less interesting - it allows you to have any name for the method you define, including something random you come up with at runtime. This can be useful too, of course.
Let's summarize. The method define_method is a private method so it's a bit problematic to call, but it allows you to define methods that are real closures, thus providing some needed functionality. You can use whatever name you want for the method, but this shouldn't be the deciding reason to use it.
There are two problems with define_method. The first one is performance. It's extremely hard to optimize the invocation of a define_method method in general. Specifically, define_method invocations will usually be a bit slower than activating a block, since define_method also needs to change the self for the block in question. Since it's a closure it is harder to optimize for other reasons too, namely that we can never be exactly sure about which local variables are referred to inside of the block. We can of course guess and hope and do optimistic improvements based on that, but you can never get define_method invocations as fast as invoking a regular Ruby method.
Since the block sent to define_method is a closure, it is also a potential memory leak, as I documented in an older blog post. It's important to note that most Ruby implementations keep around the original self of the block definition, as well as the lexical context, even though the original self is never accessible inside the block, and thus shouldn't be part of the closed-over environment. Basically, this means that methods defined with define_method can leak much more than you'd expect.
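To make that concrete, here is a sketch (with made-up names) of how a method defined with define_method can pin unrelated data in memory:

class Holder
  def build_method
    big_data = "x" * 10_000_000  # pretend this is some expensive state
    Object.send :define_method, :innocent_looking do
      "I never touch big_data"
    end
  end
end

Holder.new.build_method
# The block behind :innocent_looking retains its defining environment,
# including big_data and the Holder instance, so neither can be
# garbage collected for as long as the method exists.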
The final way of defining a method dynamically in Ruby is using def or define_method inside of an eval. There are actually interesting reasons for doing both. In the first case, doing a def inside of an eval allows you to dynamically determine the name of the method, it allows you to insert any code before or after the actual functioning code, and most importantly, defining a method with def inside of eval will usually have all the same performance characteristics as a regular def method. This applies to invocation of the method, not definition of it - obviously eval is slower than just using def directly. The reason that def inside of an eval can be made fast is that at runtime it will be represented in exactly the same way as a regular def method. There is no real difference as far as the Ruby runtime sees it. In fact, if you want to, you can model the whole Ruby file as running inside of an eval. Not much difference there. In particular, JRuby will JIT compile the method if it's defined like that. And actually, this is exactly how Rails handles potentially slow code that needs to be dynamically defined. Take a look at the rendering of compiled views in ActionPack, or the route recognition. Both of these places use this trick, for good reasons.
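A small sketch of the def-inside-eval trick, with a made-up method name that is only known at runtime:

name = "something_dynamic"

Object.class_eval <<EV
  def #{name}
    "defined through eval"
  end
EV

puts Object.new.something_dynamic  # => defined through eval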
The other one I haven't actually seen, and to be fair I just made it up. =) That's using define_method inside of an eval. The one thing you would gain from doing such a thing is that you have perfect control over the closure inside of the method defined. That means you could do something like this:
class BinderCreator
  def get
    abc = 123
    binding
  end
end

eval(<<EV, BinderCreator.new.get)
  Object.send :define_method, :something do
    abc
  end
EV
In this code we create a new method "something" on Object. This method is actually a closure, but it's an extremely controlled closure, since we create a specific binding where we want it and then use that binding as the context in which the define_method runs. That means we can return the value of abc from inside of the block. This solution has the same performance problems as regular define_method methods, but it at least lets you control how much you close over.
So what's the lesson? Defining methods can be complicated in Ruby, and you absolutely need to know when to use which one of these variations. Try to avoid define_method unless you absolutely have to, and remember that def is available in more places than you might think.
Would type inference help Java?
My former colleague Lars Westergren recently posted a blog (here) about type inferencing, posing the question whether type inference would actually be good for Java, and if it would provide any benefits outside of just "less typing".
In short: no. Type inferencing would probably not do much more than save you some typing. But how much typing it would save you could definitely vary depending on the type of type inference you added. The one version I would probably prefer is just a very simple hack to avoid writing out the generic type arguments. One simple way of doing that would be to allow an equals sign inside of the angle brackets. In that case you could do this:
List<=> l = new ArrayList<String>();
List<String> l2 = new ArrayList<=>();

Of course, you can do it on more complicated expressions:

List<Set<Map<Class<?>, List<String>>>> l = new ArrayList<=>();

This would save us some real pain in the definition of genericized types, and it wouldn't strip away much stuff you need for readability. In the above examples it would just strip away one duplication, and you don't need that duplication to read it correctly. The one case where it might be a little bit harder to read would be if you defined a variable and assigned it somewhere else. In that case the definition would need to carry the type information, so the instantiation would use the <=> syntax. I think that would be an acceptable price to reduce the verbosity of Java generics.
Another kind of type inference that would be somewhat useful is the kind added to C#, which is only local to a scope. That means there will be no type inference on member variables, method parameters or return values. Of course, that's the crux of Lars's question, since this kind of type inference potentially removes ALL type information in the current text, since you can do:
var x = someValue.DoSomething();

At this point there is no easy way for you to know what the type of x actually is. Reading it like this, it looks a bit frightening if you're used to Java type tags, but in fact this is not what you would see. In most cases you have a small method - maybe 5-15 lines of code - where x is being used in some way or another. In many cases you will see methods called on x, or x used as an argument to method calls. Both of these usages give you clues about what it might be, but in fact you don't always need to know what type it is. You just need to know what you can do with it. And that's exactly what Java interfaces represent. So for example, do you know what class you get back from Collections.synchronizedMap()? No, and you shouldn't need to know. What you do know is that it's something that implements Map, and the documentation says that it is synchronized, but that is it. The only thing you know about it is that you can use it as a map.
So in practice, the kind of type inference C# adds is actually quite useful, clean, and doesn't cause too much trouble - especially if you have one of those fancy IDEs that do method completion... =)
From another angle, there are some things that type inference could possibly do, but that you will never see in Java. For example, say that you assign a variable to something, and later you assign that variable to some other value. If these two values are distinct types that don't overlap in the inheritance chain, you will usually get an error. But if you have an advanced type system, it will do unification for you. The basic versions will just find the most common supertype (the disjunction), but you can also imagine the compiler injecting a new type into your program that is the union of the two types in use. This will provide something similar to duck typing while still retaining some static type safety. If your type system allows multiple inheritance, the synthetic union type might even be a subclass of both the types in question.
So yeah. The long answer is that you can actually do some funky stuff with type inference that doesn't immediately translate to less typing. Although less typing and better abstractions are what programming languages are all about, right? Otherwise assembler provides everything we want.
Wednesday, May 14, 2008
A New Hope: Polyglotism
OK, so this isn't necessarily anything new, but I had to go with the running joke of the two blog posts this post is more or less a follow-up to. If you haven't already read them, go read Yegge's Dynamic Languages Strike Back, and Beust's Return Of The Statically Typed Languages.
So let's see. Distilled, Steve thinks that static languages have reached the ceiling for what's possible to do, and that dynamic languages offer more flexibility and power without actually sacrificing performance and maintainability. He backs this up with several research papers that point to very interesting runtime performance improvement techniques that really can help dynamic languages perform exceptionally well.
On the other hand, Cedric believes that Scala is bad because of implicits and pattern matching, that it's common sense to not allow people to use the languages they like, that tools for dynamic languages will never be as good as the ones for static languages, that Java generics aren't really a problem, that dynamic language performance will improve but that this doesn't matter, that static languages really haven't failed at all, and that Java is still the language of choice and will continue to be for a long time.
Now, these two bloggers obviously have different opinions, and it's really hard to actually see which parts are facts and which are opinions. So let me try to sort out some facts first:
Dynamic languages have been around for a long time - as long as statically typed languages, in fact. Lisp was the first one.
There have been extremely efficient dynamic language implementations. Some of the Common Lisp implementations are on par with C performance, and Strongtalk also achieved incredible numbers. As several commenters have noted, Strongtalk's performance did not come from the optional type tags.
The dynamic languages in wide use today are not even on the same map with regards to performance. There are several approaches to fixing this, but we can't know how well they will work out in practice.
Java's type system is not very strong, and not very static, as these definitions go. From a type-theoretic standpoint Java offers neither static type safety nor any complete guarantees.
There is a good reason for these holes in Java. In particular, Java was created to give lots of hints to the compiler so the compiler can catch errors where the programmer is inconsistent. This is one of the reasons that you very often find yourself writing the same type name twice, including the type name arguments (generics). If the programmer makes a mistake on one side, the compiler will be able to catch the error very easily. It is a redundancy in the syntax that makes Java programs very verbose, but helps against certain kinds of mistakes.
Really strong type systems like those of Haskell and OCaml provide extremely strong compile-time guarantees. This means that if the compiler accepts your program, you will never see any runtime errors from the type system. This allows these compilers to generate very efficient code, because they know more about the state of the application at most points in time, compared to the compiler for Java, which knows some things, but not nearly as much as Haskell's or OCaml's.
The downside of really strong type systems is that they disallow some extremely common expressions - things you can intuitively imagine, but that can't be expressed within the constraints of such a type system. One solution to these problems is to add higher kinds, but these have a tendency to create more complexity and also suffer from some of the same problems.
So, we have three categories of languages here. The strongly statically checked ones, like Haskell. The weakly statically checked ones, like Java. And the dynamically checked ones, like Ruby. The way I look at these, they are good at very different things. They don't even compete in the same leagues, and comparing them is not really valid reasoning. The one thing that I am totally sure of is that we need better tools. And the most important tool in my book is the language. It's interesting: many Java programmers talk so much about tools, but they never seem to think about their language as a tool. For me, the language is what shapes my thinking, and thus it's definitely much more important than which editor I'm using.
I think Cedric has a point in that dynamic language tool support will never be as good as that for statically typed languages - at least not when you're defining "good" to be the things that current Java tools are good at. Steve thinks that the tools will be just as good, but different. I'm not sure. To a degree I know that no tool can ever be completely safe and complete, as long as the language includes things like external configuration, reflection and so on. There is no way to include all dynamic aspects of Java, but using the common mainstream parts of the language will give you most of them. As always this is a tradeoff. You might get better IDE support for Java right now, but you will be able to express things in Ruby that you just can't express in Java, because the abstractions would become too large.
This is the point where I'm going to do a cop-out. These discussions are good, to the degree that we are working on improving our languages (our tools). But there is a fuzzy line in these discussions where you end up comparing apples and oranges. These languages are all useful, for different things. A good programmer uses his common sense to provide the best value possible. That includes choosing the best language for the job. If Ruby allows you to provide functionality five times faster than the equivalent functionality in Java, you need to think about whether this is acceptable or not. On the one hand, Java has IDEs that make maintenance easier, but with the Ruby codebase you will end up maintaining a fifth of the size of the Java codebase. Is that tradeoff acceptable? In some cases yes, in some cases no.
In many cases the best solution is a hybrid one. There is a reason that Google allows more than one language (C++, Java, Python and JavaScript). This is because the languages are good at different things. They have different characteristics, and you can get a synergistic effect by combining them. A polyglot system can be greater than the sum of its parts.
I guess that's the message of this post. Compare languages, understand your most important tools. Have several different tools for different tasks, and understand the failings of your current tools. Reason about these failings in comparison to the tasks they should do well, instead of just comparing languages to languages.
Be good polyglot programmers. The world will not have a new big language again, and you need to rewire your head to work in this environment.
Labels:
dynamic typing,
holy wars,
polyglot,
static typing
Monday, May 12, 2008
JavaOne 2008, the other half
So, Thursday got off to a late start. For some strange reason I didn't feel motivated to go see the Intel General Session, so I showed up for Nick's session about JRuby on Rails deployment instead. Nick did a good job of outlining both the problem and the solution, and I have to say that this presentation was a good end cap for the JRuby week at JavaOne.
I had planned to go to some sessions after that, but I ended up hacking on JRuby instead. An interesting parser bug reared its head.
So the next session I went to was the Filthy Rich Clients one. Quite entertaining, although my interest in Swing is not what it used to be.
The final session of the day was the BOF about writing great technology books. This proved highly enjoyable, since joining Josh Bloch and Brian Goetz as panelists were Bert Bates and Kathy Sierra. They did a wonderful job talking about how to write books that capture the reader's attention and how to correctly use the brain's weaknesses against it. I am tempted to say that this was the best session of the whole JavaOne. Brilliant.
On the Friday I was up early and sat in on Gosling's Toy Story. Always funny, and some cool things there. For a geek like me, the CERN stuff and jMars were especially cool.
I managed to see quite a lot of sessions during the rest of the day. More Effective Java was useful as always, the Maxine Virtual Machine looks really cool, and Neal Ford did an excellent job of comparing JRuby and Groovy, highlighting both the differences and similarities between the two. Finally, the Jython session talked about some of the implementation challenges we in the JRuby team have wrestled with too, implementing a highly dynamic language on top of the JVM.
All in all, this JavaOne definitely stood out as the first non-Java-language JavaOne for me. And I didn't even attend a single one of the gazillions of JavaFX presentations. It was a good year in general, and specifically for dynamic languages.
Oh, and my book is the 5th bestseller in the conference bookstore. Yay!
Thursday, May 8, 2008
JavaOne halfway point
To say that I am seriously tired of JavaFX at this point would be a gross understatement. So let's not even go there.
CommunityOne was a nice event. I like the feeling of it more and more, and the new open spaces approach seemed to be really successful. The alternative languages presentations were well attended and good. I recommend anyone in the Bay Area, or anyone attending JavaOne, to make the effort to go to CommunityOne next year. It's definitely worth it.
Tuesday was, for undisclosed reasons, a day where I didn't attend many presentations. In fact, I missed both the keynote and Tom and Charles's JRuby talk. Bad on me. I did manage to go to both the technical session and the BOF about upcoming language features in Java 7. Let me say immediately that I don't want pluggable type systems as a part of Java. Yes, they are useful in certain settings, but they don't fit Java. Not at all. I'm all for making it possible to have annotations in more places, but not for type systems.
Wednesday was "my" day of the conference. It started out early with the Script Bowl, where Groovy, JRuby, Jython and Scala faced off in three different challenges. I was one of the three judges. It was actually great fun. Some people have gotten annoyed at it, but I feel that they are taking it too seriously. This format was a good way of introducing the audience to the capabilities of four languages that are sometimes very much alike and sometimes very different.
It's also interesting to note that Charles was the only one on the panel who solved his challenges by asking the community for help. In fact, Charles didn't do any of them himself, while the other three wrote their code in isolation. My opinion is that this shows a difference in attitude between the communities, and it's very interesting.
After that I took it easy for some time, and then it was time for my JRuby on Rails presentation. It went fairly well and was also well attended. I made a serious mistake in my database configuration, but Tom helped me out and the rest of the demonstrations went well.
I did a book signing session after that, and then headed back to the office for a while before coming back and holding my JRuby at ThoughtWorks BOF, which was also nice.
Today I'm going to take it easy and relax. I'll go to Nick's session in an hour and then I'll go to Josh and Brian's BOF tonight. After that we host a party which I'm going to attend for a while. It's going to be a nice day.
Monday, May 05, 2008
Faulty routes on MacOS X
This has been driving me insane, but I haven't had time to actually try to fix the problem until now, so this is a small writeup about the symptom and what you can do about it. Sadly, it's kind of a hard problem to search for unless you already know what's wrong.
This was my problem: at work, accessing LiveJournal worked fine, but at home it didn't. Basically, I got a Server Not Found from Firefox when trying to access it from home. After a while I realized that the distinction wasn't between home and work, but between wireless and wired networking. See, at work I usually use a wire to connect, but I only use wireless at home.
My first shot at a solution was something that solved a similar problem a while back - namely flushing the DNS cache. Of course, doing an nslookup and verifying the result told me that the cache wasn't the problem, but I tried anyway, because MacOS X sometimes plays funny tricks with this cache. So, to do this, just do "sudo lookupd -flushcache" in a console window, and then restart the network interface.
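In concrete terms, the attempt looked something like this (a sketch; lookupd is the Tiger-era resolver daemon, and on Leopard the equivalent would be "sudo dscacheutil -flushcache"):
nslookup livejournal.com    # verify that the name resolves - it did, so the cache wasn't the culprit
sudo lookupd -flushcache    # flush the DNS cache anyway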
I tried to figure out what was wrong by pinging and doing a traceroute. Both of these gave clues that something was wrong. Traceroute gave me "Can't assign requested address" from the bind system call. That finally made me realize that the route tables for the network interface were screwed. Lo and behold, doing a "sudo route flush" and restarting the interface solved my problems. Apparently there were about 10 faulty routes stuck in the cache, and for some reason MacOS X didn't flush this cache automatically when connecting to a different wireless network. Highly annoying, but it's finally solved.
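For completeness, the whole diagnosis and fix boils down to something like this (a sketch, assuming en1 is the wireless interface - check ifconfig if yours is named differently):
ping livejournal.com        # gave hints that something was off
traceroute livejournal.com  # failed with "Can't assign requested address" from bind
sudo route flush            # throw away the stale routes
sudo ifconfig en1 down      # bounce the interface...
sudo ifconfig en1 up        # ...so it comes back up with a clean routing state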
Sunday, May 04, 2008
Just add scaling!
(To protect the innocent, all names of programming languages have been changed. Any resemblance to existing names is purely for dramatic effect.)
I have heard many times now that LRM doesn't scale. People have been telling me this in all kinds of different circumstances. They tell me that languages like Deep Throttle and Moulder have scaling, but LRM doesn't. Even the Scally language has scaling - you could understand that from the name, I guess. So why doesn't LRM have this feature?
Since I'm rather fond of LRM, and really don't want to go back to either Deep Throttle or Moulder, I decided to add scaling to LRM. I mean, how hard could it be if all these other languages have it? I have implemented languages before, so this should just be a case of finding the correct implementation of the feature, grafting it onto LRM's code base, and then providing a patch to the mailing lists.
Since I knew LRM didn't have scaling, I decided to investigate other languages that have it. I began with the original language, the one people write operating systems in. Obviously a language like Deep Throttle has scaling. So I started reading the books, looking through the syntax. But I couldn't find anything there. What was I missing? Was there some obscure syntactic element that wasn't documented, that provided scaling? Or was it hidden somewhere? Maybe the pointers were in reality another word for scaling? But no, it didn't seem that way. I decided to break open GDPC, the Gnu Deep Throttle Compiler, and see if this scaling thing was actually something the compiler did automatically for you - a bit like register allocation. But I didn't find any scaling allocation. There was lots of other neat stuff there, but all of it was quite mundane translations from Deep Throttle code into machine code.
Feeling utterly dejected, I turned my attention to Moulder. I assume that you are all familiar with the language - it's used in many places around the world and provides lots of really advanced features such as garbage collection, a virtual machine, and yes - scaling! Of course, this is where it would be. Maybe Deep Throttle had hidden the feature somewhere obscure, but I was sure Moulder would make it easier to find. I started out with the syntax again, looking through the things you could do with the language. Interestingly, Moulder is a really small language, when you get down to it. It's small, but you need to say lots of things to get anything done. I couldn't find the syntax for scaling, so I started entertaining other ideas. Maybe all those extra words "protected final static Foo" were actually necessary. Maybe the scaling happened in the spaces between the words, or in some strange interaction between them? The only way to find out was to crack open the compiler and look at what was happening. In this case it was actually easier, because the compiler did even less than Deep Throttle's compiler. Frankly, the Moulder compiler just outputs some bytecodes. There was no hidden scaling inlining in the Moulder compiler.
You see where this is heading? Right, I had to investigate the bytecodes that Moulder uses. Where is that precious scaling bytecode? I assumed it would be called something like invoke_scaling or maybe ldscaled. But nothing. There was no scaling bytecode. All of the existing bytecodes just did the same mundane things that I know LRM can already do. I was now feeling ready to give up, but I came up with one final idea. Scally is famous for being a language with scaling. Its creators even named it after that feature. So if both Moulder and Scally have scaling, maybe it's somewhere in Moulder's gigantic standard library?
But guess what? I didn't find it. I still haven't found anyone who knows how you implement Scaling in a language, so I guess that LRM will never have it... Anyone who cares to enlighten me, please send me a detailed email with an implementation of Scaling. I really feel the need to know how this thing works.
Labels:
deranged,
programming languages,
scaling
Saturday, May 03, 2008
The Twitter Conspiracy
First, let me warn you. This is a conspiracy theory. It's got all the usual logical fallacies and problems of a conspiracy theory. Add to that the fact that it's almost guaranteed not to be true, and you might ask why I'm writing about it. Well, the thing is, if I were in charge of Twitter, I would do something like this. Of course, I'm not in charge, and the people in charge are probably saner and more well-adjusted people than me. But still, who knows?
What is the idea then? Well, it's actually really simple. Think about the Twitter network, the kind of people who connect there, and the way things spread. What is the difference between Twitter and mobile texting, for example? First, everything is multicast by default. It's not reciprocal - you don't need to know how many hundreds or thousands read what you write. And you don't care how many others read the people you read. You are restricted in length. And the whole thing is open enough that you can follow all the tweets going on in the system.
The characteristics I've described mean that Twitter is more or less the ideal memetic engine. What I mean by this is that it's a wonderful way to spread your ideas, if you can express them in a concise and readable way. This means that certain memes don't work well in this setting, but most do. And you can convince more people to join, because if your tweets are interesting enough, someone will notice them in the all-tweet. Also, you can see who the people you are interested in follow, which means that you can spread your network selectively, but really quickly.
These are not really part of the theory. They are just the axioms. So what's the theory then? Well, what is Twitter doing with all this data? If I were them, I would use it to do research on memetic spread and viral marketing. I would use it to try out ideas based on how well they are taken up. Finding this information is not really hard when you have control over all the messages happening. In fact, you could actually do it even outside of Twitter, by using the published tools correctly.
What got me thinking along these lines? Well, the whole TechCrunch debacle was the thing that triggered the idea. How would it work in practice? Well, first, the Twitter gang couldn't necessarily know what kind of people would take up Twitter the most, so the cultural fit of Twitter is actually mostly self-organizing. The people and groups taking part in Twitter select themselves for this experiment. Now, of course there are lots of overlapping groups, and that gives even more interesting possibilities for the sociodynamics of meme transfer between non-overlapping social circles.
Take the Ruby people, who have a quite significant presence on Twitter. They are one of the test groups in my theory, and the TechCrunch article was a very directed way of inserting a meme and seeing how fast, and to how many people, it propagated. It was very easy to insert this into the blogosphere, since Twitter could have had any number of people "leak" the information in the TechCrunch article. Once it was out, they just needed to set up some suitable filters and follow the spread. They also inserted a counter-meme, through one of their employees, to see if it worked as an "antidote" to the first meme, or which one of them was stronger. All in all, I think they got enough material for several research articles out of this stunt.
OK, so really, you don't need to reach for conspiracy theories to explain the TC debacle. They're not necessary, and Occam's razor demands that we choose the simplest available conclusion that explains all the facts. This theory does not fall into that category. But it's still an entertaining notion.
And I predict that sooner or later, someone will use Twitter, or another network like it, to do this kind of research. It's a question of time. This kind of information is way too valuable, both for marketing purposes and for the understanding of the human mind, for it not to happen. The question is: will you know about it, when you're participating in their research?
Let's not forget the lovely recursive interpretation that this blog post is a way of doing the same kind of research I've just described.