I have been involved in several discussions about programming languages lately, and people have assumed that since I spend lots of time in the Ruby world, and with dynamic languages in general, I don't like static typing. Many people in the dynamic language communities definitely express opinions that sound like they dislike static typing quite a lot.
I'm trying not to sound defensive here, but I feel the need to clarify my position on the whole discussion, partly because I think people are being extremely dogmatic and short-sighted in taking that attitude.
Most of my time I spend coding in Java and Ruby. My personal preference is for languages such as Common Lisp and Io, but there is no real chance to use them in my day-to-day work. Ruby neatly fits the purpose of a dynamic language that is close to Lisp, for my taste. And I'm involved in JRuby because I believe that there is great worth in the Java platform, but also that many Java programmers would benefit from spending less time in the Java language.
I have done my time with Haskell, ML, Scala and several other quite statically typed languages. In general, the reason I don't speak that much about those languages is that I have less exposure to them in my day-to-day life.
But this is the thing. I don't dislike static typing. Absolutely not. It's extremely useful. In many circumstances it gives me things that I really can't have in a dynamic language.
Interesting thought: Smalltalk is generally called a dynamic language with very late binding. There are no static type tags and no type inference happening. The only type checking that happens will happen at runtime. In this regard, Smalltalk is exactly like Ruby. The main difference is that when you're working with Smalltalk, it is _always_ runtime. Because of the image based system, the type checking actually happens when you do the programming. There is no real difference between coding time and runtime. Interestingly, this means that Smalltalk tools and environments have most of the same features as a static programming language, while still being dynamic. So a runtime based image system in a dynamic, late bound programming language will actually give you many of the benefits of static typing at compile time.
So the main takeaway is that I really believe static typing is extremely important. It's very, very useful, but not in all circumstances. The fact that we reach for a statically typed programming language by default is something we really need to work on, though, because it's not always the right choice. I'm going to say this in even stronger words: in most cases a statically typed language is a premature optimization that gets very much in the way of productivity. That doesn't mean you shouldn't choose a statically typed language when it's the best solution for the problem. But this should be a conscious choice, not one made by fiat just because Java is one of the dominant languages right now. And if you need a statically typed language, make sure you choose one that doesn't revel in unnecessary type tags. (Java, C#, C++, I'm looking at you.) My current choice for a static language is Scala - it strikes a good balance in most cases.
A statically typed language with type inference will give you some of the same benefits as a good dynamic language, but definitely not all of them. In particular, you get different benefits and a larger degree of flexibility from a dynamic language that can't be achieved in a static language. Neal Ford and others have been saying that the distinction between dynamic and static typing is the wrong one to draw. The real question is between essence and ceremony. Java is a ceremonious language because it needs you to do several dances to the rain gods to declare even the simplest form of method. In an essential language you will say what you need to say, but nothing else. This is one of the reasons dynamic languages and type-inferred static languages sometimes look quite alike - it's the absence of ceremony that people react to. That doesn't mean any essential language can be replaced by another. And with regards to ceremony - don't use a ceremonious language at all. Please. There is no reason to, and there are many alternatives that are better.
My three-level architecture of layers should probably be updated to say that the stable layer should be an essential, statically typed language. The soft/dynamic layer should almost always be a strongly typed, dynamic, essential language, and the DSL layers stay the same.
Basically, I believe you should be extremely pragmatic with regard to both static and dynamic typing. They are tools that solve different problems. But our industry today has a tendency to be very dogmatic about these issues, and that's the real danger, I think. I'm happy to see language-oriented programming and polyglot programming get more traction, because they improve a programmer's pragmatic sensibilities.
22 comments:
When programming test-first in any language, it is also "always runtime," almost as if using Smalltalk and a workspace, or Lisp and a REPL. So the way to always be at runtime in Ruby is to use the REPL or, better, use TDD.
Smalltalk's image (and, back in the day, the images of more Lisps) had other advantages, e.g. not having to load and recreate so much data. With TDD or a Ruby or Python REPL you are always at runtime, but you always have to recreate more data than when using an image. OK, done rambling.
"But this is the thing. I don't dislike static typing. Absolutely not. It's extremely useful. In many circumstances it gives me things that I really can't have in a dynamic language."
Could you describe some of the things it gives you that a dynamic language doesn't?
In general, you talk about the advantages of static languages or the advantages of dynamic languages, without ever giving examples.
For myself, I know I'm much happier and more productive using Lisp, Smalltalk or Ruby than using C++ or Java. I have my own personal list of reasons. Some of them are probably just "essence" vs "ceremony", but some are definitely dynamic vs static issues.
I'd be very interested in hearing other people's perspectives. In particular, I'm interested in real-world, pragmatic advantages and disadvantages. Not the textbook, theoretical ones.
-Rich-
@rich
You asked about the advantages of statically typed languages.
But before I start: JAVA IS NOT A STATICALLY TYPED LANGUAGE.
Despite what they claim, it is not a real statically typed language at all and should never be used as an example of one.
Java lacks type polymorphism, hacks in null types for every possible variable with runtime type checking (and runtime null pointer exceptions). It overcomes the lack of type polymorphism with casts (again falling back to runtime type checking). It's too dumb to infer types as well.
Hence you get all the disadvantages of dynamically typed languages served with a topping of all the disadvantages of statically typed languages.
Alright, enough with the dont-take-java-as-an-example-disclaimer...
The advantages:
1. Catch every possible type error at compile time and be forced to deal with it.
=> no need to come up with all kinds of unit tests that try every possible branch. Less work. Any large-scale project in a dynamically typed language needs unit tests, and a large majority of those tests are manual implementations of what a static type checker does for free.
=> bugs can only be semantic. That is: the program behaves wrongly or returns the wrong data, but it won't get into any problematic "let's halt execution" state.
2. Type erasure. No need to ever store type information in memory. No need to make runtime branching decisions based on types.
=> Uses half of the memory when you have lots of small objects, like boxed integers.
=> Doesn't lose CPU verifying the type over and over again.
=> Doesn't lose CPU selecting a code branch based on type.
3. Type Inference. Let the compiler tell you what the true types of things are.
=> Types are often part of documentation/API generation. By not having to declare any type, but having it _inferred_, you can automatically generate very useful documentation.
=> Since inference (in most real statically typed languages) infers the most _generic_ type by default, you are notified that you can use the same function in far more contexts than you might previously have considered.
4. Automatic type-based branching (multi-methods). That is, you can declare several functions of the same name with different implementations for different types. OOP languages usually make this easy for _one_ type (the type of the object itself), but that is no solution for operators like (+) or (*). Their methods really shouldn't be part of either the lhs or the rhs.
5. Type-based induction. Although not very common, still very useful for writing super-generic code. Not only can types be inferred; types themselves are structured as well, into sum and product types.
=> You could inspect these types and have functions that derive type-specific implementations, for example of a generic fold or map function.
However, this all might seem somewhat abstract. Here are some examples in Haskell.
-- A boolean type
data Bool = True | False
-- A generic tree
data Tree a = Branch (Tree a) (Tree a) | Leaf a
-- A tree consisting of booleans
myTree = Branch (Leaf True) (Branch (Leaf False) (Leaf True))
-- a function that inverts a boolean
not True = False
not False = True
-- a function that applies another function to every leaf of a tree
applyTree f (Leaf a) = Leaf (f a)
applyTree f (Branch l r) = Branch (applyTree f l) (applyTree f r)
-- my inverted tree
myInvertedTree = applyTree not myTree
So, what types would Haskell infer for all these functions?
myTree :: Tree Bool
-- input: Nothing
-- returns: A tree of booleans
not :: Bool -> Bool
-- input: a boolean
-- returns: a boolean
applyTree :: (a -> b) -> Tree a -> Tree b
-- inputs:
-- 1. a function that converts something of type 'a' to type 'b'
-- 2. a tree of type 'a'
-- returns: a tree of type 'b'
Notice how generic the functions can be and how the types can be inferred and are inferred to their most generic shapes.
Anonymous, I'm not sure I agree with all your comments. For example, I think you're mischaracterizing unit tests. Personally, I think all code (static or dynamic) requires unit testing, and most unit tests focus on program logic, not on type issues.
Also, I don't buy the "Java is not static" rant. I agree, null pointers and casting create a loophole in the whole compiler-checks-types paradigm, but most commonly recognized static typed languages have this problem. More importantly, I don't think we should be defining languages based on rare corner cases. 99% of the time, Java is statically typed.
Finally, type inference is hardly a requirement for static typing. Static typing, as I understand the definition, simply means the type is checked at compile time.
It seems to me, most arguments for static typing boil down to two cases: 1) type checking at compile time produces more stable programs and 2) static languages are more efficient/faster.
In practice, I find #1 to be just untrue. Type errors, at least in my code, are rare and easy to find. Plus, dynamic languages usually give me a lot of tools to help me improve my code (interactive shells to experiment with live code, using reflection/metaprogramming in testing to improve unit tests, etc). When I do a full cost/benefit analysis, static typing always comes up short--at least in my experience.
That's why I classify #1 as a theoretical advantage, not a pragmatic one.
#2 often seems like premature optimization. I'm also not convinced it holds true in all cases. Some versions of Lisp claim speeds comparable to or better than C's. And it's been shown that you can write object-oriented code that heavily uses polymorphism to eliminate the need for branching. This can lead to significant speed increases, regardless of whether you're using a dynamic or a static language.
Still, I guess, if you really need speed for an application, you might be better choosing a static language.
However, I'm more interested in the pragmatic advantages of each.
For example, I love metaprogramming in dynamic languages. A lot of the techniques are powerful, elegant and cool--but they can be a bit dangerous. Redefining the methods for a core class can lead to very mysterious bugs, especially if you're working in a big team. So, while I might hesitate to use some of these methods in production code, I have no qualms about using them in my testing. This lets me make unit tests that are both simpler and more complete than their static-bound equivalents.
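A small Ruby sketch of that testing trick. The Mailer and Signup classes, and the names used here, are invented purely for illustration:

```ruby
# Hypothetical classes: a mailer that would normally hit the
# network, and a signup flow that depends on it.
class Mailer
  def deliver(address)
    raise "would really send mail to #{address}"
  end
end

class Signup
  def initialize(mailer)
    @mailer = mailer
  end

  def register(address)
    @mailer.deliver(address)
    "registered #{address}"
  end
end

# In a test, redefine #deliver on this one instance only: the
# production class stays untouched, and no static type signature
# stands in the way.
delivered = []
mailer = Mailer.new
mailer.define_singleton_method(:deliver) { |addr| delivered << addr }

Signup.new(mailer).register("rich@example.com")
delivered  # => ["rich@example.com"]
```

The singleton-method override is scoped to one object, which keeps the "mysterious bugs in a big team" risk out of production code while still giving the test full control over the collaborator.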
On the other hand, one of the biggest benefits I see from static typing (or maybe it's from ceremonious languages) is that the method definitions are often self documenting. If you have a method goTo(x, y) you don't know whether the values should be doubles or ints. goTo(double x, double y) explicitly defines its expectations.
This is particularly helpful when I'm using an IDE that supports code completion. The IDE will show me the method signature as I'm typing. Of course, for this to really be useful, you must use well named methods. Unfortunately, I often find the names too vague or their possible use too ambiguous, so I must look at the documentation anyway. For example, are the x and y absolute or relative measurements? What unit are they in? Etc.
-Rich-
In most cases a statically typed language is a premature optimization that gets very much in the way of productivity.
I used to think this as well. I have been programming in dynamically typed languages for about 10 years, and seriously started doing Haskell about a year ago. I'm now at the point where the types don't get in the way anymore. They force you to think about the behavior of your functions, and in the case of Haskell, even about the possible side effects of your functions. I now have far fewer bugs in my code (and the bugs that I do have are mostly very interesting bugs). Maybe it is just me, but static typing completely works for me (though so far, only Haskell's type system is strong enough to be convenient).
Another under-qualified commenter putting forward potentially misleading, and therefore damaging, information to anyone willing to learn.
Type Theory is a science, not a blog-based debate. Learn it. It's only fair to demand that you learn the topic on which you feel compelled to comment, don't you think?
Rich: the problem with your example is that the type signature defined for goTo() is not properly abstracted. It defines the datatype in terms of which x and y should be implemented rather than using an abstract type that specifies what they mean.
What would make sense is to have goTo accept a Point if it moves something to an absolute coordinate, or a Vector if it moves by a relative amount. Somewhere in the code you may use a constructor from a pair of doubles to a Point or to a Vector, but in most of your code, you should be using Points and Vectors. This is just as true in a dynamic object-oriented language like Smalltalk as it is in an expressive statically-typed language.
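That abstraction carries over even to a dynamic language. A minimal Ruby sketch, where Point, Vector, Turtle, go_to and move are all hypothetical names chosen for illustration:

```ruby
# Wrap the raw doubles in small value types so a coordinate says
# whether it is absolute or relative.
Point  = Struct.new(:x, :y)    # an absolute position
Vector = Struct.new(:dx, :dy)  # a relative displacement

class Turtle
  attr_reader :position

  def initialize
    @position = Point.new(0.0, 0.0)
  end

  # Move to an absolute coordinate; expects a Point.
  def go_to(point)
    @position = point
  end

  # Move by a relative amount; expects a Vector.
  def move(vector)
    @position = Point.new(position.x + vector.dx, position.y + vector.dy)
  end
end

t = Turtle.new
t.go_to(Point.new(3.0, 4.0))
t.move(Vector.new(1.0, -1.0))
t.position  # => Point with x=4.0, y=3.0
```

Without static checking nothing stops you from passing a Vector to go_to, but the names alone resolve the absolute-vs-relative ambiguity Rich complained about; a static type system would additionally reject the mixed-up call at compile time.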
One of the side benefits of static typing is that using a Point or a Vector need not be any less efficient than a pair of doubles. Static typing, used properly, allows you to think about the problem domain without worrying about making your program less efficient in doing so. Of course, boxed objects in Java screw this up, but that's specific to Java.
In general, a type should be a treated as a predicate, stating what you can assume about a data value, rather than simply a concrete description of how the value is implemented (this is useful information to know in some places, but irrelevant in most parts of your code). In fact, there's no reason you can't have a static type saying "this could be any value"—for example, ignoring primitive types, that's exactly what the Object type is in Java. A dynamically typed language is really just a degenerate statically typed language with only one static type: "any."
Sameer: I may not have explicitly stated it, but I was talking about calling methods from other people's code. Especially, when it's an API or code base that I'm not really familiar with.
I don't know about your workplace, but I'm often calling methods like goTo(double x, double y) at mine. Also, it would probably be reasonable to assume that goTo() required absolute coordinates, while something like move() required relative ones. But all that is really beside the point.
I was trying to give a simple, transparent example. The problem is even worse when passing objects.
Many times, while using Dynamic languages, the IDE gives me the method signature, but I have no idea what sort of objects are expected. One of the few nice things that I can say about Java is, I almost always know what sort of objects I need.
Again, this is usually only a problem when dealing with unfamiliar APIs. It typically goes away quickly with a little use.
I'm also not sure I'd agree that dynamic languages are just a degenerate static typed language. After all, I'm much more interested in the tools that dynamic typed languages typically give me (strong metaprogramming, evals, interactive shells, etc.), than in the dynamic typedness itself.
OK, here's another pragmatic example for dynamic (or at least "duck typed") languages: I find it much easier to build systems from the bottom up in dynamic languages. Now, I haven't tried Haskell, so I won't speak for it. But C++, Java and friends seem to really want a top-down design. Whenever I've tried a bottom-up approach (which is often my preference), I find myself constantly fiddling with (if not fighting against) the type hierarchy.
-Rich-
> "Type errors, at least in my code, are rare and easy to find."
Static languages help you find more than 'you've-used-the-wrong-type-here' errors.
It also helps you with wrong numbers of parameters. So a call like
log.writeMessage(msg)
will produce a compile-time error if you defined the method as
writeMessage(msg, severity)
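For contrast, a dynamic language defers that same mistake to runtime. A minimal Ruby sketch (the Log class and its method are invented for illustration):

```ruby
# Hypothetical Log class: in Ruby the arity mismatch is only
# caught when the call actually executes, not at compile time.
class Log
  def write_message(msg, severity)
    "[#{severity}] #{msg}"
  end
end

log = Log.new
log.write_message("disk full", "WARN")  # => "[WARN] disk full"

begin
  log.write_message("disk full")  # missing severity argument
rescue ArgumentError
  # Raised only when this line runs; a compiler would have
  # flagged the call site before the program ever started.
end
```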
Auto-discovery is a massive win, too; being able to see
CreateProcess(path, args, waitForExit)
as a tooltip is a great little productivity tool. It saves a lookup in the documentation, and allows you to explore options by tacking '.' onto the end of a variable to see the methods it provides.
I also find it easier to write higher-order code in languages which have type inference. If I write an anonymous method in C# like this:
customer => customer.Name
that expression is strongly typed. In this case, Converter<Customer, string>. So I can write
customerList
.ConvertAll(customer=>customer.Name)
.Pipe(printStrings);
and the type information ensures that my customer list is converted to a string list, which is a valid input to the printStrings() method. The great thing is that, half-way through writing the line, the compiler is giving me hints about exactly what methods are applicable.
I think I embrace type systems because the computer catches whole classes of error. I write so that the compiler becomes a kind of autistic sidekick. Let's face it, one of you is going to have to be the nitpicker. That means I spend a little less of my mental energy on minutiae.
I think in the past static systems haven't been clever enough to help the user without also hindering. I think type inference is changing that, by kicking the high priests out of the temple of ceremony. I can write huge chunks without type declarations.
For example, imagine that you've got a file containing a script. You want to load it, remove comments and blank lines, and throw it into an interpreter. This is typesafe C#:
fileName
.Pipe(File.ReadAllText)
.Split('\n')
.ConvertAll(line => Regex.Replace(line, "(.*)(#.*)", "$1"))
.ConvertAll(line => line.Trim())
.FindAll(line => line.Length > 0)
.ConvertAll(line => line.ToUpper())
.Pipe(interpretCommands);
That's an awful lot of work with no type declarations at all.
Hope that all makes some sense.
Steve
"In general, a type should be a treated as a predicate, stating what you can assume about a data value, rather than simply a concrete description of how the value is implemented"
Isn't this just an interface? Declare a class as
class Mallard: IDuck
and you can write methods which take IDuck and call Quack() and Waddle() statically.
Does it make sense to talk about how bad static type systems are compared to dynamic type systems if you only know the weak type systems of C++ and Java? IMO, not.
Somebody said that in Java it sometimes feels more like fighting against the type system than using it. I feel the same, but in Haskell it's the opposite. There the type system feels like something you miss in dynamic languages.
Excellent post, Ola. We have recently started the Polyglot Programmers User Groups and are planning to have a Scala talk in Chicago this summer. Thanks for all your excellent contributions to our craft.
Remember that while Newton's laws of physics were believed to be accurate in the 1700s and early 1800s, practical experiments in the 20th century revealed the inaccuracy and limitations of these laws. In fact, the physical paradigm of the universe has later shifted a number of times from Relativity and Quantum mechanics to String-theory and Super-String theory to M-Theory.
In a similar way, I used to believe in all the theory behind static typing until one day I got to experience Ruby. Seeing my productivity quadruple compared to Java despite the fact that I had no static type checking, autocomplete, or automated refactoring totally changed my perspective and revealed to me that pragmatically speaking static typing is over-rated.
The point is that it took practice of a dynamic language like Ruby to shift my perspective. And, who knows, practice may change my perspective again when I learn Haskell some day.
I think that most people who debate against dynamic typing with the usual reliability/performance argument have either not experienced a solid dynamic language like Ruby for a few months or have experienced it without following strict test-driven development and got frustrated.
While in my opinion TDD is required with dynamic languages, I believe it is just as important with languages like Java or C#, because after all, having a program compile doesn't mean it will function correctly according to expectations. :)
Can anyone point me to a useful reference that gives specific examples of Haskell's static typing in action? I'm looking for concrete examples of things you can do in Haskell, that you cannot do in other languages. Preferably from a tutorial or online article.
I'm not trying to be a troll here. I'm really interested in the discussion, but so far I'm just not buying it.
Here's the thing. I don't think TDD is just for dynamic languages. If code--any code--isn't tested, you should assume it's buggy. And, testing does not end with unit tests. The code should be sent to a dedicated testing team, then sent to beta testers.
Type errors are, in my experience, trivial to find in dynamic languages. For one to remain undiscovered, it must be lurking in a rarely used branch. We're talking about a seriously rare event, something that would be entirely missed by the developer (which is unlikely, given an interactive shell), by the unit testing, by the testing team and by the beta testers.
So, yes. There's an incredibly slim chance that a type bug may slip by. But bugs are a fact of programming. We can never guarantee that our code is correct, not even in Haskell.
And compilers miss whole classes of more serious bugs. Bugs that are not trivial or easy to find. If you write code with a lot of side effects, and you're working on any medium to large project, you should expect bugs. If you deal with multiple threads (especially with side effects), you will probably find lots of bugs. If your language uses raw pointers or manually managed memory, you're just asking for bugs.
Clearly, static typing alone is not sufficient. Look at C++. It's statically typed, but it also encourages side effects and has serious pointer and memory issues. It's a practical bug factory.
I'm not saying that Haskell code is not stable. I believe that it is, especially when compared with C++ or Java. But, as a largely ignorant outsider, I strongly suspect that its stability comes more from Haskell being a functional programming language than from it being statically typed.
But, hey, I'm willing to be proved wrong.
-Rich-
@Annas:
I used to believe in all the theory behind static typing until one day I got to experience Ruby. Seeing my productivity quadruple compared to Java [...]
Java's really not the be-all and end-all of static typing, though. In fact, I think it's fair to say that no system makes static typing harder. As I understand it, Java has no type inference, and also requires everything to have an explicit type declaration.
This means that Java requires a great deal of extra typing, both on the keyboard and in extra class declarations.
Haskell's type system is much more interesting.
Rich also asked about Haskell, so I'll comment on it next.
@Rich:
Can anyone point me to a useful reference that gives specific examples of Haskell's static typing in action? I'm looking for concrete examples of things you can do in Haskell, that you cannot do in other languages. Preferably from a tutorial or online article.
Have a look at 'A Gentle Introduction to Haskell.'
In Haskell, you don't declare any types, unless you feel the need. It uses something like the inverse of duck typing.
So, imagine that you have a function like 'divide x y = x / y'. Haskell looks at that and says 'since this expression uses the division operator, it will only work with two numbers.' So you get a type error at compile time if you write 'divide "foo" "bar"'. That compares to the runtime error you get with Ruby.
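For comparison, here is roughly what the Ruby side of that looks like; nothing complains until the bad call actually executes:

```ruby
# The same divide function in Ruby: the type mismatch is only
# discovered at runtime, when the body runs.
def divide(x, y)
  x / y
end

divide(10.0, 4.0)  # => 2.5

begin
  divide("foo", "bar")  # no compile-time complaint...
rescue NoMethodError
  # ...but at runtime String has no / method, so this raises.
end
```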
I can't really explain the Haskell type system in a blog comment, but I'd recommend you have a look, if you've got the time.
BTW, the C# team seem enamoured of Haskell, and newer versions of C# are gaining some excellent Haskell-like features. If you're familiar with Java, it might be a more practical way to explore the ideas.
Type errors are, in my experience, trivial to find in dynamic languages. For one to remain undiscovered, it must be lurking in a rarely used branch.
This may be the essence of the discussion. What we're talking about is not just type errors, but any error that may be caught at compile time. I think Reg Braithwaite makes some excellent points here:
can your type checking system do this?
which is interesting for asking 'what can/might compile-time checks be able to do?' rather than just comparing existing mainstream languages.
I agree with dibblego - Type Theory is a science, not a blog-based debate. Learn it. It's only fair to demand that you learn the topic on which you feel compelled to comment.
Type Theory is a science, ...
No it isn't. It may be philosophy. It may be mathematics. It may even be engineering. But it certainly isn't a science. Any more than computer science is a science. (para 6)
... not a blog-based debate.
We seem to be disproving that notion.
Rick writes:
If code--any code--isn't tested, you should assume it's buggy.
This is quite incorrect. All testing can do is show that there is a bug; it cannot prove that there isn't one. A good type system can prove that certain bugs don't exist.
As for an example of Haskell's static typing in action, check out Peyton Jones's papers on transactional memory. If you've ever done any serious work with threads and had to deal with the locking issues, you'll understand how wonderful it is to have a type system prove that you don't have any bugs in your multiprocessing code.
@Steve
Thanks, I'll check those out.
@Curt
Yes, it's true. You can never prove a program is bug free. I think I even addressed that in an earlier post. But, that doesn't mean testing is useless. My point is, we should do testing, regardless of what programming language we use.
In my opinion (and the opinion of others) static typing only solves a relatively small set of problems. And, the benefit you get from it may not be worth the effort. Obviously, the exact ratio of benefit/cost varies from language to language. Java, for instance, has a relatively low benefit for a relatively high cost. Haskell seems better, but I'm still not convinced that it's worth the cost.
I'll definitely take a look at the Peyton Jones papers, but I have to tell you, I'm highly suspicious. I just don't see how static typing--on its own--can help with concurrent code.
Here's the problem. If you have a program that uses shared memory, even if you have a perfectly statically typed solution, you may get concurrency errors through race conditions. The problem comes from using shared memory, and has nothing to do with type.
Ok, here's a counter example. Erlang is designed for extremely reliable, highly concurrent systems. It has proven itself on many large, real-world projects. And, it's dynamically typed.
It's the lack of shared memory and the strict functional programming that give Erlang its reliability. Type issues don't seem to matter that much.
So, I feel that a lot of this discussion has conflated the advantages Haskell has because it's a functional programming language with the advantages it has because it's statically typed. In my mind, the functional advantages seem much more important than the statically typed ones.
-Rich-
Rich,
In my opinion (and the opinion of others) static typing only solves a relatively small set of problems.
Well, to put it bluntly, this opinion is wrong. Compile-time type checking is capable of solving a far broader range of problems than you can currently imagine. Most people don't actually understand the range of problems it's capable of solving, probably because the most common implementations of compile-time type checking are so miserably weak. I suspect we both agree that you're better off with a fully dynamic system (such as Ruby) than a weak static system (such as Java).
Your doubts are shared by many. However, the programming community continually goes through stages like this. In the late 90s, many developers were highly suspicious of garbage collection and bytecode VMs; they're now very well accepted. In turn, up to about the mid-2000s, many developers thought that the "bondage and discipline" of Java's type system, EJB, and all that sort of stuff was productive; that attitude is slowly changing and we're seeing a strong shift to dynamic languages such as Ruby in consequence. I think we'll see the same thing with static typing over the next decade or so: in ten or fifteen years using a language that doesn't have a powerful type system will be looked upon as we now look upon using a language without object-oriented constructs or garbage collection.
I just don't see how static typing--on it's own--can help with concurrent code.
Well, that's why you should read the paper.
Here's the problem. If you have a program that uses shared memory, even if you have a perfectly statically typed solution, you may get concurrency errors through race conditions. The problem comes from using shared memory, and has nothing to do with type.
You're right that improper use of shared memory is the problem. The whole point here is that a type system really can check that you're not using it improperly, in a way that might cause race conditions or deadlocks, and can make sure you're handling all the error conditions you need to handle. Please read the papers and see for yourself.