While writing the post yesterday about testing regular expressions, I realized that this problem is not really specific to regular expressions. I got a very good comment noting that testing any place that uses some kind of DSL is definitely prudent. SQL is another example.
But both of these examples are about testing the usage of an embedded language: you have two languages, yet you mostly only test the code written in the outer one. There are several reasons for this, and one of the most obvious is that our tools don't make it easy to do otherwise.
Thinking about these issues made me start thinking about how we generally test languages. Having worked on several language implementations, both new languages and implementations of existing ones, I've come to the conclusion that the whole area of testing languages is actually quite complicated, and that there are no real best practices for doing it.
First, there is a problem of terminology. Many languages have test suites that are really executable specifications of how the language should work. What's the difference? Well, when you test the language according to such a spec, you are only doing functional, black-box testing. I've looked at several of the open source language implementations, and I don't really see much usage of anything else than such language spec tests. This means that parts of the implementation can be implemented wrongly, and by some freak chance still work correctly in all the cases you have tests for, while failing in other ways.
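A minimal sketch of what such a spec test looks like. The `TinyCalc` "language" below is a made-up stand-in, not any real implementation; the point is that the assertions only touch observable behavior:

```ruby
# A toy "language": "+"-separated integer terms, e.g. "1+2+3" => 6.
# TinyCalc is a hypothetical stand-in for a real implementation.
class TinyCalc
  def eval(src)
    src.split("+").map { |t| Integer(t.strip) }.sum
  end
end

# Spec-style, black-box assertions: they say nothing about *how*
# eval works internally, only about what programs must produce.
calc = TinyCalc.new
raise "spec failed" unless calc.eval("1+2+3") == 6
raise "spec failed" unless calc.eval(" 4 + 5 ") == 9
puts "spec passed"
```

Any implementation that happens to produce the right answers passes such a spec, regardless of how it gets there.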
Unit tests for the actual implementation would help with this: you can do TDD at the unit level, and you make a conscious decision about the implementation and what it should do in each case. It still doesn't make everything clear cut and simple, but it absolutely would help. So why don't most implementations unit test their internals? I don't really know. Maybe because implementations can be extremely complicated, but that should be a reason for testing more, not less. One reason I do sympathize with is that unit tests can make larger changes quite hard. Large refactorings are one of the ways JRuby has achieved incredible performance improvements and new subsystems, and unit tests can sometimes act as inertia against these.
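By contrast, a unit test reaches into the implementation itself. Assuming a hypothetical pipeline with a separate lexer stage, a white-box test pins down what that stage must produce, so that a bug there can't be masked by a freak accident downstream:

```ruby
# Hypothetical internal component of a language implementation: a lexer
# for integers and the "+" / "*" operators.
class Lexer
  Token = Struct.new(:type, :text)

  def tokenize(src)
    src.scan(/\d+|[+*]/).map do |t|
      Token.new(t =~ /\d/ ? :number : :operator, t)
    end
  end
end

# White-box checks against the lexer alone, not the whole pipeline.
tokens = Lexer.new.tokenize("12+3*4")
raise unless tokens.map(&:type) == [:number, :operator, :number, :operator, :number]
raise unless tokens.map(&:text) == ["12", "+", "3", "*", "4"]
puts "lexer unit tests passed"
```

The trade-off mentioned above applies directly: if a refactoring merges the lexer into the parser, this test becomes an obstacle even though the language's behavior is unchanged.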
I'm totally disregarding the academic approaches here. Yeah, in some cases, for simple languages, you can actually prove that the language does what you want it to do, and for small enough implementations written in a suitable language, you can actually prove the same things about the implementation. The problem is that this approach doesn't scale.
And since a language is almost always Turing complete, you can't test it exhaustively. There is no way of testing all permutations, either manually or automatically. So what should a language spec do? The first thing that many languages do is specify that whole areas of functionality result in undefined behavior. That makes it easier. But the real problems appear when you start combining different features which can interact in different ways.
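A small sketch of why combinations are where the trouble lives. The naive evaluator below (an illustrative stand-in, not real code from any implementation) evaluates strictly left to right; it passes every single-operator test, and only differential testing against an oracle (here, Ruby's own arithmetic) exposes it once `+` and `*` are mixed:

```ruby
# Naive left-to-right evaluator: correct for "1+2" and "3*4" in
# isolation, wrong about precedence once the features combine.
def naive_eval(src)
  tokens = src.scan(/\d+|[+*]/)
  result = Integer(tokens.shift)
  until tokens.empty?
    op, rhs = tokens.shift(2)
    result = op == "+" ? result + Integer(rhs) : result * Integer(rhs)
  end
  result
end

# Differential testing: enumerate combinations and compare against
# Ruby's own eval as a reference implementation.
mismatches = []
(1..3).to_a.product(["+", "*"], (1..3).to_a, ["+", "*"], (1..3).to_a) do |a, o1, b, o2, c|
  src = "#{a} #{o1} #{b} #{o2} #{c}"
  mismatches << src if naive_eval(src) != eval(src)
end
puts "combination bugs found: #{mismatches.size}"
```

Every expression with a single operator agrees with the oracle; only the combined `a + b * c` shapes disagree, which is exactly the class of bug a feature-by-feature spec never sees.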
At the end of the day, I have no idea how to actually do this well. I would like to know, though: how should I test the implementation, and how should I write an executable language specification? And these questions don't even touch on testing the core libraries. Many of the same problems apply there, but it gets even more complicated.
10 comments:
I think problems like this have to be a combination of ad hoc testing, formal testing, and formal proof. Not that I am an expert in the formal aspects. I first started looking into this as a compiler and AI "hack" -- in 1984 I was in Austin for the AAAI conference but attended the Lisp and Functional Programming conference which was running in parallel. Will Clinger had a great paper on the Scheme 311 compiler...
http://portal.acm.org/citation.cfm?id=802052
I intended to point out with that reference, regarding the academic skepticism that I usually reserve for my own use, that the interesting thing about Scheme 311 and this compiler is that the compiler preceded the attempts at a proof, and that the compiler is practical: it was used at Indiana University in education and also became the foundation of Clinger's MacScheme implementation, which was very practical and available for the original MacOS for years. I developed real Mac applications with it.
You wrote: "I'm totally disregarding the academic approaches here... The problem is that this approach doesn't scale."
You heard that Ruby doesn't scale. You know why the scaling argument is often bogus.
The fact is that mathematics scales extremely well. Mathematics is capable of proving things like Fermat's last theorem and the classification of finite simple groups. Mathematics is an abstraction without leaks.
I believe formal proof of correctness for language implementations will become more common. Once you have a proof, you don't really need testing. What's needed are good tools and background theory of semantics.
I think you will agree that C is a practical language. INRIA developed an ANSI C compiler that compiles to PowerPC assembly with performance comparable to GCC at -O1, and they proved it to be correct. Visit http://compcert.inria.fr/ for details.
sanxiyn: I hope you know that when I said scale, I didn't mean it in the "Ruby doesn't scale" way. People have tried to use formal verification for programming language environments for ages, and almost all of those attempts have failed.
And it's interesting that you pose C as a counterexample, since C is basically just assembler with a nicer syntax. It's an extremely simple language. And the problem with formal verification isn't in the destination language, it's in the source language. So yes, if their system were written in Haskell, I would expect a formal verification process to succeed. But in most cases you want your language to be self-hosting, which means you are limited to languages that are formally verifiable in a simple way. C is not such a language, since you can do things like self-modifying code.
You wrote: "People have tried to use formal verification for programming language environments for ages, and almost all of them have failed."
I pointed out a success case. So did Patrick. And formal verification tools have improved a lot.
On INRIA CompCert: yes, C-the-language is a glorified assembler, but C-the-implementation, especially optimising one, is not trivial. CompCert proved things like constant propagation and common subexpression elimination to be sound.
Other successful efforts include http://afp.sourceforge.net/entries/Jinja.shtml and http://afp.sourceforge.net/entries/CoreC++.shtml for Java and C++ respectively. The former proved things like the soundness of the bytecode verifier of a JVM-like VM.
Having a verified compiler means having a detailed operational semantics for the language. So based on the CompCert model of C, you can reason about C code, and eventually one could prove the correctness of GCC and fix all ICE (internal compiler error) bugs while doing so. I remain hopeful.
Sanxiyn: See, you can give me examples, but what I said was that "almost all" have failed. So examples aren't really the point. You need to give an argument for why it will work in most cases.
And having a detailed operational semantics description of a language will not tell you everything, because you can write a program in C that changes itself in such a recursive way that no mathematical description can actually prove it in finite time. I would assume this is a special case of the halting problem.
So what I'm saying is that formal verification is really good in certain domains where it's needed, BUT it does put quite severe limitations on what you can do in a language, which is not going to make it possible to use for most general purpose languages.
... Or, in other words, why don't we have formal verification processes for Java programs if it's so good?
"Why don't we have formal verification processes for Java programs if it's so good?"
It's a matter of time, I'd say. Take a look at http://krakatoa.lri.fr/ , a verification tool for Java programs, for something that works *right now*.
Re: halting problem and unverifiable code. You avoid such code, just like you avoid type errors.
In my opinion, Dijkstra said it right in The Humble Programmer: "No loop should be written down without providing a proof for termination".
Does the inability to write a loop without providing a proof of termination put "severe limitations on what you can do in a language"?
Even if it does, you could punt, just like you can punt the type system by casting. You could state your assumption that a certain loop terminates, and go on to verify the rest of the program from that assumption. Eradicating such assumptions could be an ongoing project, just like achieving 100% coverage with unit tests.
Interesting post. I'm taking exactly this TDD approach with my new pet language, nil.googlecode.com. I write some example Nil code, and test that the standard out is correct. I don't add any implementation to the language unless there's a failing test. Even better, I added an assert() very early on, and a built-in test fixture, so most of my code examples are self-contained.
One idea I'm thinking of is to make all of the documentation of the language executable, or format all of the tests into a readable documentation form. Do any of your favorite languages have a good approach for that? I've been tinkering with Scala lately, but Paul Hammant told me that he doesn't like it because the implementation was academic and not well tested.
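The golden-output style described in this comment can be sketched roughly like this. The harness and file names are hypothetical, and Ruby itself stands in for the language under test: each example program is run through the interpreter and its captured stdout is compared against an expected transcript:

```ruby
require "open3"
require "rbconfig"
require "tmpdir"

# Run one example program through an interpreter binary and compare
# its captured stdout with the expected transcript.
def golden_test(interpreter, source_file, expected_output)
  stdout, _status = Open3.capture2(interpreter, source_file)
  stdout == expected_output
end

# Self-contained demo: Ruby plays the role of the language under test.
Dir.mktmpdir do |dir|
  src = File.join(dir, "hello_example.rb")
  File.write(src, 'puts "hello"')
  puts golden_test(RbConfig.ruby, src, "hello\n") ? "golden test passed" : "golden test failed"
end
```

In a real setup the expected transcripts would live next to the example programs, which also makes them usable as documentation, in the spirit of the question above.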
Alex: Well, as it happens, my language Ioke has taken this approach and it works really well. It's TDD from scratch, with a testing framework built into the language, and the documentation tools extract all the spec information too.
Take a look at http://ioke.org/dok/index.html where you can see the current documentation. It works really well.