This is a long and hopefully insightful post about the power of programming languages, particularly Python and Lisp. Most of it is inspired by the Theory of Programming Languages
class I'm taking this semester, which is one of the toughest classes I've taken at UT. The reason I'm finding it tough is that while a lot of what we learn in grad school here is part of the undergrad curriculum at IIT, the theory of programming languages is one significant omission.
Conceptually, the lambda calculus
is very powerful stuff. The entire theory is way too complex to describe here, but let me present one nugget. Consider booleans. Anything you can do with a boolean can be done with a branch (a JNE instruction, an if-then-else, however you want to think of it). For instance, "a and b
" is the same as "if a then b else false
". So in the lambda calculus, a boolean is
an abstraction which will branch to its first or its second argument. That's the only thing you can do with a boolean, and there are no separate branch statements. Booleans and branches are the same thing. (For those who know what a lambda expression is, "true = lambda a. lambda b. a
" and "false = lambda a. lambda b. b".)
This takes some getting used to. Unfortunately, when you're learning the lambda calculus you have to get used to having to make big shifts in your whole world view every five minutes. Like I said, conceptually powerful stuff.
But does the lambda calculus, in its avatar as functional programming languages, actually give you more expressive power than procedural or object oriented programming?
The pedantic answer would be that all functional programs can be converted into procedural programs (both models are Turing complete), but that's clearly not what I'm talking about. We don't write machine code even though we could achieve the same things by doing so.
Most academic types strongly believe that the answer to the above question is affirmative. Apparently much of the love affair that academia has with functional programming goes back to a 1984 paper by John Hughes, "Why Functional Programming Matters
." It is rather weak, at least from the modern reader's point of view, for such a seminal paper: it claims that functional programming rules all because it has higher order functions and lazy evaluation. Higher order functions are functions that take functions as arguments. They're somewhat like function pointers, only a lot less hairier. Lazy evaluation is roughly like saying that all functions behave like Unix pipes, but somewhat more powerful.
Even though this paper was written 20 years ago, I'm still surprised that these features were thought to be a big deal, because they are readily available in object oriented languages. What you can do with higher order functions you can do with Interfaces, and Iterator objects are virtually the same as lazy evaluation. I'm not sure what the exact state of the OOP world was back then, but I know that OO languages existed as far back as 1967 (Simula). In any case, we can say with certainty that as of today we have John Hughes covered. I am in fact (co-)writing a class paper right now which aims to show that everything in the Hughes paper can be done just as easily in Java.
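To make the iterator claim concrete, here is the same lazy infinite sequence written OO-style as an iterator object (a hypothetical Naturals class of my own devising, using Python's iterator protocol):

```python
import itertools

class Naturals:
    """An iterator object: the OO counterpart of a lazy list.
    Each call to __next__ computes exactly one more element,
    so nothing is evaluated until somebody asks for it."""
    def __init__(self):
        self.n = 0

    def __iter__(self):
        return self

    def __next__(self):
        value = self.n
        self.n += 1
        return value

# Consume only as much of the "infinite" sequence as needed.
assert list(itertools.islice(Naturals(), 5)) == [0, 1, 2, 3, 4]
```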
But there are other, non-academic proponents of functional programming who appear to have other reasons. The foremost of these is probably Paul Graham
. In particular, his essay Beating the Averages
on how his startup that used Lisp for the backend totally kicked everyone else's ass is highly recommended reading
for any programmer. This is my favorite part:
Languages fall along a continuum of abstractness, from the most powerful all the way down to machine languages, which themselves vary in power... to explain this point I'm going to use a hypothetical language called Blub. Blub falls right in the middle of the abstractness continuum. It is not the most powerful language, but it is more powerful than Cobol or machine language.
As long as our hypothetical Blub programmer is looking down the power continuum, he knows he's looking down. Languages less powerful than Blub are obviously less powerful, because they're missing some feature he's used to. But when our hypothetical Blub programmer looks in the other direction, up the power continuum, he doesn't realize he's looking up. What he sees are merely weird languages. He probably considers them about equivalent in power to Blub, but with all this other hairy stuff thrown in as well. Blub is good enough for him, because he thinks in Blub.
When we switch to the point of view of a programmer using any of the languages higher up the power continuum, however, we find that he in turn looks down upon Blub. How can you get anything done in Blub? It doesn't even have [feature] y.
By induction, the only programmers in a position to see all the differences in power between the various languages are those who understand the most powerful one. (This is probably what Eric Raymond meant about Lisp making you a better programmer.) You can't trust the opinions of the others, because of the Blub paradox: they're satisfied with whatever language they happen to use, because it dictates the way they think about programs.
Extremely well written, and I completely agree with it. I believe I am at the (n-1)th
level of this hierarchy: being fluent in Python, I tend to wonder how people get anything done with less powerful languages, but being not very fluent in Lisp, I also tend to wonder what's so great about it.
The killer feature of Lisp, according to Paul Graham (and many others), is macros (in Lisp, a macro is a function that processes code). Keeping data and code separate is something that is drilled into every beginning programmer's head, so you might wonder why macros are a good thing. This is because Lisp programs have an extremely simple tree structure (the same as all Lisp data), which makes processing code not only feasible, but very easy and very beneficial. While I am convinced of the need to experience this first hand by writing a large Lisp program, I am not convinced that Python doesn't have macro-equivalent features. For example, someone who moved from Lisp to Python
says you don't miss macros that much because Python has eval, and operator overloading, and regular expression parsing.

Notes.
The need for the tree structure appears to be why Lisp programs need all those parentheses. While that takes a little more getting used to than whitespace indentation in Python (which I became comfortable with in about 15 minutes, and others have said the same thing), it's still not as bad as I thought it would be. In particular, with a smart editor and some experience, you can make the parentheses work to your advantage.
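For what it's worth, present-day Python also exposes programs as trees, through the standard-library ast module. Here's a toy "macro" of my own invention that rewrites every addition into a multiplication; it's far clumsier than the Lisp equivalent, but the code-as-data ingredients are there:

```python
import ast

# Parse an expression into its tree form (code as data).
tree = ast.parse("x + 2 * y", mode="eval")

class SwapAdd(ast.NodeTransformer):
    """A toy code transformer: turn every + into a *."""
    def visit_BinOp(self, node):
        self.generic_visit(node)  # transform children first
        if isinstance(node.op, ast.Add):
            node.op = ast.Mult()
        return node

new_tree = ast.fix_missing_locations(SwapAdd().visit(tree))
code = compile(new_tree, "<toy-macro>", "eval")

# "x + 2 * y" became "x * (2 * y)": 3 * (2 * 4) = 24
assert eval(code, {"x": 3, "y": 4}) == 24
```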
A lot of people appear to think that removing lambda means that Python won't be functional anymore. That's dumb: lambda is just syntactic sugar; the real functional programming support in Python is the ability to pass functions around.
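The point is easy to see in a few lines (names below are mine): a lambda and a named def are interchangeable, and what actually matters is that either can be passed as a value.

```python
# lambda is just a nameless def; either kind of function
# can be handed to a higher order function as a value.
def add_one(n):
    return n + 1

also_add_one = lambda n: n + 1

def apply_twice(f, x):
    return f(f(x))

assert apply_twice(add_one, 0) == 2
assert apply_twice(also_add_one, 0) == 2
```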
Finally, I was surprised to find that Lisp is way
faster than Python (at least when The Great Computer Language Shootout was conducted). This is probably because Lisp implementations are more mature and allow optional type declarations that cut out dynamic dispatch. Fortunately, Python has made great progress in this area in the last few years.