## The New Luddites, Part 3

The New Luddites have stuck their heads up again. This time it's the Economist whining about information overload in mobile GPS units. The correspondent is, he assures us, a longtime lover and user of maps, but somehow all those curvy lines on the display just overwhelm him. He compensates by dimming the display and just listening to the spoken directions. The barely-below-the-surface theme is that maps have no business being digitized anyway. After all, people have been getting along with paper maps for centuries.

Is this a problem anyone else has? The GPS display seems perfectly clear and informative to me, even without a lifetime love affair with maps. I just don't get it. Every once in a while you glance at the display to see how far it is to your next waypoint. Actually, you don't even have to do that because the GPS unit will warn you in plenty of time.

If the GPS unit overwhelms you, turn the damn thing off and use paper maps or your memory or whatever. Just leave the rest of us in peace. Please.

## Rewire Redux

A few posts ago I ~~ranted about~~ gave a carefully reasoned analysis of an article about how computers were destroying our brains or something. Now Cory Doctorow makes the same points in a slightly different context. He even mentions the old “calculators are destroying our ability to do math” silliness and explains why that argument is not just wrong but actually misses the point: by liberating us from spending years developing a facility for doing arithmetic in our heads, calculators free us to think about mathematics at a more abstract and useful level.

The link takes you to an 11-minute video interview with Doctorow that is well worth your time. Doctorow's main thesis is that technology frees us from worrying about many of the mechanical details of writing and mathematics and allows us to concentrate on their more important aspects.

## Another Slime for Vim

Over at Jonathan's Techno-tales, Jonathan Palardy writes about using Vim for Lisp programming. His idea is to emulate some of the Slime functionality by using GNU Screen to help communicate between the REPL and the Vim editing buffer. He's written a Vim plugin to automate the process, but you don't need it to try the technique out.

It's a nice hack but doesn't have anywhere near the functionality of Slime. Still, if you're a committed Vim user and program in Lisp this may be a good solution for you. The same technique should work with any language that uses a REPL-like mechanism.

## The CRU Files

This is not a political blog so I have nothing to say here about the ethics of the hackers who broke into the CRU network, downloaded 60–70 megs of files and emails, and posted them on the Internet. I also have nothing to say about whether Anthropogenic Global Warming is real or not because (1) I don't have the necessary scientific knowledge to make an informed decision, and (2) again, this is not a political blog and the AGW debate has, it seems to me, devolved into a political question. Finally, we should be clear that the files and emails have not been verified as authentic, although several have apparently been corroborated and, to my knowledge, no one has disputed the authenticity of any of them.

What I want to discuss is the shocking betrayal of science and the search for truth that these emails reveal. If the emails are genuine, then their authors have forfeited any right to call themselves “scientists.” Rather, they are revealed as agenda-driven politicos, and particularly nasty and small-minded ones at that.

The details are all over the Internet (see here or here for example) so I won't bother rehashing them but it's all there: gloating about the death of a rival, cooking the books on their data and then refusing to release raw data so that it could be verified, conspiring to evade (at least) the spirit of the UK Freedom of Information act, deleting incriminating emails, plotting to prevent skeptics from getting published, and more.

Again, it's possible that some of the more incriminating emails are fabrications (although no one is saying so), and they may well be taken out of context. But if they turn out to be genuine, then the actions of the principals are shameful: they should be removed from their positions and no longer treated as members of the scientific community.

## Help! Help! My brain is being rewired

I usually ignore twaddle such as this, but really, enough is enough. The new Luddites are at it again wailing about how spell checkers and contact lists are destroying our ability to remember and process details. I'm old enough to remember the exact same arguments being used against calculators: “Oh my gosh, pretty soon people won't be able to add two numbers.”

And you know what else? Some college students actually text each other during lectures. One wonders why their minds are wandering. There's even a line about “BART riders reaching furtively into their pockets thinking their iPhone has vibrated—even if it hasn't.” The horror, the horror!

The article goes on and on like this and so could I, but why bother? We've heard it all before and in a few years no one will remember or care about these doomsday predictions. Go read the article if you must, but it will probably be a better use of your time to check out your favorite Twitter feed.

UPDATE: Sadly, this sort of nonsense is nothing new.

## Compiling DSLs

Recently I had occasion to revisit this post from Peter Seibel's blog about compiling queries in Lisp. It's a cliche in the Lisp world that “if you're using eval, you're doing something wrong.” A more helpful formulation is, “if you're using eval, you should probably rewrite your code to use macros instead.” This is generally good advice—see here for some contrary examples—but it does not tell the whole story; that's where Seibel's post comes in.

Seibel considers the problem of a DSL for performing simple keyword queries on a block of text. He gives the example

(or (and "space" "mission")
    (and (or "moon" "lunar") "landing"))


which tests whether the text contains the words space and mission, or moon and landing, or lunar and landing. It's pretty easy to translate the above into a Lisp expression such as

(or (and (word-in-string-p "space" text)
         (word-in-string-p "mission" text))
    (and (or (word-in-string-p "moon" text)
             (word-in-string-p "lunar" text))
         (word-in-string-p "landing" text)))


where word-in-string-p returns t if the first argument is a substring of the second and nil otherwise.

The next problem is to evaluate this expression. The naive way to do this is to pass it to eval, and you know what? The earth will still be orbiting the sun if you do. But Seibel goes on to show us a better way and to illustrate why the prejudice against eval is usually well-founded.

Instead of invoking eval, we turn the above expression into a function

(lambda (#:G1)
  (or (and (word-in-string-p "space" #:G1)
           (word-in-string-p "mission" #:G1))
      (and (or (word-in-string-p "moon" #:G1)
               (word-in-string-p "lunar" #:G1))
           (word-in-string-p "landing" #:G1))))

where the #:G1 is a gensym'd symbol. Now an immediate advantage of this is that we have a function that we can pass to some other function or otherwise use in any of the ways that Lisp's first-class functions can be used. This means that it can be called over and over again on a series of text blocks (the original problem is posed in terms of a database query). The key here is that we can take this function and compile it using compile-expression so that (for systems that support native compilation) we have a fast implementation compiled to machine language.

Notice that we're not talking about much more work than using eval. We merely have to wrap the code in the lambda form and then call compile-expression rather than eval. Seibel's got all the code in the post I linked above, so you should go take a look at it.
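For readers without a Lisp handy, the closure-compiling idea translates to any language with first-class functions. Here's a rough Python sketch of the same technique (the tuple-based query syntax and the `word_in_string` helper are my stand-ins for illustration, not Seibel's code): it walks the query expression once and returns a predicate that can then be applied to any number of text blocks.

```python
def word_in_string(word, text):
    # crude whole-word membership test; a real version would tokenize properly
    return word in text.split()

def compile_query(expr):
    """Recursively compile a query expression into a predicate function."""
    if isinstance(expr, str):
        # a bare word compiles to a containment test
        return lambda text, w=expr: word_in_string(w, text)
    op, *args = expr
    preds = [compile_query(a) for a in args]
    if op == 'and':
        return lambda text: all(p(text) for p in preds)
    if op == 'or':
        return lambda text: any(p(text) for p in preds)
    raise ValueError(f"unknown operator: {op}")

query = compile_query(('or', ('and', "space", "mission"),
                             ('and', ('or', "moon", "lunar"), "landing")))
print(query("the space mission launched"))   # True
print(query("moon rocks"))                   # False
```

A real Lisp gets one step further, of course: the generated lambda can be handed to the native compiler.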

## MobileOrg!

The remote Org mode application for the iPhone that I've written about previously is finally available in the App Store. I've downloaded the app to my iPhone but still have to set up the WebDAV server for syncing data to and from the phone. I'll write more when I have things fully configured. In the meantime, get over to the MobileOrg website and marvel at Org mode on the go.

## Tail Calls

I saw this post from Tim Bray at ongoing the other day and thought it was pretty silly. I was going to comment on it but decided that Bray is a serious and knowledgeable guy who is entitled to his opinion even if anyone with functional programming experience might find it snort-worthy.

Along comes Ezra Cooper at Ezra's Research with a response to Bray. It's too good to pass up and says everything that I wanted to say only better. He also points out that “tail-call optimization” is a misnomer. I liked that because the tail-call optimization nonsense has always annoyed me.

Cooper's blog has a lot of juicy content, so it's worth taking a look at even if you have no interest in the tail-call contretemps.

## The Hidden Costs of C++

Over at Rachel's Lab Notes, Rachel Blum has a nice post about the hidden costs of C++ that pretty accurately captures one of my main complaints about the language. When I code in C, I have a fairly accurate model in my head of what the generated code is going to look like, and I can even evaluate the effectiveness of alternative expressions based on my understanding of what code (approximately) the compiler will generate. With C++, not so much. As Rachel points out, it's easy to imagine the code that

a = func( b, c );


will generate in C just by looking at the statement. With C++ we really have no idea. Is func a simple function? A class member function? A constructor? What about b and c? Will they invoke copy constructors? These and other questions cannot be answered without detailed knowledge of the rest of the code, and even then it's not always clear to me what the resulting object code will look like. This makes it hard to write efficient C++. For me, at least, dealing with that takes more effort than dealing with the code complexity of C that C++ claims to ameliorate.

## MobileOrg

A little while ago I posted about an iPhone app for Org mode. At that point, the project wasn't even in beta mode but now they have a website and are projecting that they will submit the app to Apple by October 18. This really closes the circle for me as far as Org mode is concerned: now I can take my notes and TODOs with me, update, and add to them and then sync everything up when I get back to my desk.

## Virtual Keyboards

Mike Elgan has an interesting post about the coming age of virtual keyboards. Elgan predicts that virtual keyboards will, in the near future, replace the traditional mechanical keyboard. My iPhone has a virtual keyboard, of course, and it's hard to imagine using one efficiently. To be sure, the iPhone keyboard is too small for touch typing, but even with a full-size keyboard I can't visualize myself using an iPhone-like virtual keyboard. The problems are the lack of tactile feedback and the difficulty of keeping fingers resting on the home row from registering key presses.

Elgan says that these problems will be solved with haptics: the user will “feel” the keys and will receive tactile feedback when a key is “pressed.” I like the idea of getting rid of one more mechanical component but I'm a bit skeptical about ease of use issues. What do you think?

## A Scheme Bookshelf

Over at Programming Musings jao has an old post entitled A Scheme bookshelf that I just stumbled across. If you're looking for some great books on Scheme, I can't recommend his list enough. Go take a look.

## Lisp Outside the Box

Nick Levine has posted the first batch of sample chapters (Chapters 13–16) from his book-in-progress Lisp Outside the Box, which is being published by O'Reilly. My understanding of the book's purpose is that it's intended to be an introduction to Lisp crafted specifically to show that Lisp is applicable to a wide range of problems and that extensions and external libraries extend its application domain outside of what's provided “inside the box.”

## Two Schemes

The Scheme Steering Committee recently released a position statement in which they recommend splitting Scheme into "separate but compatible languages": a small Scheme that is basically a modernized R5RS, and a large Scheme based on R6RS but with, as they put it, a "happier outcome."

The idea is that the small Scheme would be aimed at "educators, casual implementors, researchers, embedded languages, and '50-page' purists," while the large Scheme would be aimed at programmers and implementors of industrial-strength Schemes. As a practical matter this would probably mean that large Scheme would have small Scheme as a core and extend it with a larger standard library and perhaps some extra core features to support things like a module system. The small and large group charters specifically state that the languages be compatible and that small Scheme be a subset of large Scheme.

I think this is a good idea. By and large I'm happy with R5RS, but it would be nice to have a standard library for things like networking. On the other hand, I recognize that industrial use of Scheme requires things like a standard module system and perhaps (but only perhaps) some object oriented features like CLOS. In any event, the goal is to make Scheme portable among implementations, something that is dramatically missing currently.

Scheme is a great language and we should support changes that will increase its use and acceptance, but not at the cost of losing the features and principles that make it such a great language. Specifically, I would oppose any PLT-like changes that remove functionality in the service of protecting programmers from themselves. To paraphrase dmr, if you want Pascal, you know where to find it. Scheme is a Lisp dialect and Lisp is all about power, not about artificial restraints.

## Speaking of JSON

Over at the YAHOO! Interface Blog they have an interesting talk on JSON by Douglas Crockford, the inventor (or, as he puts it, discoverer) of JSON. He recounts the history of JSON, compares it to some of its competitors, including XML, and explains some of the wrinkles in the notation, such as why keys are quoted.

It's an interesting talk and well worth 50 minutes of your time.

## Scheme on Android

Previously, I wrote about developing in Scheme on the iPhone. Now, in a post on the Google Research Blog, Bill Magnuson, Hal Abelson, and Mark Friedman write about using Scheme to develop on Android. Their work is part of the App Inventor for Android project, which envisions using Android phones in introductory programming courses for majors and non-majors alike.

Students would use a visual programming language, similar to Scratch, that provides a drag and drop interface in which students assemble applications for the phone. The visual language is compiled to s-expressions for a DSL written in Scheme macros. They are using the Kawa framework as their Scheme engine.

There aren't a lot of details in the post, but we can hope that developers will eventually get access to the underlying Scheme. This project is tremendously cheering to those of us who have been mourning the passing of 6.001.

## Data-interchange in Scheme

In my last post I talked about data-interchange using JSON. I remarked that the JSON format maps readily onto a very similar Lisp representation and that the Lisp version had the advantage of being parsed automatically by the Lisp (or Scheme) reader. In this post, I'd like to follow up on that by discussing the paper Experiences with Scheme in an Electro-Optics Laboratory by Richard Cleis and Keith Wilson. I forget when I first read this paper, but since then I've reread it several times and I learn something new each time I do.

The authors work at the Starfire Optical Range, an Air Force Laboratory doing research on controlling the distortion in optical telescopes caused by air turbulence. The laboratory has five telescope systems, which are run by about a dozen computers that provide tracking calculations, motion control, gimbal control, and command and status functions. Starfire uses Scheme in a variety of ways: as an extension language by embedding it in the legacy C application that provides motion control; as a configuration language for the telescopes and experiments; and as a bridge to proprietary hardware controllers by extending Scheme with the vendors' libraries. But what I want to talk about is their use of Scheme for data-interchange. All communications between the telescopes, their subsystems, and the computers are conducted with s-expressions that are evaluated by Scheme (or in one case by a C-based s-expression library).

This is more subtle than merely using s-expressions as a substitute for JSON. Every request for data or command to perform some function is sent as an s-expression. These messages have the form

(list 'callback (function1 parameter1...) ...)


where the optional callback function is defined by the sender and intended to handle the remote system's response. Each of the functions and their parameters (function1, parameter1, etc.) are commands and data to the remote system.

This seems a little strange until you realize that the remote system merely evaluates the message and returns the result using code like the stylized version below—see the paper for the actual, slightly more complex code.

(define handle-request-and-reply
  (lambda (udp-socket)
    (let* ((buffer (make-string max-buffer-size))
           (n (read-udp-socket udp-socket buffer))) ; read the request, note its length
      (write-udp-socket
       udp-socket
       (format #f "~s"
               (eval (with-input-from-string (substring buffer 0 n) read)))))))


Now consider what happens when the remote system evaluates the message

(list 'callback (do-something arg1 arg2))


First the arguments to list are evaluated. The callback function is quoted so the remote system merely treats it as a symbol. The second argument is a call to a local (to the remote system) function called do-something. When the argument (do-something arg1 arg2) is evaluated, do-something is called and returns a result, say the-result. Finally, the function list is called with the arguments callback and the-result so the result of the eval is

(callback the-result)


and this is returned to the sender on the same socket that received the original message. When the sender receives this message it is read by a routine similar to the one above that evaluates the message but does not send a response. The result of that evaluation is to call callback with the argument the-result.

This is truly a beautiful thing. The function handle-request-and-reply takes care of all communication with the remote system's clients. Notice that it is completely general and will handle any legal message a client sends. There is no need to write special code for each type of message, merely a function to do whatever the client is requesting. The actual code is wrapped in an error handler so that even malformed messages are caught and handled gracefully with an error message returned to the sender.
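To make the pattern concrete, here's a toy Python analogue of that evaluate-and-reply loop (parse_sexp, COMMANDS, and the do-something command are names invented for illustration; the real system is Scheme and also handles sockets and malformed input): the server parses the incoming s-expression, evaluates it against a table of permitted commands, and sends the resulting reply form back as text.

```python
def parse_sexp(src):
    """Parse a simple s-expression string into nested Python lists."""
    tokens = src.replace('(', ' ( ').replace(')', ' ) ').split()
    def read(pos):
        if tokens[pos] == '(':
            items, pos = [], pos + 1
            while tokens[pos] != ')':
                item, pos = read(pos)
                items.append(item)
            return items, pos + 1
        tok = tokens[pos]
        try:
            return int(tok), pos + 1
        except ValueError:
            return tok, pos + 1
    return read(0)[0]

# the table of commands the "remote system" is willing to run
COMMANDS = {'do-something': lambda x, y: x + y}

def evaluate(expr):
    """Evaluate a parsed request, as the Scheme server evals a message."""
    if isinstance(expr, int):
        return expr
    if isinstance(expr, str):
        return expr.lstrip("'")       # 'callback evaluates to the symbol callback
    head, *args = expr
    vals = [evaluate(a) for a in args]
    if head == 'list':                # (list ...) builds the reply form
        return vals
    return COMMANDS[head](*vals)      # anything else is a command invocation

def handle_request(src):
    """Parse, evaluate, and format a reply: one handler for every message."""
    def fmt(v):
        if isinstance(v, list):
            return '(' + ' '.join(fmt(x) for x in v) + ')'
        return str(v)
    return fmt(evaluate(parse_sexp(src)))

print(handle_request("(list 'callback (do-something 1 2))"))   # (callback 3)
```

Even in this stripped-down form, one generic handler serves every legal message; only the COMMANDS table knows anything domain-specific.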

This is, I submit, a better solution in many cases than something like JSON or XML. Just a few lines of code completely handles data-interchange, message parsing, and communication, and it shows the power inherent in the humble s-expression.

## JSON

ESR recently put up a paper on his redesign of the request-response protocol for gpsd. The gpsd project is interesting in its own right, but what interests me is his use of JSON to serialize command and response data to the back end of the daemon.

Like many hackers, I'm indifferent to Java. I've never had the urge to learn it and I've been fortunate that my employment has never required its use. I have noticed, however, that Java-related things always seem to start with a capital J, so when I saw references to JSON I assumed it was yet another Java something-or-other and passed it by without further investigation. Like most forms of prejudice, of course, this was merely ignorance. While it's true that the J does stand for Java, JSON actually stands for JavaScript Object Notation, and it's a really neat way of serializing complex data for transfer between applications.

JSON has two types of data structures:

• arrays, in which the values are comma separated and enclosed in square brackets, and
• objects, which are serialized representations of what Lispers call hash tables or alists and Pythonistas call dictionaries. An object is a comma separated list of name/value pairs enclosed in curly braces. The name/value pairs have the form "name" : value.

These structures are fully recursive and each can contain instances of itself or the other. Names are always strings, but object and array values can be strings, numbers, objects, arrays, true, false, or null. The exact syntax, complete with railroad diagrams, is given on the JSON.org site linked above; a formal description is given in RFC 4627. As an example, here is a hypothetical object representing a family:

{"father":{"name":"John Smith", "age":45, "employer":"YoYodyne, inc."},
 "mother":{"name":"Wilma Smith", "age":42},
 "children":[{"name":"William Smith", "age":15},
             {"name":"Sally Smith","age":17}]}

Notice that white space can be inserted between any pair of tokens to make the object more human-readable, but that it is not required. Also notice that unused fields in objects (but not arrays) can be omitted, as the employer field is in all but the father's object.
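Since JSON.org links to parsers for dozens of languages, reading this object is a one-liner nearly everywhere. With Python's standard json module, for instance, objects become dicts and arrays become lists:

```python
import json

# the family object from above, parsed by the standard library
family = json.loads('''
{"father":{"name":"John Smith", "age":45, "employer":"YoYodyne, inc."},
 "mother":{"name":"Wilma Smith", "age":42},
 "children":[{"name":"William Smith", "age":15},
             {"name":"Sally Smith","age":17}]}
''')

print(family["father"]["employer"])            # YoYodyne, inc.
print([c["name"] for c in family["children"]])
# omitted fields simply aren't there:
print("employer" in family["mother"])          # False
```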

What I like about this is that it's fundamentally Lispy, which shouldn't be a surprise given JavaScript's Scheme lineage. For example, the above JSON family object rendered in a typical Lisp way is shown below. Notice how similar it is to the JSON rendering.

'((father . ((name . "John Smith") (age . 45) (employer . "YoYodyne, inc.")))
  (mother . ((name . "Wilma Smith") (age . 42)))
  (children . #(((name . "William Smith") (age . 15))
                ((name . "Sally Smith") (age . 17)))))


The nice thing about this representation is that the Lisp/Scheme reader will parse it directly. True, the name/value pairs are represented as alists, but for many applications that's the appropriate choice. Common Lisp hash tables don't have an external representation that is directly readable, but we could certainly use alists as such a representation at the cost of spinning down the list and hashing the names.

None of this is to say that we should eschew JSON in favor of s-expressions, especially when doing data-interchange between applications written in non-Lisp languages or even when interchanging data between a Lisp application and, say, a C application. Rather, my point is that Lispers should feel familiar and comfortable with it.

So what's the take away on this? First, that JSON is a simple and ubiquitous (the JSON.org page has links to parsers for dozens of languages, including CL and Scheme) data-interchange format that can make interprocess communication much easier when complex data is involved. Second, I think it nicely illustrates the power of Lisp ideas and data structures, even if JSON itself is neither.

## Interview

Over at the essential emacs-fu, djcb has an interesting interview with Chong Yidong and Stefan Monnier, the current maintainers of Emacs. They talk about how they became maintainers, the Emacs 23.1 release, and their plans for the immediate future of Emacs 23. It's well worth a read.

## Developing iPhone Apps in Scheme

Over at jlongster, James Long has a series of posts on developing iPhone applications using Gambit Scheme (the posts are collected here). It's still a work in progress, but James has a couple of working examples, some FFIs to get at iPhone functionality such as touch and the accelerometer, and even a screencast on how to set up a remote debugger.

This is great work and a nice chance to try out some iPhone development without having to delve into Objective-C. The posts include step-by-step directions for setting up the environment and the source for the examples is available for viewing or download. There is also a git repository available at github.

## Emacs 23 Released

The long awaited Emacs 23.1 was released yesterday as promised. I've been running the development version on a Mac for some time without any real problems, so there's no reason not to upgrade and give it a try.

## In Search of a Problem

I don't know what to make of this. What, exactly, is the market for vim with Emacs key bindings? Vim users envious of Emacs Pinky? On the About Vimacs page, the author explains that he was an Emacs user frustrated by the lack of speed in Emacs and the necessity of learning Lisp to perform customizations. I have to say I'm not convinced. For sure it's a clever hack and does indeed scratch the itch that the author had but, really, it seems to me to be a solution in search of a problem.

Update: Fixed a typo.

## Implementing Continuations

Over at Lambda the Ultimate they're featuring the classic paper Representing Control in the Presence of First-Class Continuations by Dybvig and Bruggeman. I remember reading this paper years ago, at a time when I was still wrestling with the notion of continuations and how they worked. While reading the paper, everything suddenly snapped into place and it all made sense. If you're a Scheme programmer and you haven't read this paper yet, you should. You'll be glad you did.

## Nekthuth

Frank Duncan has been working on Nekthuth, a solution to the problem of programming in Lisp using the vim editor that I complained about previously. So far Nekthuth only works with SBCL and it lacks most of the features in SLIME, but it is a start.

Of course, we've heard the promise of a solution before only to be disappointed, so I'm still glad I went through the pain of switching to Emacs. Unfortunately, the architecture of vim makes it difficult to build a SLIME-like solution, so Frank's achievement is all the more striking. If you absolutely, positively can't give up vim but need to program in Lisp, you should give it a try.

## Continuations in Scheme

Here's a nice elementary explanation of continuations from Vijay Matthew over at (λ x.xx) (λ x.xx).

## Stop Iterating Down Lists

Over at Abstract Heresies Joe Marshall has two posts that make the point that we shouldn't write code to iterate down a list. We should, instead, use higher order functions such as map or fold. Thus, if we want to sum the squares of a list of numbers, we shouldn't explicitly iterate down the list like this

(define sum-of-squares
  (lambda (numbers)
    (let sum ((l numbers) (sos 0))
      (if (null? l)
          sos
          (sum (cdr l) (+ sos (* (car l) (car l))))))))


but should write something like

(define sum-squares
  (lambda (numbers)
    (fold (lambda (num sum) (+ sum (* num num)))
          0
          numbers)))


It's a good point and one that we tend to forget no matter how fervently we worship at the Church of SICP.
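The same shift reads naturally in most languages with first-class functions. In Python, for instance, functools.reduce plays the role of fold:

```python
from functools import reduce

def sum_of_squares(numbers):
    # fold the accumulation step instead of writing the loop by hand
    return reduce(lambda sos, n: sos + n * n, numbers, 0)

print(sum_of_squares([1, 2, 3]))   # 14
```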

## Speaking of the Fibonacci Sequence

Over at Programming Praxis, a recent problem involved the continued fraction

1 + 1/(1 + 1/(1 + 1/(1 + …)))

We can write this as the recurrence relation

G(0) = 1

G(n+1) = 1 + 1/G(n)

The problem is to find G(200). It's not hard to see that the G(n) converge to some number G, and therefore that G = 1 + 1/G, so G is the golden ratio.

More interesting, though, is that G(n) = Fib(n+1)/Fib(n), where Fib(n) is the nth Fibonacci number starting from 1 (that is, the Fibonacci sequence 1, 1, 2, 3, 5, …). Exercise 1.19 of SICP develops a fast method of calculating Fib(n+1) and Fib(n) at the same time in O(log n) time (it's very much like the fast exponentiation routines). This gives us an elegant and fast solution to the problem:

(define golden-ratio
  (lambda (n)
    (let iter ((n n) (p 0) (q 1) (a 1) (b 1))
      (cond ((= n 0) (/ a b))
            ((even? n) (iter (/ n 2)
                             (+ (* p p) (* q q))
                             (+ (* 2 p q) (* q q))
                             a b))
            (else (iter (- n 1) p q
                        (+ (* b q) (* a q) (* a p))
                        (+ (* b p) (* a q))))))))

I like this problem because it's a chance to use the fast Fibonacci calculator from SICP, an opportunity that doesn't come up often.
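As a sanity check, here's the same p,q-squaring iteration transliterated into Python (my translation, not code from Programming Praxis), using Fraction for exact rational arithmetic:

```python
from fractions import Fraction

def golden_ratio(n):
    # SICP exercise 1.19: squaring the (p, q) transform halves n,
    # so the ratio is computed in O(log n) arithmetic steps
    p, q, a, b = 0, 1, 1, 1
    while n > 0:
        if n % 2 == 0:
            p, q, n = p * p + q * q, 2 * p * q + q * q, n // 2
        else:
            a, b, n = b * q + a * q + a * p, b * p + a * q, n - 1
    return Fraction(a, b)

print(golden_ratio(200))         # an exact ratio of two huge Fibonacci numbers
print(float(golden_ratio(200)))  # 1.618033988749895
```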

## Fibonacci Numbers as a Dynamic Programming Problem

Early in our education as Lisp programmers we all learn the naturalness and power of recursion. We also learn that it can sometimes be the spectacularly wrong way of doing things. The canonical example is implementing the Fibonacci function recursively.

(defun fib (n)
  (if (<= n 1)
      n
      (+ (fib (- n 1)) (fib (- n 2)))))


This is a bad implementation because it repeatedly recalculates the same Fibonacci values, resulting in an exponential running time and an O(n) space requirement for the recursion stack. But if we (slightly) rewrite the definition as a recurrence relation

F(0) = 0
F(1) = 1
F(n) = F(n-1) + F(n-2)

we notice that we have the makings of a classical Dynamic Programming solution: a recurrence relation that specifies the solution of the problem in terms of one or more subproblems. It's straightforward to implement this solution:

(defun fib-dp (n)
  (let ((F (make-array (max 2 (1+ n)))))
    (setf (aref F 0) 0)
    (setf (aref F 1) 1)
    (do ((i 2 (1+ i)))
        ((> i n) (aref F n))
      (setf (aref F i) (+ (aref F (- i 1)) (aref F (- i 2)))))))


Here the amount of space and the amount of work are both O(n). We can make a further optimization if we notice that we don't need to store all the values of F, just the last two. This gives us a solution that is O(n) in time and O(1) in space.

(defun fib-opt (n)
  (if (<= n 1)
      n
      (let ((F-last 1) (F-second-last 0) temp)
        (do ((i 2 (1+ i)))
            ((> i n) F-last)
          (setf temp F-last
                F-last (+ F-last F-second-last)
                F-second-last temp)))))


Finally, we can come full circle and reintroduce recursion. Because this is tail recursion the solution still uses constant space—at least for those Lisps (i.e. pretty much all modern Lisps) that optimize tail recursion.

(defun fib-opt2 (n)
  (if (<= n 1)
      n
      (labels ((fib (n F-last F-second-last)
                 (if (zerop n)
                     F-last
                     (fib (1- n) (+ F-last F-second-last) F-last))))
        (fib (1- n) 1 0))))


None of this is earth shattering, of course, but it's a nice illustration of how shifting your viewpoint a little can help you arrive at what SICP describes as a non-straightforward solution.

## Dragged kicking and screaming to Emacs (2)

At the end of my last post, I was happily going about my business secure in the knowledge that I was a lifetime vim user. Then I stumbled across Paul Graham's essay, Revenge of the Nerds. This essay—like many of his early ones—was about the power and superiority of Lisp. Unlike many programmers of my generation, I had remained mired first in assembly language and then C and had never learned Lisp, but Paul's essay inspired me to finally give it a try. So I picked up a copy of Paul's ANSI Common Lisp, read it cover to cover, and was hooked.

I quickly moved to Scheme and spent the next three or four years using it almost exclusively. I read Peter Seibel's Practical Common Lisp, and started spending more time with CL. Eventually, I divided my time pretty evenly between CL and Scheme.

### One Small Problem

Although I was having great fun playing with the DNA of the universe (XKCD notwithstanding), there was one small problem: it's really hard to program in Lisp effectively using vim. Yes, you can have vim running in one window and the REPL in another, save your edits, load them from the REPL, and run your tests, but that's not really a smooth work flow. Yes, some Lisps and Schemes have IDEs, but using them means you have to juggle multiple editors and that you'll probably never master any of them. And yes, I know all about the vim SLIME-like add-ons that are coming real soon now, but after a while one tires of waiting.

So once again I downloaded the latest emacs sources, compiled and installed them, ran through as much of the tutorial as I could stand, loaded Quack and SLIME and started life as a born-again Emacs user. And it was painful. Oh my it was painful. All that vi muscle memory mated with the Emacs key sequences to produce such mutant offspring as CTL-k to scroll backwards—this was especially hilarious because I would often type CTL-k several times before I realized I was deleting lines rather than moving up.

There were other, more subtle, hurdles too. Vi users tend to fire up the editor when they need to edit a file and close it immediately afterwards. Emacs users tend to leave the editor open all the time and even keep buffers around weeks after they're finished with them. So on top of retraining my muscle memory I also had to learn a new way of working.

### Epilog

I've been using Emacs full time for about a year now, and am currently running the excellent Emacs 23. All in all, it's been worth the struggle. Writing Lisp code is so much easier that that one thing alone was worth the price of admission. I really like that I can add functionality that exactly suits my needs simply by writing some Lisp. Like many Emacs users, I discovered that the .emacs file is a splendid time trap, easily soaking up any cycles that wander within its event horizon.

I even use some of those features that I made fun of as a vim user. For example, Org-mode and remember are part of my workflow now, and I don't know how I ever got along without them. I still don't play Towers of Hanoi, though.

## Dragged kicking and screaming to Emacs (1)

I've been a staunch vi user for more than a quarter of a century, and although I never enlisted in the vi/Emacs holy wars, I was firm in my belief that vi, and especially its vim incarnation, was the right thing. I liked that it was lightweight and would start in an instant (although to be fair, that was less true of vim). I considered vi's modality, the main complaint of its detractors, a feature because it meant I didn't have to take my hands off the home row. It was fast, and after all those years I was fast when I used it: all those cursor movement commands were burned into my muscle memory and I didn't have to think about them. It was almost as if vi were reading my mind.

Emacs, on the other hand, seemed bloated and clunky. Its huge memory footprint (neatly captured by the backronym Eventually Mallocs All of Core Storage) meant that it was painfully slow to start up. Its lack of modality meant that control sequences had to make extensive use of the meta keys, giving rise to another backronym: Escape-Meta-Alt-Control-Shift. Its list of features, including a Towers of Hanoi puzzle, seemed almost a caricature of marketing-driven featuritis.

Still, there were some things in Emacs that did inspire envy in the vi user. Its multiple windows meant that a user could edit related files (or different views of a single file) at the same time. Multiple buffers meant that a user could quickly switch between files without having to save the current file. Dired meant that a user could see the files in a directory and pick the right one to edit without firing up a separate shell and running ls. The built-in man command meant the user didn't have to leave the editor to check the exact calling sequence of a function.

So every few years I'd make an attempt to adopt Emacs, but I was always overcome by minor annoyances: the jump scrolling seemed jarring; cursor control was physically painful; moving forward or backwards by words left the cursor in unexpected places; even the terminology seemed wrong (for example, yank means something else to the vi user). The result was always the same. I'd come home to vi but wish it had some of those neat Emacs features.

Finally, vim came along and provided most of the missing pieces that Emacs had but vi didn't. I was now completely content with vim and fully expected to stay with it until the heat death of the universe. But then something happened that changed everything.

—to be continued

## A Surprising Solution

In Exercise 1.3 of SICP, we are asked to define a procedure that takes three numbers as input and returns the sum of the squares of the two larger numbers. We already have a procedure, sum-of-squares, that returns the sum of the squares of its two inputs, so a natural solution is something like this:

    (define sum-of-squares-max
      (lambda (a b c)
        (sum-of-squares
         (if (< a b) b a)
         (if (< (if (< a b) a b) c) c (if (< a b) a b)))))


or, perhaps, the less opaque:

    (define sum-of-squares-max
      (lambda (a b c)
        (cond ((and (<= a b) (<= a c)) (sum-of-squares b c))  ;; a smallest
              ((and (<= b a) (<= b c)) (sum-of-squares a c))  ;; b smallest
              (else (sum-of-squares a b)))))                  ;; c smallest


Joe Marshall over at Abstract Heresies offers this surprising and elegant solution:

    (define sum-of-squares-max
      (lambda (a b c)
        (if (and (<= a b) (<= a c))  ;; a smallest
            (sum-of-squares b c)
            (sum-of-squares-max b c a))))
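Marshall's solution works by rotation: if a is not the smallest argument, the procedure rotates the arguments and recurs, so after at most two recursive calls the smallest value lands in the first position and the base case fires. A short trace makes this concrete (square and sum-of-squares are defined as in the book; the trace comments are mine):

```scheme
;; Helpers from earlier SICP exercises.
(define (square x) (* x x))
(define (sum-of-squares x y) (+ (square x) (square y)))

;; Joe Marshall's solution: if a is the smallest, sum the squares of
;; the other two; otherwise rotate the arguments and try again.
(define (sum-of-squares-max a b c)
  (if (and (<= a b) (<= a c))
      (sum-of-squares b c)
      (sum-of-squares-max b c a)))

;; (sum-of-squares-max 5 1 2)
;;   => (sum-of-squares-max 1 2 5)  ; 5 is not the smallest, so rotate
;;   => (sum-of-squares 2 5)        ; now 1 is the smallest
;;   => 29
```

Since each rotation moves a different argument into the first position, the recursion is guaranteed to terminate within two calls.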

## Welcome to Irreal

This is a blog primarily about Scheme and Lisp and their world of infinite fun. As part of exploring that world, I'll also consider related subjects such as Emacs, Slime, Org-Mode and, really, anything else that excites my geek sensibilities.

The idea for Irreal came about when I stumbled upon a series of posts on Eli Bendersky's website in which he presented his solutions to the problems in Structure and Interpretation of Computer Programs (SICP). I have long considered SICP to be one of the best books ever written on the art and craft of computer programming. Because it had been some time since I'd read SICP and I'd never worked through all the problems, I decided that I would reread the book and write up solutions to the problems in a form suitable for a solution manual; I was aiming at the level of detail in the solutions to the exercises in Knuth's Art of Computer Programming. One of the main purposes of this blog, at least initially, is to record observations about the exercises, their solutions, and any other technical issues that come up along the way.