Matt's Mind

Saturday, March 26, 2011

Lisp take 2

(This post has the dubious honour of being my first in 4 years. I had all sorts of grandiose ideas about moving this blog from Blogger to something slightly more hip like Posterous or my own hosting, but that would really have been such a waste of time. In the meantime, the planned move was a good excuse not to write anything. Further excuses like getting married and the birth of our daughter then presented themselves. What decided me to write again was re-reading some old articles, discovering things I didn't even remember writing as well as some interesting comments - I realized I missed blogging.

Just for added nerd points, this article was written on an iPad. Using the touch keyboard. Yes, I also, apparently, did not get the memo that the iPad is only for consumption of content [link is to John Gruber's teardown of that strange meme].)

I recently re-read an article I wrote in 2007 about not being able to think in Lisp and worrying about what that meant for my development, or otherwise, as a programmer. Since then, I've come to resemble even more a developer going through the equivalent of a midlife crisis, and have tried quite a few other languages, including Haskell, Scala, Groovy and Brainfuck (just joking about that one but, frankly, I'm not convinced J or Forth, which I also had a look at, are that much different :) ).

Recently I've started a new commercial project using Clojure for the server backend and Objective-C on iOS for the (iPhone) client. And I'd like to report that I actually believe I can now, finally, think in Clojure, i.e. Lisp. Yay. It surprises and elates me no end, as it turns out.

What's more, writing significant software in Objective-C has given me an excellent basis for comparing productivity with Lisp and, I have to say, the Lisp code is so much more information-dense* that it's a little scary. Partly because I worry that means it's a write-only language from the point of view of anyone else. And partly because I may not want to go back to OO Java land again.

That's not to say I don't like Obj-C, I do, and I really like the Cocoa/UIKit framework on iOS too: it's quite clearly the result of software API craftsmen working at the highest level. But things like the .h/.m file separation and, particularly, the lack of literal syntax for lists and maps are constant irritants. The latter so much so that I've started writing complex data structures in JSON and using a library to translate them into NSDictionarys/NSArrays at runtime, because it's just too verbose otherwise. And tracking retain/release bugs can be a huge black hole for productive time.

Did learning Lisp make me a Paul Graham-certified programming god? I rather suspect not, but now I know what I was missing out on. And it was most certainly worth it in terms of mind-expansion and generally feeling more secure as a programmer nearing his 40s.

* The server, which implements a fairly scary data replication database, is less than 3,000 lines of Lisp. Whereas I suspect at least one of the Obj-C classes is probably that big by now :/


Saturday, August 11, 2007

A close encounter of the spiny kind

Yesterday I had an encounter with a very rare creature, an encounter which I suspect will be hard to top. It was effectively right in my backyard, for one of the advantages of living where I am in Adelaide is that getting out of the city into the (semi) wild is only a 20-minute drive up into the Adelaide Hills.

While the hills are certainly not an unspoilt wilderness, and have been changed quite a bit by introduced species and marauding ape-descendants, large areas are protected as National Park, and some of the more tolerant Australian wildlife still calls it home. This includes increasingly-ridiculous numbers of Koalas in some places: a group of us walkers once counted 21 Koalas in a 1.5-hour walk.

Koalas are common as muck, and it's only interesting if you see one actually moving, which they seem to do about three times a day [1]. Kangaroos are more rare, but there are at least two groups that live in the areas I run and walk in, so it's interesting, but not a major event, to see them on the occasions that they are out, usually in the evening in open grassy areas.

But a real rarity and treat is seeing an Echidna [2], small spiny little buggers that are right up there with Platypuses in the weird stakes. Classifying them gave taxonomists some trouble, since they're basically mammals, except they lay eggs. They are small, secretive and roam widely, characteristics guaranteed to make sure you won't see one often.

I was out running starting from a place called Chamber's Gully up to a ridge, which is imaginatively named Long Ridge, cos it's quite long, you see? I went up the nice easy tourist trail to the ridge and, when I got to the top, with some spectacular 360° views, I found that the sun was setting and decided I needed to cut the run short.

So I watched the sun go down over the ocean and then headed down what is basically the opposite of the nice track I came up: a part-time stream masquerading as a track, cutting vertically down the side of the hill. But it has the virtue of halving the time needed to get back to the bottom. [3]

I was on my way down this track in twilight, with an evening breeze from the plains pushing back on me, when I saw, shuffling up towards me on the same track, an Echidna. I stopped dead immediately; the wind direction had obviously worked in my favour since it showed no sign it had seen me and kept right on shuffling. So I squatted down and watched it come towards me. And keep on coming, until it nearly bumped into a running shoe, which it carefully sniffed with a pointy snout. It then sniffed the other shoe, looked around myopically with little black beady eyes, and then, with me trying not to laugh out loud, it started to work its way between my feet and got stuck with me squatting on it.

Now this is not a situation I had anticipated being in that evening and, if there had been any reasonable prospect of someone coming down the trail and seeing me squatting on a protected native species, I would have been inclined to get out of there immediately. But there wasn't anyone else, and the Echidna seemed to have decided that this wasn't a bad place for a break, even if it did smell slightly like ape-descendant. So I felt I could take my time working out the best way to deal with this.

It was when the spines started to make themselves felt that I started imagining what a suddenly-startled Echidna, wedged in what must be admitted is a fairly sensitive area, might do to my chances of producing any of my own personal ape-descendants. So, I quickly devised and executed a cunning plan, which involved suddenly leaping up and forwards, and thus simply dealing with a steep rocky path and gravity, rather than an angry monotreme wedged in my crotch.

Amazingly, the plan worked and this turn of events didn't seem to faze the little bugger. What it had previously judged as a warm, dark hole to rest in for a bit had just leaped out of the way, but no worries, these things happen. It shuffled off in silence, while I continued to try not to wee myself laughing at the utter ridiculousness of what had just happened.

  1. I once saw one jump between two trees and miss, which may also be hard to top.

  2. See wikipedia entry. This article is also worth a read.

  3. See long ridge between Greenhill and Waterfall Gully Road on this map. If you zoom in on the head of the ridge, you can see the vertical track heading off in the direction of 3 o'clock down to Chamber's Gully.

Sunday, May 13, 2007

Followup to "A Quick Restorative"

Just a quick followup to the previous entry. Since I made those comments about resource forks not being backed up I've had reason to rethink. The reason I thought they were not being backed up was that MS Office 2004 stopped working with an error about a broken Carbon shared library. I naively assumed this was a resource fork issue, but it turned out that erasing and re-installing Office didn't fix the problem. Plus, other native Carbon apps stopped working with the same error. Oddly, Eclipse and other SWT apps, which use the Carbon API, were OK.

So, I had to bite the bullet and do what the Apple technical docs say, and do a system reinstall to fix Carbon. I was so not happy about doing this — in Windows this is just a world of pain, plus I was just about to go overseas on holiday and rebuilding the system just before a long trip sounded like an insane thing to do. But the "archive and reinstall" option that OS X has sounded straightforward, and I could always restore from backup, so off I went.

Despite my trepidation, it went unbelievably well. I took the opportunity to not install all languages and printer drivers this time, which gave me back a few GB of disk as a bonus. Re-installing OS updates then cost a couple of hundred MB of download, but I had a fixed, working system, including Carbon apps, in about 2 hours. And everything else about the system, except the development tools and my UI customisation haxies, migrated across.

Tuesday, March 13, 2007

A Quick Restorative

A while ago I wrote a short HOWTO that describes how to back up a Mac OS X system to another drive using rsync. This was not for the pure academic enjoyment of it, it's what I use to back up my sidekick PowerBook, and I recently got to see how well it worked when The Worst Happens.

Actually, the worst would probably be the laptop being stolen. What actually happened is, without warning, the hard drive lapsed into brick mode. In the short interval between being the heart and soul of the PowerBook and becoming an interestingly-shaped piece of metal, it started making strange noises. Being a professional computer geek I knew what this meant and wasn't even that surprised—laptop hard drives are notoriously fail-prone and mine had been spun up and down several times a day for 2 1/2 years plus the usual knocks laptops get when travelling. And there's that warm glow called Extended Warranty.

What did surprise me was that even while I could hear the HDD dying, the SMART status readout in Disk Utility confidently continued to read "Verified". Some recent data from Google (PDF) and the blogs that popped up in response seem to indicate SMART just isn't very reliable for detecting when your drive is about to fail—here's another data point to back that finding up.

Anyway, when the drive started coughing, I tried to get some recent data off it, which just accelerated its demise. So I booted off the external backup drive and had my system back. I was just about to start feeling smug when I realised I hadn't backed up for nearly six weeks. Irritating, but I hadn't lost much and still had a working system to use in the meantime. A bootable backup is a must-have IMHO: you can limp along on it in the time between the failure and the machine being fixed. Commercial systems that create non-bootable backups are really hobbled in this regard.

A week in the AppleCare hospital later and it had a new HDD. Reverse rsyncing from backup took about 2 hours and, after some messing around with the "Startup Disk" preference pane (the "bless" command didn't seem to stick for some reason), the machine booted and I was back to December 27, 2006 in PowerBookLand.

So the backup and restore system worked. I really can't even begin to estimate how long it would have taken to reinstall Mac OS X, download 100's of MB's of updates and applications, reinstall Office, XCode, Eclipse, etc, tweak my settings, restore music, contacts, calendar, photos (assuming I had those), ...

The more I think about how many things most people don't back up, things that cost them so much time and energy to set up, the more astounding I find it. Of course, there were three things in my favour:

  1. I knew how much a disk failure and system rebuild would cost, having experienced this before,

  2. I had the technical knowledge to make a real backup rather than copy random directories every now and then, and

  3. I was using an OS that makes it easy to clone an entire system.

Because Mac OS X doesn't assume you're a pirate and require an activation scheme to prove you're not one (such as that used by some other companies which I won't mention but which sound like "Microsoft") it makes it possible to clone the system without the OS working against you. This is something I really appreciate about Mac OS X.

One thing that didn't work was resource forks. None of these got restored because, as it turned out, none were being backed up. I just didn't check—when the system booted off the backup and everything appeared to work, I naively assumed resource fork backup must be working. However, the reason the restored system worked is that almost no software seems to rely on them anymore.

This was irritating, since part of the point of the HOWTO was to add info on how to preserve these. But it certainly didn't have a large effect. What tipped me off was that none of the Classic apps in my old "Applications (Mac OS 9)" folder, which need the System 9 emulator, work any more. No biggie, the System 9 apps were interesting for historical reasons, but I came into Mac land for Mac OS X.

More annoying, none of the Microsoft Office 2004 applications run. They're not even recognised as applications any more. Neither are Carbon ports of OS 9 applications such as Graphic Converter. Still, this should be fixable, and once I've fixed them I'll start investigating why resource forks aren't making it through rsync and update the HOWTO.

Monday, March 05, 2007


I just saw this ad on an IT shopping site:

Holy crap. My jaw dropped: AU$1,600 for Vista* plus Office. You can buy an entire PC for less than that. Hell you can buy a Mac mini plus a screen for less than that. What is the world coming to when Apple is the value-for-money option?!

I don't understand why MS even bothers with the retail boxed version of Vista/Office when it's about as appealing as a shite sandwich.

* An XP -> Vista upgrade is a far more reasonable** $380

** For a given value of "reasonable".

Saturday, December 16, 2006

Error handling and elegance

I was analing [1] some code today, and seriously reduced the size of some knotty configuration handling logic, which made me happy [2].

Here's the core Java logic as a snippet:

try {
  config.setAll (propertiesFrom (fileStream (stringArg (args, ++i))));

  System.exit (0);
} catch (Exception ex) {
  System.err.println ("Error starting app: " + ex.getMessage ());

  System.exit (1);
}

The code just assigns all configuration properties read from a file stream, where the file name is read from the next command line option. I suppose some might argue that it's too highly nested, but I'd say that if you read it from left to right you get close to reading out its meaning: "set all properties from file stream ...".

All the actual logic in the snippet is contained in functions, but the sheer number of failure modes in this code is huge. The options file could be missing, unreadable, unparseable. There could be unknown options, options in the wrong format, options out of range: setAll () has about a dozen option-related failures alone. The command line parameter could be missing. A malformed UTF-8 character could appear in the bowels of the config file. Anything.

This is not even considering all the usual runtime errors such as out of memory exceptions. And yet the code above handles all of these without distraction from its core purpose, which is after all what it actually does 99.99% of its life. And generates good error messages, if you'll take my word for it.

The reason the code is so simple is that all functions either return a valid result, or throw an exception indicating why they couldn't [3]. No function ever returns a value that could cause an error later on simply by being invalid.
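
To make the convention concrete, here's a sketch of what the helpers might look like. Only the names (stringArg, fileStream, propertiesFrom) come from the snippet above; the bodies are my own illustrative guesses, not the original code. Each one either produces a usable value or throws with a message explaining why it couldn't:

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

class ConfigHelpers {
  /* Return the argument at index i, or throw with a useful message. */
  static String stringArg (String [] args, int i) {
    if (i >= args.length)
      throw new IllegalArgumentException ("Missing command line argument");

    return args [i];
  }

  /* Open the named file for reading, or throw: FileNotFoundException
     already carries a decent message including the file name. */
  static InputStream fileStream (String fileName) throws IOException {
    return new FileInputStream (fileName);
  }

  /* Parse properties from a stream, or throw on unreadable or
     unparseable input. Never returns a half-initialised result. */
  static Properties propertiesFrom (InputStream in) throws IOException {
    Properties properties = new Properties ();
    properties.load (in);

    return properties;
  }
}
```

The payoff is that the caller never needs to test return values: an invalid or half-initialised result simply cannot escape into the rest of the program.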

Now the reason I was pondering this is that Joel Spolsky, someone who I have a great deal of respect for, doesn't like this exception-oriented approach at all. The two main reasons he cites are:

1. They are invisible in the source code.
2. They create too many possible exit points for a function.

These, he argues, are bad things. Which puzzles me because, to me, these are good things: I have to assume *any* line could cause an error, so any line of code can except out.

This argument has been done to death, by the way, not least of all on Joel's own forum. But I'm going to wade in anyway because I don't have anything better to do, and no one reads this blog anyway.

C is a language that follows Joel's prescription for error handling. It doesn't have exceptions, and instead usually uses the function return code to indicate success/failure (and the global errno variable, which is the work of the devil ... don't get me started). You can alternatively pass a pointer to an error info variable which gets filled in if there's an error, which I've seen done on occasion. As Spolsky puts it, best practice in C is to "[...] have your functions return error values when things go wrong, and to deal with these explicitly, no matter how verbose it might be."

Both ways require error checks spattered throughout the code. Which, and here is where I wade into the debate swinging wildly, leads to ugly code that doesn't compose nicely - for instance you can't pass one function call as a parameter to another. However, in C this sort of chaining is often confounded anyway by the fact that allocated memory returned by a function must be deallocated.

Do we really like longer, more complex code, where the flow is broken by all the error checking? And this repetition leads programmers to be lazy: "why bother checking the result of this malloc ()? It's a tiny amount of memory, it'll never fail" ... SEGV.

Is this just my opinion? Quite possibly, but I decided to do my best to code the important logic above in C to see what it would be like as an alternative. The C version tries to keep the important properties of the Java code above, including trying to provide useful error messages (something that many C programs utterly fail at, leading to hilarious "Error: there was an error"-style messages).

Here's my best attempt:

/* better hope message doesn't overflow this */
char message [255];
char *error = NULL;
properties_t *properties = NULL;
char *file_name;
FILE *file_stream = NULL;

if (!(file_name = string_arg (args, i)))
{
  sprintf (message, "Missing argument: %s", args [i]);

  goto error_handler;
}

if (!(file_stream = fopen (file_name, "r")))
{
  sprintf (message, "Failed to open file %s: %s",
           file_name, strerror (errno));

  goto error_handler;
}

if (!(properties = properties_from (file_stream, &error)))
{
  sprintf (message, "Error reading properties file: %s", error);

  free (error);

  goto error_handler;
}

if ((error = config->set_all (properties)))
{
  sprintf (message, "Error setting properties: %s", error);

  free (error);

  goto error_handler;
}

/* done! */
exit (0);

error_handler:

fprintf (stderr, "Error starting app: %s\n", message);

if (properties)
  free (properties);

exit (1);

I should note that I cut my programming teeth on C, and got to what probably was a pretty reasonable level. But nowadays that may not be the case and the above might be considered bad style. If so, I'd be really interested to know how it could be improved.


1. A new verb I've invented for obsessively tweaking perfectly good code.

2. And simultaneously sad in the eyes of most people reading this. Irony.

3. The fact that GC handles cleanup for us is also a big reason why this code is so simple.

Monday, November 20, 2006

Another reason why I like static typing

I just finished porting a fledgling project of mine from one version of a library API to a new one. The decision to convert was a no-brainer since the designers of the library had improved the API quite a bit over the previous one, and of course I wanted to be able to follow future updates.

But the authors hadn't actually bothered to describe the changes or anything, so I just basically dropped the new JAR over the old one and looked at the red error markers that turned up scattered over the code files in Eclipse. This is the key place where static typing made life much easier: I had a clear map of what needed to be updated and how much work it looked like it was going to be.

Things ranging from the obvious, like missing classes due to renaming, to more abstract errors such as unimplemented interface methods and exceptions that were no longer thrown were flagged. What was left was actual semantic changes and, after grokking some of their updated example code, a few search and replaces later: done.
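
As a toy illustration of the kind of change the compiler flags (the library, class, and method names here are hypothetical, not the actual API I was porting):

```java
// Hypothetical v2 of a small messaging API. In "v1", the method below
// was called sendMessage () and MessageListener had no onError ()
// method. After dropping in the v2 JAR, the compiler flags every old
// call site and every listener implementation missing the new method.
interface MessageListener {
  void onMessage (String text);
  void onError (Exception e);    // new in v2: unimplemented-method errors
}

class Channel {
  private final MessageListener listener;

  Channel (MessageListener listener) { this.listener = listener; }

  void post (String text) { listener.onMessage (text); }  // was sendMessage ()
}

public class PortingDemo {
  public static void main (String [] args) {
    final StringBuilder received = new StringBuilder ();

    Channel channel = new Channel (new MessageListener () {
      public void onMessage (String text) { received.append (text); }
      public void onError (Exception e) { /* new obligation in v2 */ }
    });

    channel.post ("hello");      // was: channel.sendMessage ("hello")
    System.out.println (received);
  }
}
```

In a dynamically-typed language, none of these mismatches would surface until the offending line actually ran.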

All in all, surprisingly painless considering I was going in blind. With a dynamically-typed app, I suspect it would have been considerably less scientific. I suppose I'd start by running tests and building a list of missing method errors (assuming I have 100% test coverage).

But even when fixing these and having the tests pass, I'd still be paranoid: what about "accidental" passes? Things like changed parameter lists, where extra parameters are passed in but no longer used. And what happens when you use a library that uses dynamic tricks, like handling missing methods in some clever way, so what you're calling is no longer necessarily doing what you think it is?

Don't get me wrong, I think things like this are very cool and would probably use them myself. But I worry that there is a price, which is to make the code harder to understand and evolve.