Everyone knows that debugging is twice as hard as writing a program in the first place. So if you’re as clever as you can be when you write it, how will you ever debug it?

Brian W. Kernighan

 

The other day at work I was trying to write a test using VCR, a popular Ruby tool (with ports to many other languages) that records and plays back HTTP calls to external services, so you can test your client without actually pinging the external server more than once. I ran into a problem, however. No matter what I did, VCR refused to record my test. Was it because I was testing a POST request instead of a GET? Was it because I was using a file attachment? Were there bad file permissions on the cassette directory? Was it not properly hooking into my client’s HTTP library? I spent the better part of a Friday trying to figure out why VCR wasn’t recording my request and response.

On Monday, I sat down and quickly found the problem: I had reorganized some of the code before writing the test, and it was failing before a request could even be sent because of a missing import statement (thanks to Ruby for not making that a fatal error). The bug was embarrassingly easy to fix once I started to look for it. The problem was that I spent Friday looking for the wrong bug, because I didn’t really understand how VCR works, so I couldn’t tell correct operation from incorrect operation.
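
If you haven’t used VCR, here’s a rough sketch of what a recorded test looks like, written with vcrpy (the Python port) and with a made-up endpoint and cassette path:

```python
# Rough sketch of a VCR-style test using vcrpy (the Python port of VCR).
# The endpoint and cassette path below are hypothetical.
import requests
import vcr

@vcr.use_cassette("fixtures/cassettes/upload.yaml")
def test_upload():
    # First run: the real HTTP request goes out and the request/response
    # pair is recorded to the cassette file.
    # Later runs: the recorded response is replayed; no network needed.
    response = requests.post(
        "https://api.example.com/upload",
        files={"attachment": ("hello.txt", b"hello")},
    )
    assert response.status_code == 200
```

If no request is ever sent, because the code blows up on a missing import first, there’s nothing for VCR to record, which is exactly the failure mode I spent that Friday misdiagnosing.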


At another job, we decided to try Django-polymorphic. Our use case was that we had base articles with basic features like title, body, and description, plus specialized features like lead images or tags depending on the site. Using Django-polymorphic, we could make a query against the base article class and it would return a list of article subclass instances based on each article’s type. Best of all, it did it all “magically”: no need to specify anything extra when running the query; the ORM would automatically return the correct results!
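
Here’s roughly what that setup looks like: a minimal sketch using django-polymorphic, with illustrative model names rather than our actual code.

```python
# Minimal sketch of the setup described above, using django-polymorphic.
# Model and field names are illustrative, not our real code.
from django.db import models
from polymorphic.models import PolymorphicModel

class Article(PolymorphicModel):
    title = models.CharField(max_length=255)
    description = models.TextField()
    body = models.TextField()

class PhotoArticle(Article):
    lead_image = models.ImageField(upload_to="leads/")

class TaggedArticle(Article):
    tags = models.CharField(max_length=255)

# The "magic": a query against the base class hands back instances of the
# right subclass for each row, with nothing extra at the call site.
Article.objects.all()
# e.g. [<PhotoArticle: ...>, <TaggedArticle: ...>, <Article: ...>]
```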

Within days of rolling out Django-polymorphic, a bug cropped up on our sites. We couldn’t trace the issue down, but we were convinced it must be a bug in Django-polymorphic. The magical wizard had betrayed us! Finally, in desperation, we ripped Django-polymorphic out, only to find… the problem was actually unrelated.

When we patched the problem, though, we didn’t put Django-polymorphic back. Why? Because we realized that as long as we were using it without understanding it, we would end up blaming every hard-to-understand bug on it, whether the magic inside Django-polymorphic was the real cause or not. Magic was just too unpredictable. It was better to stick with something less elegant but better understood.


Any sufficiently advanced technology is indistinguishable from magic.

Arthur C. Clarke

If you don’t know how a technology works, it seems magical to you. The flip side of Clarke’s quote is one from Joel Spolsky:

All non-trivial abstractions, to some degree, are leaky.

Abstractions fail. Sometimes a little, sometimes a lot. There’s leakage. Things go wrong. It happens all over the place when you have abstractions.

Magic is wonderful when it works, but very few abstractions can be trusted not to break down occasionally. That’s not to argue that you can’t use abstractions. A modern computer system is nothing but a wobbling tower of poorly understood abstractions. It’s just that when you aren’t aware of the abstractions your system is built on and something goes wrong, the best you can do is make like Lucy Lawless and say:

[YouTube embed]

A wizard did it.

 

Using “a wizard did it” as a plot contrivance is considered a bad way of writing fiction because the audience never knows what the stakes are. If the heroes get into trouble, could a wizard just randomly save them? Or if they are about to win, could a wizard just randomly defeat them? The existence of wizards removes all narrative tension because the universe has no certainties.

Krupal Shah wrote a good summation of my thoughts on abstraction:

I don’t think that libraries and frameworks should provide the abstraction in such way that hides the details of the underlying system. Rather, they should provide the abstraction in a way that people can actually understand the underlying system. Too much abstraction becomes a barrier for most people, especially beginners to actually understand what the system does. It makes them think more about doing things rather than going deep and understanding things. That’s not a good thing. We want people to actually understand the details of stuff under the hood, not to learn the ways of using things.

People are willing to put up with magic in TV and movies as long as there seem to be rules that govern what the wizards can or cannot do. We’re in the middle of a genre boom for world-building fiction (which probably goes too far as it tries to fulfill some sort of psychic need for order in early twenty-first-century America), in part because people like learning about a system that can be fully understood and mastered.


Julia Evans just gave a talk at Strange Loop where she argues that instead of sitting around and waiting for a wizard to do it, you can be the wizard!

I used to think to debug things, I had to be really really smart. I thought I had to stare at the code, and think really hard, and magically intuit what the bug was.

It turns out that this isn’t true at all! If you have the right tools, fixing bugs can be really easy. Sometimes with just a little more information, you can figure out what’s happening without being a genius at all.

By mastering debugging tools that let you go deeper into your stack, you can reverse engineer the magic behind whatever is happening in your program. You can turn magic back into sufficiently advanced technology. You can follow the leaky abstractions back to the source pipe.
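
Here’s one small, concrete example of what “a little more information” can look like, a sketch rather than anything specific from Julia’s talk: turn on wire-level HTTP logging in Python and you can see whether a request is actually leaving your program at all, which would have exposed my VCR non-bug in minutes.

```python
# Sketch: peek one layer below the abstraction by turning on wire-level
# HTTP logging. With this on, request and response headers get printed,
# so "no request was ever sent" becomes immediately obvious.
import http.client
import logging

import requests

http.client.HTTPConnection.debuglevel = 1             # print raw send/reply lines
logging.basicConfig(level=logging.DEBUG)               # surface library debug logs
logging.getLogger("urllib3").setLevel(logging.DEBUG)   # connection-level detail

requests.get("https://example.com")  # the actual traffic scrolls past in the console
```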

Steve Jobs said,

Everything around you that you call life was made up by people that were no smarter than you and you can change it, you can influence it, you can build your own things that other people can use.

That’s not really true, but in the world of software, it’s close enough to true. One of the real turning points in programming is when you look at some framework or magical tool and say, “I could have made that too. I just didn’t have the time or opportunity.” Once you realize that you could have made any one of the tools you use (but you didn’t because no one has that kind of time), you won’t be afraid anymore to look behind the curtain and find out who the wizard really is:

[YouTube embed]