Image created with Midjourney. Image prompt: a 2D illustration of an individual trapped in a maze of tangled lines representing complex code, holding a magnifying glass. On the other side of the maze is a clean, straight path representing simple code. The individual looks longingly at the straight path.
In the realm of software development, complexity can be a formidable adversary. While writing intricate, clever code may feel like a testament to one's skill, it can become a bane when the time comes to maintain and debug that code. This is where Kernighan's Law comes into play:
<aside> 💬 Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it.
Brian Kernighan
</aside>
Kernighan's Law is named for Brian Kernighan and derives from a quote in Kernighan and Plauger's book The Elements of Programming Style:
<aside> 💬 Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it?
</aside>
While hyperbolic, Kernighan's Law argues that simple code should be preferred over complex code, because debugging issues that arise in complex code may be costly or even infeasible. Let's delve into this concept further by looking at three examples.
Consider a software developer who decides to create a sorting algorithm from scratch for a simple task, such as sorting a list of names. They could easily use a built-in sorting function provided by most programming languages, but in an attempt to display their prowess, they construct an intricate and unique sorting mechanism. Later on, when a bug arises within this system, they find themselves in a quagmire, unable to comprehend the origins of the issue. Had they opted for the built-in function, they would have saved not only time but also the effort spent in debugging.
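To make the contrast concrete, here is a minimal sketch in Python. The hand-rolled `clever_sort` below is hypothetical, invented purely for illustration; the point is how many subtle failure modes (pivot choice, pointer moves, recursion bounds) it carries compared to the one-line built-in.

```python
def clever_sort(names, lo=0, hi=None):
    """A 'clever' hand-rolled quicksort with in-place Hoare partitioning.
    Every line below is a potential debugging session: the pivot choice,
    the pointer moves, and the recursion bounds all have to be exactly right."""
    if hi is None:
        hi = len(names) - 1
    if lo >= hi:
        return names
    pivot = names[(lo + hi) // 2]
    i, j = lo, hi
    while i <= j:
        while names[i] < pivot:   # scan right for an element >= pivot
            i += 1
        while names[j] > pivot:   # scan left for an element <= pivot
            j -= 1
        if i <= j:
            names[i], names[j] = names[j], names[i]
            i += 1
            j -= 1
    clever_sort(names, lo, j)     # recurse on the two partitions
    clever_sort(names, i, hi)
    return names

# The simple alternative: one line, battle-tested by the language itself.
names = ["Ritchie", "Kernighan", "Thompson", "Plauger"]
print(sorted(names))              # ['Kernighan', 'Plauger', 'Ritchie', 'Thompson']
```

Both produce the same result today, but only one of them will page you at 3 a.m.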
In the quest for efficiency, a software engineer decides to optimize a piece of code that runs a critical operation. They successfully reduce the time complexity from O(n^2) to O(n log n), making the operation faster. However, the optimization introduces a layer of complexity that makes the code harder to understand and, in turn, more challenging to debug. When an unexpected issue surfaces, it takes twice as long to identify the cause as it would have with the original, simpler version of the code.
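As a sketch of this trade-off (not the engineer's actual code), consider checking whether any two numbers in a list sum to a target. The O(n^2) version is obvious at a glance; the O(n log n) version is faster, but its correctness rests on subtler invariants.

```python
def has_pair_simple(values, target):
    """O(n^2): the obvious nested loop. Slow, but trivial to reason about."""
    for i in range(len(values)):
        for j in range(i + 1, len(values)):
            if values[i] + values[j] == target:
                return True
    return False

def has_pair_fast(values, target):
    """O(n log n): sort, then walk two pointers inward. Faster, but now
    correctness depends on the sort, the pointer moves, and the stop
    condition all being right at once."""
    ordered = sorted(values)
    lo, hi = 0, len(ordered) - 1
    while lo < hi:
        total = ordered[lo] + ordered[hi]
        if total == target:
            return True
        if total < target:
            lo += 1   # sum too small: raise the low end
        else:
            hi -= 1   # sum too large: lower the high end
    return False

assert has_pair_simple([3, 8, 1, 5], 9) == has_pair_fast([3, 8, 1, 5], 9)
```

A sensible middle ground is to keep the simple version around as a test oracle: if the fast version ever disagrees with it, you have found your bug.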
In an effort to improve performance, a team of developers decides to make their software application multithreaded. While this approach indeed speeds up the application, it introduces new bugs related to thread synchronization and deadlock, which are notoriously difficult to reproduce and fix. The cleverness employed in designing the multithreaded application becomes a roadblock in resolving these bugs.
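Here is a minimal Python sketch of the kind of hazard involved, assuming nothing beyond the standard threading module: two threads acquire the same pair of locks in opposite order. Most runs finish without incident, but under unlucky scheduling both threads block forever, which is exactly why such bugs are so hard to reproduce.

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()

def worker_one():
    with lock_a:          # thread 1 takes A, then B
        with lock_b:
            pass          # ... critical section ...

def worker_two():
    with lock_b:          # thread 2 takes B, then A -- the latent bug
        with lock_a:
            pass          # ... critical section ...

t1 = threading.Thread(target=worker_one)
t2 = threading.Thread(target=worker_two)
t1.start()
t2.start()
t1.join()                 # usually returns promptly; occasionally, if each
t2.join()                 # thread grabs its first lock before the other
                          # releases, both wait forever: a deadlock
```

The standard remedy is boring rather than clever: pick one global lock ordering and have every thread follow it.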