Bruce Hoult suggests on Lobsters that, contrary to popular programming lore, recursion is easier than iteration because it’s fundamentally more declarative. People get confused about recursion when they try to think too much about what the computer is doing in a step-by-step way and try to analyse it like they would iteration. It’s far easier to think in terms of ‘a sorted list is the first half of the list in sorted order, merged with the second half of the list in sorted order, and when a list consists of only two elements, swap them if they’re in the wrong order.’1
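That declarative reading translates almost word for word into code. Here is a minimal sketch in Python (my own illustration, not Hoult’s code; I use the conventional base case of a list with at most one element rather than the two-element swap, and the non-tail-recursive style the text describes as ‘natural’):

```python
def merge(a, b):
    # A merge of two sorted lists: whichever head is smaller comes first,
    # followed by the merge of what remains.
    if not a:
        return b
    if not b:
        return a
    if a[0] <= b[0]:
        return [a[0]] + merge(a[1:], b)
    return [b[0]] + merge(a, b[1:])

def merge_sort(xs):
    # A list of zero or one elements is already sorted.
    if len(xs) <= 1:
        return xs
    mid = len(xs) // 2
    # A sorted list is the first half in sorted order,
    # merged with the second half in sorted order.
    return merge(merge_sort(xs[:mid]), merge_sort(xs[mid:]))
```

The code mirrors the sentence rather than a loop’s bookkeeping: there are no indices, counters, or in-place swaps to trace, only the definition itself.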
However, bending implementations to be tail recursive, as is generally done in Scheme, somewhat undermines this principle. Perhaps Hoult’s argument is really an argument for proper tail calls modulo cons to be more widely supported, so that more procedures can be implemented in their natural recursive style instead of being bent to a pseudo-iterative style.
I’ve changed Hoult’s example from bubble sort to merge sort, which demonstrates that recursive procedures can be efficient, not merely easy to read.