There are a number of problems with this argument, but the biggest one is that most such calls will not actually be tail-recursive. The very specific `adjoin` example that has been provided only works because `or` is assumed to short-circuit, which is what puts the final `contains` call in tail position, and because the author is apparently willing to live with a set membership test that has linear time complexity. And the `contains` method for `UnionObject` is not tail-recursive no matter how you cut it.
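To make the distinction concrete, here is a minimal Python sketch (class names are hypothetical, mirroring the article's adjoin/union objects): the recursive call in `Adjoin.contains` is in tail position only because `or` short-circuits and returns the last operand's value directly, while `Union.contains` has a left-hand call that can never be a tail call, since its result still has to come back to the `or`.

```python
class Empty:
    def contains(self, x):
        return False

class Adjoin:
    """A set extended with one element (effectively a linked-list cell)."""
    def __init__(self, item, rest):
        self.item, self.rest = item, rest

    def contains(self, x):
        # Tail call: with short-circuiting `or`, the value of
        # self.rest.contains(x) is returned directly.
        return self.item == x or self.rest.contains(x)

class Union:
    """The union of two sets."""
    def __init__(self, left, right):
        self.left, self.right = left, right

    def contains(self, x):
        # The left call is NOT a tail call: its result must come back
        # so the `or` can decide whether to try the right side.
        return self.left.contains(x) or self.right.contains(x)
```

In CPython none of these calls are optimized anyway, so a sufficiently deep `Adjoin` chain overflows the stack regardless of tail position.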
The primes example effectively constructs a linked list with 78,498 elements; any unsuccessful search will iterate over all of those elements, and successful searches will (depending on how the search arguments are distributed) also easily traverse tens of thousands of them. Handwaving the performance issues away with "we know lots of ways this code can be optimized, but let us please focus on the behavior of the constructed set" misses the point, simply because efficient, actually usable implementations may have totally different requirements with respect to tail recursion and what belongs in an interface.
Why don't languages give tail calls their own syntax rather than making them an optimization? A lot of the arguments against tail calls seem to boil down to either (1) it's confusing when things go wrong and you weren't expecting stack frames to be optimized out, or (2) it's hard to safely write tail-recursive code, because a small change to the code or, worse, to the compiler settings can make it stop being tail-recursive.
If you had to explicitly request it, say by writing `tailcall f()` instead of just `f()`, that would mostly fix these problems. Control flow would be a lot less confusing, since it would be explicit in the code (though you'd still lose information from the optimized-out stack frames; there's no way around that), and you'd get an explicit error if someone changed the call into something that can no longer be tail-call optimized, like `tailcall f() + 1`.
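One way to see what an explicit `tailcall` keyword would buy is a trampoline, sketched here in Python (all names are hypothetical; no language in this thread actually spells it this way): the callee returns a description of the tail call instead of making it, and a driver loop runs the calls in constant stack.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class TailCall:
    fn: Callable[..., Any]
    args: tuple

def tailcall(fn, *args):
    # Stand-in for a `tailcall f(...)` keyword: describe the call
    # instead of performing it.
    return TailCall(fn, args)

def run(fn, *args):
    # The trampoline: keep bouncing until a real value comes back.
    result = tailcall(fn, *args)
    while isinstance(result, TailCall):
        result = result.fn(*result.args)
    return result

def countdown(n):
    if n == 0:
        return "done"
    return tailcall(countdown, n - 1)

# A million "recursive" calls at constant Python stack depth:
print(run(countdown, 1_000_000))  # prints "done"
```

Note that `tailcall(countdown, n - 1) + 1` fails loudly at runtime (you can't add 1 to a `TailCall`), which is exactly the kind of explicit error the comment asks for.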
One would think that by now tail call optimization (TCO) would be understood to be absolutely necessary. TFA makes a very good case, but it's hardly the first or only example of someone making a convincing argument for TCO.
Even in jq we found that adding TCO enabled a number of interesting possibilities. For example, the range() builtin in jq can be implemented as a tail-recursive function (and the version that takes two or three arguments is). There are a number of builtins in jq 1.5 which are made possible and practical only by having TCO. Before we added TCO it just didn't seem that interesting, but it's proven to be quite an enabler.
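jq's actual `range()` is defined in jq itself, but the tail-recursive shape is easy to sketch in Python (illustrative only: a TCO'd runtime like jq's runs this in constant stack, while CPython grows a frame per step and so limits the depth).

```python
def range_rec(low, high, acc=()):
    # Accumulator-passing style: the recursive call is the very last
    # thing the function does, so a TCO'd runtime can reuse the frame.
    if low >= high:
        return list(acc)
    return range_rec(low + 1, high, acc + (low,))
```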
I can't think of a language that shouldn't have TCO. C/C++ absolutely should have it (and many compilers do). Java should have it (but doesn't). Lisps badly need it and generally have it -- the same goes for all functional (or mostly functional) languages. Python needs it but apparently lacks it. And so on.
Yes, tail calls obscure stack traces by eliding reused frames. This could be ameliorated by leaving a flag in reused frames that could be shown on stack traces to indicate that there are missing frames. Or perhaps syntactic sugar could be used to control which tail calls are to get optimized (since often it does not matter).
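The "flag in reused frames" idea can be approximated with a trampoline: keep a count and a small ring buffer of the elided calls, and attach them to any exception. A hypothetical Python sketch (the `TAIL` marker and function names are invented for illustration):

```python
from collections import deque

TAIL = object()  # marker meaning "this return value is a tail call request"

def run_traced(fn, args, keep=3):
    elided = deque(maxlen=keep)   # the last few reused "frames"
    count = 0
    while True:
        try:
            result = fn(*args)
        except Exception as exc:
            # Surface what the optimized-out frames looked like.
            raise RuntimeError(
                f"{count} tail frames elided; most recent: {list(elided)}"
            ) from exc
        if isinstance(result, tuple) and result[:1] == (TAIL,):
            _, fn, args = result
            count += 1
            elided.append((fn.__name__, args))
        else:
            return result

def countdown(n):
    if n == 0:
        return "done"
    return (TAIL, countdown, (n - 1,))
```

The trade-off is a couple of extra bookkeeping operations per tail call, which is why "often it does not matter" could be made a per-call-site choice.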
This is an old debate.
On the one hand with tail calls you can write recursive code and find that it runs fast without excessive memory.
On the other hand when things blow up, it is really nice to have a stack backtrace to help debug why it blew up. But a stack backtrace can't include stack frames that were optimized away. Which makes it far harder to figure out why this call on this object turned into that call on that object.
Ruby has had TCO since 1.9, but it must be enabled at runtime in a pretty ugly way. I'm also working in Elixir, which has TCO, and there it's the basis for running server processes: the server function calls itself, passing the server state as an argument. I don't see why OO languages shouldn't do the same. The argument about losing the stack trace is pretty weak: nobody wants a million stack frames to debug. If the program dies, we check the last values of the variables and that's it.
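The Elixir pattern described here, a server function tail-calling itself with the next state, looks roughly like this Python sketch (messages and names are hypothetical; in Elixir the self-call reuses the frame, whereas CPython grows the stack per message, which is why equivalent Python servers use a `while` loop instead):

```python
def serve(state, inbox):
    # Process one message, then "loop" by tail-calling with the new state.
    if not inbox:
        return state
    msg, rest = inbox[0], inbox[1:]
    if msg == "incr":
        state = state + 1
    elif msg == "reset":
        state = 0
    return serve(state, rest)  # tail call: constant stack under TCO
```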
This document is older than 2011; that's just when the current host copied it from the archive.
I'm not a fan of this style of coding. It's not just the stack space. When you call methods all over the place it can result in inefficient code. And yes, I understand that I wrote "can" in that sentence. We're talking about a case where the author wants the compiler to be even smarter than it already is. I guess that's my point. Using these kinds of abstractions results in a reliance on the compiler to make things efficient. OTOH using abstractions can make things simpler and more understandable for the programmer. But there's that word "can" again.
They don't need tail calls, unless you're planning on doing idiomatic functional programming in an OO language, and in that case: why?