This article explains why the Halting Problem is unsolvable, drawing on Alan Turing's work. Let's start with a simple Java login function as an example.

I came across a question on Zhihu that goes like this: someone wrote a piece of code to control login, and the logic of the program in Java is roughly as follows:
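The original snippet is not preserved here, so the following is a hypothetical reconstruction of that kind of code; the names `login`, `"success"`, and `"failure"` are illustrative assumptions, not the original identifiers:

```java
public class LoginCheck {
    // The two `if` branches below cover every possible value of `ok`,
    // yet javac still demands the final return: its reachability
    // analysis does not evaluate the branch conditions.
    public static String login(boolean ok) {
        if (ok) {
            return "success";
        }
        if (!ok) {
            return "failure";
        }
        return "unreachable"; // the seemingly redundant return from the question
    }
}
```

Deleting that last `return` makes the compiler reject the method with a "missing return statement" error, even though the two conditions are logically exhaustive.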

The question is: why is there a `return` statement at the end, on the last line of the code block, even though all logical branches of the program already return results, covering every possible case? Isn't this final `return` statement redundant?

This touches on a famous problem in the theory of computation: the Halting Problem. It asks: does there exist a program $P$ that, given any program $w$ and its input, can determine whether $w$ terminates within a finite time or runs forever?

This concept may seem abstract, but let's explain it with a real-life example: Suppose you're a student, and your math teacher is very strict, often assigning complex math homework.

One day, your teacher gives you an assignment with a strange requirement: you can only start working on it if you can prove that you can complete the math homework within a finite time, or prove that the math problem is unsolvable.

However, the problem is difficult, and you're unsure whether it's beyond your abilities. You're stuck trying over and over on scratch paper, with no idea when you'll find a solution: it could take a few hours, a whole night, a year, two years, or even a decade. Worse yet, you're not even sure the problem has a solution at all. Many of us know this experience from grappling with math problems in school.

Alan Turing was the first to settle this question: he proved that the Halting Problem is undecidable. In other words, you can never write a program $P$ that, given any program $w$ and its input, determines whether $w$ terminates within a finite time or enters an infinite loop. Just like in the example above, you can never take on that math assignment; you can only give up.

Next, we give a formal deduction for this problem:

Suppose there exists a function $H$ that takes two parameters, a program $p$ and an input $i$, and determines whether program $p$ terminates (i.e., halts) when run on input $i$: $H(p, i)$ returns `true` if $p$ halts on input $i$, and `false` otherwise.
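In the notation above, the assumed decider can be written compactly as:

$$H(p, i) = \begin{cases} \texttt{true}, & \text{if program } p \text{ halts on input } i,\\ \texttt{false}, & \text{otherwise.} \end{cases}$$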

Then we can construct a new function $D$, which takes a program $q$ as its parameter, defined as follows:
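The original listing is not preserved here; in Java-like pseudocode (using the hypothetical decider `H`, which of course cannot actually be implemented), the construction looks roughly like this:

```java
// Pseudocode sketch only: H does not exist, so this cannot compile.
static void D(Program q) {
    if (H(q, q)) {       // H claims q halts when run on itself...
        while (true) {}  // ...so D deliberately loops forever
    }
    // otherwise (H claims q loops forever on itself), D halts immediately
}
```

In short, $D$ does the opposite of whatever $H$ predicts about $q$ run on itself.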

Now consider the call $D(D)$, where $D$ takes itself as input. By the definition of $D$:

- If $H(D, D)$ returns `false`, then $D(D)$ will halt.
- If $H(D, D)$ returns `true`, then $D(D)$ will enter an infinite loop.

Now compare this with what $H$ itself claims:

- If $H(D, D)$ returns `true`, then $H$ asserts that $D(D)$ halts; but by the definition of $D$, $D(D)$ enters an infinite loop, so $H$'s answer is wrong.
- If $H(D, D)$ returns `false`, then $H$ asserts that $D(D)$ never halts; but by the definition of $D$, $D(D)$ halts, so again $H$'s answer is wrong.

Regardless of whether $H(D, D)$ returns `true` or `false`, the result contradicts the definition of $D(D)$.

Therefore, we arrive at a paradox: If there exists a function $H$ that can determine whether any program halts, then we can use $H$ to construct a function $D$ that leads to a contradiction. Thus, the assumption is untenable, meaning such a function $H$ does not exist, thereby proving the unsolvability of the Halting Problem.

Based on the above conclusion, we can deduce that the capacity of computers is limited, meaning there's no universal algorithm that can solve all computational problems. This is a significant conclusion in computational theory.

Although Alan Turing proved the Halting Problem unsolvable in the context of "Turing machines," the conclusion carries over to any modern computing system, and indeed to any algorithm or program.

Moreover, the Turing machine is an idealized computational model that assumes unbounded storage and unlimited running time, so no modern computer can compute anything a Turing machine cannot. This shows that computers are not omnipotent: many problems in life cannot be solved by computers, no matter how intelligent or high-performing they become.

This leads to another question: can memory leaks be fundamentally eliminated in program design? The answer is again no. On reflection, this makes sense: how can a program determine, in general, the right moment to reclaim memory? Equivalently, can we prove that an object will be released within a finite time?

In general, this problem is logically undecidable. Languages like Rust provide many mechanisms at the compilation level, but these can only eliminate the leaks caused by certain classes of human error; they cannot solve the problem in full generality.
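As a minimal hypothetical illustration (the class and method names below are invented for this sketch), consider a cache that only ever grows. Whether any entry will be needed again depends on the program's future behavior, which no general analysis can predict, so no compiler can decide when this memory could safely be freed:

```java
import java.util.ArrayList;
import java.util.List;

public class GrowingCache {
    // Entries are retained for the lifetime of the program.
    static final List<byte[]> cache = new ArrayList<>();

    static void handleRequest(int id) {
        // Each request retains 1 KiB that is never read again --
        // a leak in practice, yet perfectly legal to every analyzer.
        cache.add(new byte[1024]);
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) {
            handleRequest(i);
        }
        System.out.println("entries retained: " + cache.size());
    }
}
```

A garbage collector cannot reclaim these entries either, because they remain reachable; reachability is a conservative approximation of "will be used again," precisely because the latter is undecidable.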

Now, returning to the code mentioned at the beginning of the article: why must a `return` statement be added at the end of the function? This is a consequence of the compiler's strategy. The compiler cannot, in general, prove whether every branch of arbitrary business logic exits the function's scope — doing so perfectly would run into the halting problem discussed above. Its reachability analysis is therefore conservative: it assumes any branch may fall through, so an explicit `return` outside the branches is required to guarantee the function always returns a value.