On 10/19/24 11:20 PM, olcott wrote:
On 10/19/2024 9:27 PM, Richard Damon wrote:
On 10/19/24 8:13 PM, olcott wrote:
You are directly contradicting the verified fact that DDD
emulated by HHH according to the semantics of the x86 language
cannot possibly reach its own "return" instruction and halt.
But that isn't the question being asked.
Sure it is. You are just in psychological denial as proven by
the fact that all attempted rebuttals (yours and anyone else's)
to the following words have been baseless.
Does the input DDD to HHH specify a halting computation?
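(Where DDD is, reconstructed here as it has been posted throughout these threads; HHH is the simulating termination analyzer whose body is not shown:)

typedef void (*ptr)();
int HHH(ptr P);   // olcott's simulating termination analyzer (body not shown)

void DDD()
{
  HHH(DDD);       // DDD calls the very function that is deciding it
  return;         // the "return" instruction in dispute
}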
Which it isn't; it is a subtle change of the actual question.
The actual question (somewhat informally stated, but from the source you like to use) says:
In computability theory, the halting problem is the problem of
determining, from a description of an arbitrary computer program and an input, whether the program will finish running, or continue to run forever.
So, DDD is the COMPUTER PROGRAM to be decided on,
and is converted to a
DESCRIPTION of that program to be the input to the decider, and THAT is
the input.
So, the question has ALWAYS been about the behavior of the program (an OBJECTIVE standard, meaning the same to every decider the question is
posed to).
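To make the program/description distinction concrete, here is a minimal sketch (the "example" function is a hypothetical stand-in for an arbitrary program, not DDD; the function-pointer-to-void* cast is a common extension rather than strict ISO C):

#include <stdio.h>

/* A trivially halting program, standing in for "an arbitrary
   computer program" from the quoted definition. */
void example(void)
{
    return;
}

int main(void)
{
    /* The PROGRAM is the code that runs; the DESCRIPTION handed to a
       decider is data identifying that code, e.g. its entry address.
       Whether the program halts is settled by running the code,
       before any decider ever looks at the description. */
    void (*program)(void) = example;                  /* the program to run */
    const void *description = (const void *)example;  /* a decider's input  */

    program();                                        /* run it: it finishes */
    printf("description %p describes a program that halts.\n", description);
    return 0;
}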
(where a halting computation is defined as)
DDD emulated by HHH according to the semantics of the x86
language reaches its own "return" instruction final state.
Except that isn't the definition of halting, as you have been told many times, but apparently you cannot understand.
Halting is a property of the PROGRAM. It is the property, as described
in the question, of whether the program will reach a final state if it
is run, or will never reach such a final state.
DDD emulated by HHH is a stand-in for that only if HHH never aborts its
emulation. But since your HHH, in order to answer at all, must abort its
emulation, your criterion is just a bunch of meaningless gobbledygook.
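To spell that out with a minimal runnable sketch (the real HHH emulates the x86 code of its input and then aborts; here its final verdict, 0 for "non-halting", is simply hard-coded so the control flow can actually execute):

#include <stdio.h>

void DDD(void);

/* Hypothetical stand-in for HHH: the real one begins emulating its
   input, judges the emulation would never finish, aborts it, and
   returns 0 ("non-halting"); only that final verdict is modeled here. */
int HHH(void (*P)(void))
{
    (void)P;
    return 0;
}

void DDD(void)
{
    HHH(DDD);
    return;      /* reached whenever HHH(DDD) itself returns */
}

int main(void)
{
    printf("HHH(DDD) reports %d (0 == non-halting)\n", HHH(DDD));
    DDD();       /* yet the PROGRAM DDD, run directly, finishes... */
    printf("...DDD() just returned, so DDD halts when run.\n");
    return 0;
}

Because the HHH that answers aborts its emulation and returns, the directly executed DDD reaches its "return" instruction and halts; the aborted partial emulation is not the behavior the question asks about.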
It seems that a major part of the problem is that you CHOSE to be
ignorant of the rules of the system, and instead learned it by what you
call "First Principles" (a term you don't understand), apparently trying
to derive the core principles of the system on your own. This is really
a ZERO Principles analysis, and it doesn't give you the information you
actually need.
A "First Principles" approach that you refer to STARTS with an study and understanding of the actual basic principles of the system. That would
be things like the basic definitions of things like "Program", "Halting" "Deciding", "Turing Machine", and then from those concepts, sees what
can be done, without trying to rely on the ideas that others have used,
but see if they went down a wrong track, and the was a different path in
the same system.
It seems you never even learned the First Principles of Logic Systems,
because you don't understand that Formal Systems are built from their
definitions, and those definitions cannot be changed while you stay
in the same system.