• Re: I have always been correct about emulating termination analyzers--- PROOF

    From olcott@polcott333@gmail.com to comp.theory,comp.lang.c on Sun Oct 20 10:32:45 2024
    From Newsgroup: comp.lang.c

    On 10/20/2024 6:46 AM, Richard Damon wrote:
    On 10/19/24 11:20 PM, olcott wrote:
    On 10/19/2024 9:27 PM, Richard Damon wrote:
    On 10/19/24 8:13 PM, olcott wrote:

    You are directly contradicting the verified fact that DDD
    emulated by HHH according to the semantics of the x86 language
    cannot possibly reach its own "return" instruction and halt.


    But that isn't the question being asked

    Sure it is. You are just in psychological denial as proven by
    the fact that all attempted rebuttals (yours and anyone else's)
    to the following words have been baseless.

    Does the input DDD to HHH specify a halting computation?

    Which it isn't; it is a subtle change to the actual question.

    The actual question (somewhat informally stated, but from the source you like to use) says:

    In computability theory, the halting problem is the problem of
    determining, from a description of an arbitrary computer program and an input, whether the program will finish running, or continue to run forever.


    That is the problem. Because it is too informally stated
    it can be misunderstood. No one ever intended for any
    termination analyzer to ever report on anything besides
    the behavior that its input actually specifies.

    So, DDD is the COMPUTER PROGRAM to be decided on,

    No not at all. When DDD is directly executed it specifies a
    different sequence of configurations than when DDD is emulated
    by HHH according to the semantics of the x86 language.

    On 10/14/2022 7:44 PM, Ben Bacarisse wrote:
    ... PO really /has/ an H (it's trivial to do for this one case)
    that correctly determines that P(P) *would* never stop running
    *unless* aborted.
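    For concreteness, the call pattern being argued about can be sketched in C. This is only a toy stand-in (the names DDD and HHH are from the thread, but the real HHH is claimed to be an x86-level emulator; here "simulation" is a direct call and the abort is a longjmp):

```c
#include <setjmp.h>
#include <stddef.h>

typedef void (*Prog)(void);

static jmp_buf abort_env;
static Prog being_simulated = NULL;

/* Toy "simulating termination analyzer": returns 1 if p() runs to its
   "return", or 0 if the simulation had to be aborted because p called
   HHH(p) again from inside its own simulation. */
int HHH(Prog p) {
    if (being_simulated == p)
        longjmp(abort_env, 1);   /* recursive simulation detected: abort */
    being_simulated = p;
    if (setjmp(abort_env)) {
        being_simulated = NULL;
        return 0;                /* simulation aborted: non-halting verdict */
    }
    p();                         /* "simulate" by direct call in this sketch */
    being_simulated = NULL;
    return 1;                    /* p reached its "return": halting verdict */
}

/* The disputed input from the thread. */
void DDD(void) {
    HHH(DDD);
    return;
}
```

    In this model HHH(DDD) returns 0 (it must abort its simulation), while DDD() called directly does return, which is exactly the point of the quote above: DDD stops running only because HHH aborts.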

    That everyone has always believed that a termination analyzer
    must report on behavior that it does not see is the same sort
    of mistake as believing that a set can be a member of itself.
    Eliminate the false assumption and the issue is resolved.

    and is converted to a
    DESCRIPTION of that program to be the input to the decider, and THAT is
    the input.

    So, the question has ALWAYS been about the behavior of the program (an OBJECTIVE standard, meaning the same to every decider the question is
    posed to).


    Then it is the same error as a set defined as a member of itself.
    The ZFC resolution to Russell's Paradox sets the precedent that
    discarding false assumptions can be a path to a solution.

    (where a halting computation is defined as)

    DDD emulated by HHH according to the semantics of the x86
    language reaches its own "return" instruction final state.

    Except that isn't the definition of halting, as you have been told many times, but apparently you can't understand.


    Sure and if everyone stuck with the "we have always done it that way
    therefore you can't change it" ZFC would have been rejected out-of-hand
    and Russell's Paradox would remain unresolved.

    Halting is a property of the PROGRAM. It is the property, as described
    in the question, of will the program reach a final state if it is run,
    or will it never reach such a final state.


    Much more generically at the philosophical foundations of logic
    level all logic systems merely apply finite string transformation
    rules to finite strings. Formal mathematical systems apply truth
    preserving operations to finite strings having the Boolean value
    of true.

    DDD emulated by HHH is a stand-in for that only if HHH never aborts its emulation. But since your HHH must abort its emulation in order to answer,
    your criterion is just a bunch of meaningless gobbledygook.

    It seems that a major part of the problem is that you CHOSE to be ignorant of
    the rules of the system, and instead learned it by what you call "First
    Principles" (but you don't understand the term), apparently trying to derive the core principles of the system on your own. This is really a
    ZERO Principles analysis, and doesn't get you the information you
    actually need to use.


    ZFC did the same thing and successfully rejected the false assumption
    that a set can be a member of itself.

    The "First Principles" approach that you refer to STARTS with a study and understanding of the actual basic principles of the system. That would
    be things like the basic definitions of "Program", "Halting", "Deciding", and "Turing Machine"; then, from those concepts, it sees what
    can be done, without relying on the ideas that others have used,
    but checking whether they went down a wrong track and whether there was a different path within
    the same system.


    The actual barest essence for formal systems and computations
    is finite string transformation rules applied to finite strings.

    The next minimal increment of further elaboration is that some
    finite strings have an assigned or derived property of Boolean
    true. At this point of elaboration Boolean true has no more
    semantic meaning than FooBar.

    Some finite strings are assigned the FooBar property and other
    finite strings derive the FooBar property by applying FooBar
    preserving operations to the first set.

    Once finite strings have the FooBar property we can define
    computations that apply FooBar preserving operations to
    determine if other finite strings also have this FooBar property.
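    A minimal C sketch of that idea (everything here is invented for illustration: the single axiom string "A" is assigned the FooBar property, and one hypothetical FooBar-preserving rule appends 'B'; a string has FooBar iff it is derivable from the axiom by the rule):

```c
#include <stdio.h>
#include <string.h>

#define MAXLEN 32

/* One hypothetical FooBar-preserving transformation rule: s -> sB. */
static void rule_append_B(const char *in, char *out) {
    snprintf(out, MAXLEN, "%sB", in);
}

/* A string has the FooBar property iff it is the axiom "A" (assigned)
   or is reached from it by repeatedly applying the rule (derived). */
static int has_FooBar(const char *s) {
    char cur[MAXLEN] = "A";              /* the assigned (axiom) set */
    while (strlen(cur) <= strlen(s) && strlen(cur) + 1 < MAXLEN) {
        if (strcmp(cur, s) == 0)
            return 1;                    /* derived: has FooBar */
        char next[MAXLEN];
        rule_append_B(cur, next);        /* apply the preserving rule */
        strcpy(cur, next);
    }
    return 0;                            /* not derivable: lacks FooBar */
}
```

    With this toy system, has_FooBar("ABB") is 1 while has_FooBar("BA") is 0; the computation decides the property purely by finite string transformations, with "FooBar" carrying no semantic meaning beyond its derivation rules.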

    It seems you never even learned the First Principles of Logic Systems, because you don't understand that Formal Systems are built from their definitions, and those definitions cannot be changed while you stay
    in the same system.


    The actual First Principles are as I say they are: Finite string
    transformation rules applied to finite strings. What you are
    referring to are subsequent principles that have added more on
    top of the actual first principles.
    --
    Copyright 2024 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.20a-Linux NewsLink 1.114