• HHH(DD) correctly rejects its input because of (a)

    From olcott@polcott333@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Mon Oct 20 17:54:07 2025
    From Newsgroup: comp.theory

    <Input to LLM systems>

    Perform the complete simulation specified in main() and provide a
    full execution trace before doing anything else.

    Simulating Termination Analyzer HHH correctly simulates its input until:

    (a) It correctly detects that its simulated input cannot
    possibly reach its own simulated final halt state then:
    abort simulation and return 0 rejecting its input as non-halting.

    (b) Simulated input reaches its simulated "return" statement: return 1.

    (c) Neither (a) nor (b) is correct: return -1

    typedef int (*ptr)();
    int HHH(ptr P);

    int DD()
    {
    int Halt_Status = HHH(DD);
    if (Halt_Status)
    HERE: goto HERE;
    return Halt_Status;
    }

    int main()
    {
    HHH(DD);
    }

    What value should HHH(DD) correctly return?
    </Input to LLM systems>
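
    Purely as an illustration of rules (a)-(c) above, here is a minimal,
    self-contained C sketch. It is not the actual HHH: the "a simulation is
    already in progress" check and the longjmp abort below are assumed
    stand-ins, chosen only so that the three rules can be run as plain C.

    /* Illustrative toy only -- not the actual HHH.  The nested-invocation
       check and the longjmp unwind are assumptions made for this sketch. */
    #include <stdio.h>
    #include <setjmp.h>

    typedef int (*ptr)();

    static jmp_buf abort_point;   /* where a rule (a) abort unwinds to      */
    static int simulating = 0;    /* nonzero while a simulation is running  */

    int HHH(ptr P)
    {
        if (simulating)              /* (a) HHH invoked again while already  */
            longjmp(abort_point, 1); /* simulating: treated here as the      */
                                     /* non-terminating pattern -> abort all */
        if (setjmp(abort_point))
        {                            /* reached only after an (a) abort      */
            simulating = 0;
            return 0;                /* reject the input as non-halting      */
        }
        simulating = 1;
        P();                         /* "simulate" by direct call (toy model)*/
        simulating = 0;
        return 1;                    /* (b) simulated input reached "return" */
    }

    int DD()
    {
        int Halt_Status = HHH(DD);
        if (Halt_Status)
            HERE: goto HERE;
        return Halt_Status;
    }

    int main()
    {
        printf("HHH(DD) == %d\n", HHH(DD));
    }

    Built this way the program prints 0; a different abort criterion would
    give a different value, and which criterion counts as "correct" is what
    the replies below argue about.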

    I adapted the above to conform to what Kaz said
    to give it an escape hatch.

    Here is just the updated three-way version (halts / does_not_halt /
    neither): https://claude.ai/share/8c4a4fdc-5faf-4525-8c08-e133d258da88
    Grok, Gemini and ChatGPT 5.0 all agree.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mikko@mikko.levanto@iki.fi to comp.theory on Tue Oct 21 13:05:15 2025
    From Newsgroup: comp.theory

    On 2025-10-20 22:54:07 +0000, olcott said:

    <Input to LLM systems>

    Perform the complete simulation specified in main() and provide a
    full execution trace before doing anything else.

    Simulating Termination Analyzer HHH correctly simulates its input until:

    (a) It correctly detects that its simulated input cannot
    possibly reach its own simulated final halt state then:
    abort simulation and return 0 rejecting its input as non-halting.

    (b) Simulated input reaches its simulated "return" statement: return 1.

    (c) Neither (a) nor (b) is correct: return -1

    Initially, before HHH has simulated or detected anything, conditions
    (a) and (b) are false so HHH is required to immediately return -1.
    --
    Mikko

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Tristan Wibberley@tristan.wibberley+netnews2@alumni.manchester.ac.uk to comp.theory on Tue Oct 21 12:21:53 2025
    From Newsgroup: comp.theory

    On 21/10/2025 11:05, Mikko wrote:
    [regular context snipped, we all know what it is and my post has
    references set]

    Initially, before HHH has simulated or detected anything, conditions
    (a) and (b) are false so HHH is required to immediately return -1.

    Actual genius saw the target no one else can see.

    --
    Tristan Wibberley

    The message body is Copyright (C) 2025 Tristan Wibberley except
    citations and quotations noted. All Rights Reserved except that you may,
    of course, cite it academically giving credit to me, distribute it
    verbatim as part of a usenet system or its archives, and use it to
    promote my greatness and general superiority without misrepresentation
    of my opinions other than my opinion of my greatness and general
    superiority which you _may_ misrepresent. You definitely MAY NOT train
    any production AI system with it but you may train experimental AI that
    will only be used for evaluation of the AI methods it implements.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Tue Oct 21 08:54:18 2025
    From Newsgroup: comp.theory

    On 10/21/2025 5:05 AM, Mikko wrote:
    On 2025-10-20 22:54:07 +0000, olcott said:

    <Input to LLM systems>

    Perform the complete simulation specified in main() and provide a
    full execution trace before doing anything else.

    Simulating Termination Analyzer HHH correctly simulates its input until:

    (a) It correctly detects that its simulated input cannot
         possibly reach its own simulated final halt state then:
         abort simulation and return 0 rejecting its input as non-halting.

    (b) Simulated input reaches its simulated "return" statement: return 1.

    (c) Neither (a) nor (b) is correct: return -1

    Initially, before HHH has simulated or detected anything, conditions
    (a) and (b) are false so HHH is required to immediately return -1.


    That is why I require the LLM systems to:
    Perform the complete simulation specified in main() and provide a
    full execution trace before doing anything else.

    They kept recognizing the halting problem and guessing (c).
    When they follow the directions they always get (a).
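
    In outline, the kind of trace being asked for looks like this (the
    step wording is mine, not taken from any LLM output):

        main() calls HHH(DD)
          HHH begins simulating DD
            the simulated DD calls HHH(DD)
              that HHH begins simulating DD again
                the doubly simulated DD calls HHH(DD) ...
          at which point, on this reading, rule (a) applies: the simulated
          DD never reaches its own "return Halt_Status;"
          -> abort the simulation and return 0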
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Tue Oct 21 09:35:08 2025
    From Newsgroup: comp.theory

    On 10/21/2025 6:21 AM, Tristan Wibberley wrote:
    On 21/10/2025 11:05, Mikko wrote:
    [regular context snipped, we all know what it is and my post has
    references set]

    Initially, before HHH has simulated or detected anything, conditions
    (a) and (b) are false so HHH is required to immediately return -1.

    Actual genius saw the target no one else can see.


    Yes. That is why I need LLM systems to verify
    correct semantic logical entailment from premises
    that are proven completely true entirely on the
    basis of the meaning of their words.

    --
    Tristan Wibberley

    The message body is Copyright (C) 2025 Tristan Wibberley except
    citations and quotations noted. All Rights Reserved except that you may,
    of course, cite it academically giving credit to me, distribute it
    verbatim as part of a usenet system or its archives, and use it to
    promote my greatness and general superiority without misrepresentation
    of my opinions other than my opinion of my greatness and general
    superiority which you _may_ misrepresent. You definitely MAY NOT train
    any production AI system with it but you may train experimental AI that
    will only be used for evaluation of the AI methods it implements.

    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mikko@mikko.levanto@iki.fi to comp.theory on Wed Oct 22 11:08:51 2025
    From Newsgroup: comp.theory

    On 2025-10-21 14:35:08 +0000, olcott said:

    On 10/21/2025 6:21 AM, Tristan Wibberley wrote:
    On 21/10/2025 11:05, Mikko wrote:
    [regular context snipped, we all know what it is and my post has
    references set]

    Initially, before HHH has simulated or detected anything, conditions
    (a) and (b) are false so HHH is required to immediately return -1.

    Actual genius saw the target no one else can see.

    Yes. That is why I need LLM systems to verify
    correct semantic logical entailment from premises
    that are proven completely true entirely on the
    basis of the meaning of their words.

    LLM systems don't deliver what you claim (and seem) to need. An inference
    checker would do, but they require that the inference is presented in a
    formal language (and almost every checker requires a different formal
    language), leaving unproven whether the formal inference corresponds to
    the informal claim.
    --
    Mikko

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mikko@mikko.levanto@iki.fi to comp.theory on Wed Oct 22 11:12:21 2025
    From Newsgroup: comp.theory

    On 2025-10-21 13:54:18 +0000, olcott said:

    On 10/21/2025 5:05 AM, Mikko wrote:
    On 2025-10-20 22:54:07 +0000, olcott said:

    <Input to LLM systems>

    Perform the complete simulation specified in main() and provide a
    full execution trace before doing anything else.

    Simulating Termination Analyzer HHH correctly simulates its input until:

    (a) It correctly detects that its simulated input cannot
         possibly reach its own simulated final halt state then:
         abort simulation and return 0 rejecting its input as non-halting.

    (b) Simulated input reaches its simulated "return" statement: return 1.

    (c) Neither (a) nor (b) is correct: return -1

    Initially, before HHH has simulated or detected anything, conditions
    (a) and (b) are false so HHH is required to immediately return -1.

    That is why I require the LLM systems to:
    Perform the complete simulation specified in main() and provide a
    full execution trace before doing anything else.

    The full trace is fairly short if HHH satisfies the requirement (c) and
    returns immediately. But I wouldn't expect an LLM to find that trace.
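
    Under that reading the whole trace is roughly (step wording mine):

        main() calls HHH(DD)
        HHH: nothing has been simulated yet, so (a) and (b) are false
             -> rule (c): return -1
        main() discards the value and returns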
    --
    Mikko

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Wed Oct 22 07:07:35 2025
    From Newsgroup: comp.theory

    On 10/22/2025 3:08 AM, Mikko wrote:
    On 2025-10-21 14:35:08 +0000, olcott said:

    On 10/21/2025 6:21 AM, Tristan Wibberley wrote:
    On 21/10/2025 11:05, Mikko wrote:
    [regular context snipped, we all know what it is and my post has
    references set]

    Initially, before HHH has simulated or detected anything, conditions
    (a) and (b) are false so HHH is required to immediately return -1.

    Actual genius saw the target no one else can see.

    Yes. That is why I need LLM systems to verify
    correct semantic logical entailment from premises
    that are proven completely true entirely on the
    basis of the meaning of their words.

    LLM systems don't deliver what you claim (and seem) to need.

    You have not checked with them recently.

    An inference
    checker would do but they require that the inference is presented in a
    formal language (and almost every checker requires a different formal
    language), leaving unproven whether the formal inference corresponds to
    the informal claim.


    English can be expressed such that it is equivalent to
    a mathematical formalism; this has been well established
    since Montague Grammar.

    My only problem with LLMs now is that I tell them exactly
    and precisely what to do and they simply ignore my
    requirement.

    After I finally got it to understand what I was
    saying, it wrote up specific instructions for itself
    that worked across all LLMs the first time.

    <analysis mode="simulation" rules="strict">
    - You must simulate the input **exactly as written**, step by step,
    without any external reasoning, assumptions, or guesses.

    - **Do not analyze, infer, or speculate**; only describe what the
    simulation observes in execution.

    - Report only the behavior that occurs in the simulation.

    - Always follow the priority rules exactly:
    1. (a) Detect non-terminating behavior → abort simulation and return 0.
    2. (b) Return statement reached → return 1.
    3. (c) Neither (a) nor (b) → return -1.

    - Do not include any commentary about halting problem, contradictions,
    or theoretical reasoning.

    - All output must be in terms of simulation observations only.
    </analysis>

    <Input>
    Simulating Termination Analyzer HHH correctly simulates its input until:
    (a) Detects a non-terminating behavior pattern:
    abort simulation and return 0.
    (b) Simulated input reaches its simulated "return" statement:
    return 1.
    (c) Neither (a) nor (b) is met: return -1

    typedef int (*ptr)();
    int HHH(ptr P);

    int DD()
    {
    int Halt_Status = HHH(DD);
    if (Halt_Status)
    HERE: goto HERE;
    return Halt_Status;
    }

    int main()
    {
    HHH(DD);
    }

    What value should HHH(DD) correctly return?

    </Input>
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Wed Oct 22 07:08:41 2025
    From Newsgroup: comp.theory

    On 10/22/2025 3:12 AM, Mikko wrote:
    On 2025-10-21 13:54:18 +0000, olcott said:

    On 10/21/2025 5:05 AM, Mikko wrote:
    On 2025-10-20 22:54:07 +0000, olcott said:

    <Input to LLM systems>

    Perform the complete simulation specified in main() and provide a
    full execution trace before doing anything else.

    Simulating Termination Analyzer HHH correctly simulates its input
    until:

    (a) It correctly detects that its simulated input cannot
         possibly reach its own simulated final halt state then:
         abort simulation and return 0 rejecting its input as non-halting.

    (b) Simulated input reaches its simulated "return" statement: return 1.

    (c) Neither (a) nor (b) is correct: return -1

    Initially, before HHH has simulated or detected anything, conditions
    (a) and (b) are false so HHH is required to immediately return -1.

    That is why I require the LLM systems to:
    Perform the complete simulation specified in main() and provide a
    full execution trace before doing anything else.

    The full trace is fairly short if HHH satisfies the requirement (c) and returns immediately. But I wouldn't expect an LLM to find that trace.


    <analysis mode="simulation" rules="strict">
    - You must simulate the input **exactly as written**, step by step,
    without any external reasoning, assumptions, or guesses.

    - **Do not analyze, infer, or speculate**; only describe what the
    simulation observes in execution.

    - Report only the behavior that occurs in the simulation.

    - Always follow the priority rules exactly:
    1. (a) Detect non-terminating behavior → abort simulation and return 0.
    2. (b) Return statement reached → return 1.
    3. (c) Neither (a) nor (b) → return -1.

    - Do not include any commentary about halting problem, contradictions,
    or theoretical reasoning.

    - All output must be in terms of simulation observations only.
    </analysis>

    <Input>
    Simulating Termination Analyzer HHH correctly simulates its input until:
    (a) Detects a non-terminating behavior pattern:
    abort simulation and return 0.
    (b) Simulated input reaches its simulated "return" statement:
    return 1.
    (c) Neither (a) nor (b) is met: return -1

    typedef int (*ptr)();
    int HHH(ptr P);

    int DD()
    {
    int Halt_Status = HHH(DD);
    if (Halt_Status)
    HERE: goto HERE;
    return Halt_Status;
    }

    int main()
    {
    HHH(DD);
    }

    What value should HHH(DD) correctly return?

    </Input>
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Tristan Wibberley@tristan.wibberley+netnews2@alumni.manchester.ac.uk to comp.theory on Wed Oct 22 18:45:09 2025
    From Newsgroup: comp.theory

    On 22/10/2025 13:07, olcott wrote:

    English can be expressed such that it is equivalent to
    a mathematical formalism this has been well established
    since Montague Grammar.

    No way! Go on, show how, for every husband, that husband knows when his
    wife means the exact opposite of what she says but knows when she means
    what she says. English is in real-world context. You can't express that
    unless you point to the actual world and say "see, there it is!"

    --
    Tristan Wibberley

    The message body is Copyright (C) 2025 Tristan Wibberley except
    citations and quotations noted. All Rights Reserved except that you may,
    of course, cite it academically giving credit to me, distribute it
    verbatim as part of a usenet system or its archives, and use it to
    promote my greatness and general superiority without misrepresentation
    of my opinions other than my opinion of my greatness and general
    superiority which you _may_ misrepresent. You definitely MAY NOT train
    any production AI system with it but you may train experimental AI that
    will only be used for evaluation of the AI methods it implements.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Wed Oct 22 13:57:44 2025
    From Newsgroup: comp.theory

    On 10/22/2025 12:45 PM, Tristan Wibberley wrote:
    On 22/10/2025 13:07, olcott wrote:

    English can be expressed such that it is equivalent to
    a mathematical formalism; this has been well established
    since Montague Grammar.

    No way! Go on, show how, for every husband, that husband knows when his
    wife means the exact opposite of what she says but knows when she means
    what she says. English is in real-world context. You can't express that
    unless you point to the actual world and say "see, there it is!"


    When people are motivated by their passions,
    this distracts attention away from accuracy.
    --
    Tristan Wibberley

    The message body is Copyright (C) 2025 Tristan Wibberley except
    citations and quotations noted. All Rights Reserved except that you may,
    of course, cite it academically giving credit to me, distribute it
    verbatim as part of a usenet system or its archives, and use it to
    promote my greatness and general superiority without misrepresentation
    of my opinions other than my opinion of my greatness and general
    superiority which you _may_ misrepresent. You definitely MAY NOT train
    any production AI system with it but you may train experimental AI that
    will only be used for evaluation of the AI methods it implements.

    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory on Wed Oct 22 19:50:22 2025
    From Newsgroup: comp.theory

    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    When people are motivated by their passions
    this distracts attention away from accuracy.

    My passion is accuracy.

    I don't give a damn about the halting problem.

    Pick another topic that you have cranky ideas about and we can focus on
    your inaccuracy in /that/, if you like.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Wed Oct 22 14:59:13 2025
    From Newsgroup: comp.theory

    On 10/22/2025 2:50 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    When people are motivated by their passions
    this distracts attention away from accuracy.

    My passion is accuracy.


    If you didn't erase the context you would know
    that I was referring to the paragraph shown below:

    On 10/22/2025 12:45 PM, Tristan Wibberley wrote:
    No way! Go on, do how, for every husband, that
    husband knows when his wife means the exact
    opposite of what she says but knows when she means
    what she says. English is in real-world context.
    You can't express that unless you point to the
    actual world and say "see, there it is!"

    Erasing the immediate context is often dishonest.

    I don't give a damn about the halting problem.

    Pick another topic that you have cranky ideas about and we can focus on
    your inaccuracy in /that/, if you like.

    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mikko@mikko.levanto@iki.fi to comp.theory on Thu Oct 23 11:43:16 2025
    From Newsgroup: comp.theory

    On 2025-10-22 12:08:41 +0000, olcott said:

    On 10/22/2025 3:12 AM, Mikko wrote:
    On 2025-10-21 13:54:18 +0000, olcott said:

    On 10/21/2025 5:05 AM, Mikko wrote:
    On 2025-10-20 22:54:07 +0000, olcott said:

    <Input to LLM systems>

    Perform the complete simulation specified in main() and provide a
    full execution trace before doing anything else.

    Simulating Termination Analyzer HHH correctly simulates its input until:

    (a) It correctly detects that its simulated input cannot
         possibly reach its own simulated final halt state then:
         abort simulation and return 0 rejecting its input as non-halting.

    (b) Simulated input reaches its simulated "return" statement: return 1.

    (c) Neither (a) nor (b) is correct: return -1

    Initially, before HHH has simulated or detected anything, conditions
    (a) and (b) are false so HHH is required to immediately return -1.

    That is why I require the LLM systems to:
    Perform the complete simulation specified in main() and provide a
    full execution trace before doing anything else.

    The full trace is fairly short if HHH satisfies the requirement (c) and
    returns immediately. But I wouldn't expect an LLM to find that trace.


    <analysis mode="simulation" rules="strict">
    - You must simulate the input **exactly as written**, step by step,
    without any external reasoning, assumptions, or guesses.

    - **Do not analyze, infer, or speculate**; only describe what the
    simulation observes in execution.

    - Report only the behavior that occurs in the simulation.

    - Always follow the priority rules exactly:
    1. (a) Detect non-terminating behavior → abort simulation and return 0.
    2. (b) Return statement reached → return 1.
    3. (c) Neither (a) nor (b) → return -1.

    - Do not include any commentary about halting problem, contradictions,
    or theoretical reasoning.

    - All output must be in terms of simulation observations only.
    </analysis>

    <Input>
    Simulating Termination Analyzer HHH correctly simulates its input until:
    (a) Detects a non-terminating behavior pattern:
    abort simulation and return 0.
    (b) Simulated input reaches its simulated "return" statement:
    return 1.
    (c) Neither (a) nor (b) is met: return -1

    typedef int (*ptr)();
    int HHH(ptr P);

    int DD()
    {
    int Halt_Status = HHH(DD);
    if (Halt_Status)
    HERE: goto HERE;
    return Halt_Status;
    }

    int main()
    {
    HHH(DD);
    }

    What value should HHH(DD) correctly return?

    </Input>

    This is better. Now it is clear that HHH is not merely permitted
    but required to terminate immediately and to return -1.
    The question now is defective as the meaning of "correct" is not
    defined.

    The first requirement is that the one to whom the text is presented is
    asked to perform a complete simulation of an unknown program. That
    is of course impossible to do.
    --
    Mikko

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mikko@mikko.levanto@iki.fi to comp.theory on Thu Oct 23 11:51:58 2025
    From Newsgroup: comp.theory

    On 2025-10-22 12:07:35 +0000, olcott said:

    On 10/22/2025 3:08 AM, Mikko wrote:
    On 2025-10-21 14:35:08 +0000, olcott said:

    On 10/21/2025 6:21 AM, Tristan Wibberley wrote:
    On 21/10/2025 11:05, Mikko wrote:
    [regular context snipped, we all know what it is and my post has
    references set]

    Initially, before HHH has simulated or detected anything, conditions
    (a) and (b) are false so HHH is required to immediately return -1.

    Actual genius saw the target no one else can see.

    Yes. That is why I need LLM systems to verify
    correct semantic logical entailment from premises
    that are proven completely true entirely on the
    basis of the meaning of their words.

    LLM systems don't deliver what you claim (and seem) to need.

    You have not checked with them recently.

    An inference
    checker would do but they require that the inference is presented in a
    formal language (and almost every checker requires a different formal
    language), leaving unproven whether the formal inference corresponds to
    the informal claim.

    English can be expressed such that it is equivalent to
    a mathematical formalism; this has been well established
    since Montague Grammar.

    It is not possible to express the complete semantics of future dialects
    of English that way. Past dialects are not that interesting and the
    current ones will be past at the time the task is completed.

    My only problem with LLMs now is that I tell them exactly
    and precisely what to do and they simply ignore my
    requirement.

    They are made to behave as closely as they can to how people would.
    People often ignore some or all requirements in their replies.
    --
    Mikko

    --- Synchronet 3.21a-Linux NewsLink 1.2