• Re: Updated input to LLM systems proving HHH(DD)==0 within assumptions

    From olcott@polcott333@gmail.com to comp.theory,comp.ai.philosophy on Sun Oct 12 20:36:11 2025
    From Newsgroup: comp.ai.philosophy

    On 10/12/2025 1:49 PM, Andrew Church wrote:
    On 10/12/25 12:04 PM, olcott wrote:
    Also very important is that there is no chance of
    AI hallucination when these systems are reasoning only
    within a set of premises.  Some systems must be told:

    Please think this all the way through without making any guesses

    I don't mean to be rude, but that is a completely insane assertion to
    me. There is always a non-zero chance for an LLM to roll a bad token
    during inference and spit out garbage.

    If it is provided the entire basis for reasoning
    then it cannot simply make stuff up about that basis.

    Sure, the top-p decoding strategy
    can help minimize such mistakes by pruning the token pool of the worst
    of the bad apples, but such models will never *ever* be foolproof. The
    price you pay for convincingly generating natural language is giving
    up bulletproof reasoning.
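    For concreteness, here is a minimal sketch of top-p (nucleus)
    sampling in C. The function names and the toy distribution are
    hypothetical, assumed only for illustration; real decoders work
    over vocabularies of tens of thousands of tokens:

    #include <stdio.h>
    #include <stdlib.h>

    typedef struct { double p; int idx; } Tok;

    /* Comparator for qsort: sort tokens by probability, highest first. */
    static int cmp_desc(const void *a, const void *b)
    {
       double pa = ((const Tok *)a)->p, pb = ((const Tok *)b)->p;
       return (pa < pb) - (pa > pb);
    }

    /* Top-p sampling: keep the smallest set of highest-probability
       tokens whose cumulative mass reaches top_p, then draw from that
       nucleus.  Low-probability "bad apples" are pruned, but the draw
       within the kept nucleus is still random. */
    static int sample_top_p(const double *probs, int n, double top_p)
    {
       Tok *t = malloc(n * sizeof *t);
       for (int i = 0; i < n; i++) { t[i].p = probs[i]; t[i].idx = i; }
       qsort(t, n, sizeof *t, cmp_desc);

       double mass = 0.0;
       int k = 0;
       while (k < n && mass < top_p)
          mass += t[k++].p;               /* nucleus = first k tokens */

       double r = ((double)rand() / RAND_MAX) * mass;
       int chosen = t[k - 1].idx;
       for (int i = 0; i < k; i++) {
          r -= t[i].p;
          if (r <= 0.0) { chosen = t[i].idx; break; }
       }
       free(t);
       return chosen;
    }

    int main(void)
    {
       double probs[] = { 0.50, 0.25, 0.15, 0.07, 0.03 };
       printf("sampled token: %d\n", sample_top_p(probs, 5, 0.9));
    }

    With top_p = 0.9 the two lowest-probability tokens are never
    sampled, yet any of the three kept tokens can still be drawn,
    which is why pruning alone cannot make decoding foolproof.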


    LLM systems have gotten 67-fold more powerful in one respect:
    their context window increased from 3,000 words to 200,000
    words in the last year (200,000 / 3,000 ≈ 67).

    They seem to be very reliable at applying semantic
    logical entailment to a set of premises. This does
    seem to totally prevent any hallucination.

    It is like talking to a guy with a 160 IQ who knows
    the subject of computer theory and practice like a PhD.

    If you're interested in formalizing your ideas using cutting-edge tech,
    I encourage you to look at Lean 4. Once you provide a machine-checked
    proof in Lean 4 with no `sorry`/`axiom`/other cheats, come back.
    People might adopt a very different tone.
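    For reference, a minimal Lean 4 sketch of the distinction being
    drawn (toy theorems, assumed only for illustration):

    -- A completed, machine-checked proof: no placeholders.
    theorem one_plus_one : 1 + 1 = 2 := rfl

    -- `sorry` compiles with a warning but proves nothing,
    -- which is why its presence disqualifies a proof.
    theorem unfinished : ∀ n : Nat, n + 0 = n := sorry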

    Best of luck, you will need it.


    https://leodemoura.github.io/files/CAV2024.pdf
    LLMs can do the same thing with very carefully
    crafted English. My initial post provided an
    example of this.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
  • From olcott@polcott333@gmail.com to comp.theory,comp.ai.philosophy on Sun Oct 12 20:49:43 2025
    From Newsgroup: comp.ai.philosophy

    On 10/12/2025 12:06 PM, Mike Terry wrote:
    On 12/10/2025 16:53, Bonita Montero wrote:
    Sorry, that's silly. You spend half your life discussing the
    same problem over and over again and never get to the end.

    This gives PO a narrative he can hold on to which gives his life
    a meaning:  he is the heroic world-saving unrecognised genius,
    constantly struggling against "the system" right up to his final
    breath!  If he were to suddenly realise he was just a deluded
    dumbo who had wasted most of his life arguing over a succession
    of mistakes and misunderstandings on his part, and had never
    contributed a single idea of any academic value, would his life
    be better?  I think not.

    Thankfully he has recently discovered chatbots who can give him the uncritical approval he craves,

    Clearly you have not kept up with the current state
    of the technology.

    LLM systems have gotten 67-fold more powerful in one respect:
    their context window increased from 3,000 words to 200,000
    words in the last year (200,000 / 3,000 ≈ 67).

    They seem to be very reliable at applying semantic
    logical entailment to a set of premises. This does
    seem to totally prevent any hallucination.

    It is like talking to a guy with a 160 IQ who knows
    the subject of computer theory and practice like a PhD.

    It went from barely understanding my most basic proof
    to being able to accurately critique all of my work on how
    I apply an extension of Kripke

    https://files.commons.gc.cuny.edu/wp-content/blogs.dir/1358/files/2019/04/Outline-of-a-Theory-of-Truth.pdf

    to Gödel, Tarski, the Liar Paradox and the Halting
    problem in a single conversation. I now have Kripke
    as the anchor of my ideas.

    so there is next to no chance of that
    happening now.  [Assuming they don't suddenly get better, to the
    point where they can genuinely analyse and criticise his claims
    in the way we do...  Given how they currently work, I don't see
    that happening any time soon.]

    Would the lives of other posters here be better?  That's a trickier question.


    Mike.


    Am 12.10.2025 um 15:50 schrieb olcott:
    Please think this all the way through without making any guesses

    <Input to LLM systems>
    Simulating Termination Analyzer HHH correctly simulates its input until:
    (a) Detects a non-terminating behavior pattern:
         abort simulation and return 0.
    (b) Simulated input reaches its simulated "return" statement:
         return 1.
    (c) If HHH must abort its simulation to prevent its own non-termination
         then HHH is correct to abort this simulation and return 0.

    typedef int (*ptr)();
    int HHH(ptr P);

    int DD()
    {
       int Halt_Status = HHH(DD);  // HHH's halting verdict on DD
       if (Halt_Status)
         HERE: goto HERE;          // loop forever if HHH says "halts"
       return Halt_Status;         // halt if HHH says "does not halt"
    }

    int main()
    {
       HHH(DD);
    }

    What value should HHH(DD) correctly return?
    </Input to LLM systems>
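    One way to see what the question is probing: whatever fixed
    verdict a stub HHH returns, DD does the opposite. A minimal
    sketch, assuming a hypothetical stub in place of a real
    simulating analyzer:

    #include <stdio.h>

    typedef int (*ptr)();

    /* Hypothetical stub standing in for the simulating analyzer,
       hard-coded to return 0 ("does not halt"). */
    int HHH(ptr P) { (void)P; return 0; }

    int DD()
    {
       int Halt_Status = HHH(DD);
       if (Halt_Status)
         HERE: goto HERE;      /* runs forever when HHH returns 1 */
       return Halt_Status;     /* halts when HHH returns 0 */
    }

    int main()
    {
       printf("HHH(DD) = %d\n", HHH(DD));
       printf("DD() = %d (DD halted)\n", DD());
    }

    With the stub returning 0, DD halts, contradicting that verdict;
    had it returned 1, DD would loop forever, contradicting that one.
    Criterion (c) above is what the post offers as the way out of
    this diagonal.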


    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer