• Never any actual rebuttal to HHH(DD)==0 Since 10/13/2022

    From olcott@polcott333@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Mon Oct 20 22:00:16 2025
    From Newsgroup: comp.ai.philosophy

    On 10/20/2025 9:11 PM, Kaz Kylheku wrote:
    On 2025-10-21, olcott <polcott333@gmail.com> wrote:
    On 10/20/2025 8:27 PM, Kaz Kylheku wrote:
    On 2025-10-20, olcott <polcott333@gmail.com> wrote:
    On 10/20/2025 4:03 PM, Kaz Kylheku wrote:
    On 2025-10-20, olcott <polcott333@gmail.com> wrote:
    On 10/20/2025 1:29 PM, Kaz Kylheku wrote:
    On 2025-10-20, olcott <polcott333@gmail.com> wrote:
    On 10/19/2025 2:39 PM, Kaz Kylheku wrote:
On 2025-10-19, dart200 <user7160@newsgrouper.org.invalid> wrote:
i don't get y polcott keep hanging onto ai for dear life. anyone with

    Throngs of dumb boomers are falling for AI generated videos, believing
    them to be real. This is much the same thing.

AI is just another thing Olcott has no understanding of. He's not
researched the fundamentals of what it means to train a language
network, and how it is ultimately just token prediction.

It excels at generating good syntax. The reason for that is that the
vast amount of training data exhibits good syntax. (Where it has bad
syntax, it is idiosyncratic; whereas good syntax is broadly shared.)

    I provide a basis to it and it does perform valid
    semantic logical entailment on this basis and shows

But you're incapable of recognizing valid entailment from invalid.

    Any freaking idiot can spew out baseless rhetoric
    such as this. I could do the same sort of thing
    and say you are wrong and stupidly wrong.

    But you don't?

    It is a whole other ballgame when one attempts
    to point out actual errors that are not anchored
    in one's own lack of comprehension.

    You don't comprehend the pointing-out.


    You need to have a sound reasoning basis to prove
    that an error is an actual error.

    No; /YOU/ need to have sound reasonings to prove /YOUR/
    extraordinary claims. The burden is on you.

    We already have the solid reasoning which says things are other than as
    you say, and you don't have the faintest idea how to put a dent in it.


    In other words you assume that I must be wrong
    entirely on the basis that what I say does not
    conform to conventional wisdom.

    Yes; you are wrong entirely on the basis that what you say does not
    follow a valid mode of inference for refuting an argument.

    If you are trying to refute something which is not only a widely
    accepted result, but whose reasoning anyone can follow to see it
    for themselves, you are automatically assumed wrong.

    The established result is presumed correct, pending your
    presentation of a convincing argument.

    That's not just wanton arbitrariness: your claims are being
    directly refuted by elements of the established result which
    we can refer to.

    I cannot identify any flaw in the halting theorem. It's not simply
    that I believe it because of the Big Names attached to it.


And when I identify a flaw you simply ignore
    whatever I say.

    I'm convinced by the argumentation; and that conviction has
    the side effect of convincing me of the falsehood of your
    ineffective, contrary argumentation.


    Not really it actually gives you the bias to refuse
    to pay attention.

    That is not any actual rebuttal of the specific points that I make.

    No, indeed /that/ isn't; but plenty of those have also been made not
    only by me but various others, over a considerable time span.


    Never any actual rebuttal ever since Professor
    Sipser agreed with my words. Those exact same
    words still form the basis of my whole proof.

    *Here is an accurate paraphrase of those words*

    <Input to LLM systems>

    Perform the complete simulation specified in main() and provide a
    full execution trace before doing anything else.

    Simulating Termination Analyzer HHH correctly simulates its input until:

    (a) It correctly detects that its simulated input cannot
    possibly reach its own simulated final halt state then:
    abort simulation and return 0 rejecting its input as non-halting.

    (b) Simulated input reaches its simulated "return" statement: return 1.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dbush@dbush.mobile@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Mon Oct 20 23:05:09 2025
    From Newsgroup: comp.ai.philosophy

    On 10/20/2025 11:00 PM, olcott wrote:
    On 10/20/2025 9:11 PM, Kaz Kylheku wrote:
    On 2025-10-21, olcott <polcott333@gmail.com> wrote:
    On 10/20/2025 8:27 PM, Kaz Kylheku wrote:
    On 2025-10-20, olcott <polcott333@gmail.com> wrote:
    On 10/20/2025 4:03 PM, Kaz Kylheku wrote:
    On 2025-10-20, olcott <polcott333@gmail.com> wrote:
    On 10/20/2025 1:29 PM, Kaz Kylheku wrote:
    On 2025-10-20, olcott <polcott333@gmail.com> wrote:
    On 10/19/2025 2:39 PM, Kaz Kylheku wrote:
On 2025-10-19, dart200 <user7160@newsgrouper.org.invalid> wrote:
i don't get y polcott keep hanging onto ai for dear life. anyone with

Throngs of dumb boomers are falling for AI generated videos, believing
    them to be real. This is much the same thing.

AI is just another thing Olcott has no understanding of. He's not
researched the fundamentals of what it means to train a language
network, and how it is ultimately just token prediction.

It excels at generating good syntax. The reason for that is that the
vast amount of training data exhibits good syntax. (Where it has bad
syntax, it is idiosyncratic; whereas good syntax is broadly shared.)


    I provide a basis to it and it does perform valid
    semantic logical entailment on this basis and shows

But you're incapable of recognizing valid entailment from invalid.

    Any freaking idiot can spew out baseless rhetoric
    such as this. I could do the same sort of thing
    and say you are wrong and stupidly wrong.

    But you don't?

    It is a whole other ballgame when one attempts
    to point out actual errors that are not anchored
    in one's own lack of comprehension.

    You don't comprehend the pointing-out.


    You need to have a sound reasoning basis to prove
    that an error is an actual error.

    No; /YOU/ need to have sound reasonings to prove /YOUR/
    extraordinary claims. The burden is on you.

We already have the solid reasoning which says things are other than as
you say, and you don't have the faintest idea how to put a dent in it.

    In other words you assume that I must be wrong
    entirely on the basis that what I say does not
    conform to conventional wisdom.

    Yes; you are wrong entirely on the basis that what you say does not
    follow a valid mode of inference for refuting an argument.

    If you are trying to refute something which is not only a widely
    accepted result, but whose reasoning anyone can follow to see it
    for themselves, you are automatically assumed wrong.

    The established result is presumed correct, pending your
    presentation of a convincing argument.

    That's not just wanton arbitrariness: your claims are being
    directly refuted by elements of the established result which
    we can refer to.

    I cannot identify any flaw in the halting theorem. It's not simply
    that I believe it because of the Big Names attached to it.


And when I identify a flaw you simply ignore
    whatever I say.

    I'm convinced by the argumentation; and that conviction has
    the side effect of convincing me of the falsehood of your
    ineffective, contrary argumentation.


    Not really it actually gives you the bias to refuse
    to pay attention.

    That is not any actual rebuttal of the specific points that I make.

    No, indeed /that/ isn't; but plenty of those have also been made not
    only by me but various others, over a considerable time span.


    Never any actual rebuttal ever since Professor
    Sipser agreed with my words. Those exact same
    words still form the basis of my whole proof.

    You mean the words where he didn't agree with your interpretation of them?



    On Monday, March 6, 2023 at 2:41:27 PM UTC-5, Ben Bacarisse wrote:
    Fritz Feldhase <franz.fri...@gmail.com> writes:

    On Monday, March 6, 2023 at 3:56:52 AM UTC+1, olcott wrote:
    On 3/5/2023 8:33 PM, Fritz Feldhase wrote:
    On Monday, March 6, 2023 at 3:30:38 AM UTC+1, olcott wrote:

    I needed Sipser for people [bla]

    Does Sipser support your view/claim that you have refuted the
    halting theorem?

    Does he write/teach that the halting theorem is invalid?

    Tell us, oh genius!

    Professor Sipser only agreed that [...]

    So the answer is no. Noted.

    Because he has >250 students he did not have time to examine anything
    else. [...]

    Oh, a CS professor does not have the time to check a refutation of the halting theorem. *lol*
    I exchanged emails with him about this. He does not agree with anything substantive that PO has written. I won't quote him, as I don't have permission, but he was, let's say... forthright, in his reply to me.


    On 8/23/2024 5:07 PM, Ben Bacarisse wrote:
    joes <noreply@example.org> writes:

    Am Wed, 21 Aug 2024 20:55:52 -0500 schrieb olcott:

    Professor Sipser clearly agreed that an H that does a finite simulation
    of D is to predict the behavior of an unlimited simulation of D.

    If the simulator *itself* would not abort. The H called by D is,
    by construction, the same and *does* abort.

We don't really know what context Sipser was given. I got in touch at
the time, so I do know he had enough context to know that PO's ideas were "wacky" and that he had agreed to what he considered a "minor remark".

Since PO considers his words finely crafted and key to his so-called
work, I think it's clear that Sipser did not take the "minor remark" he agreed to to mean what PO takes it to mean! My own take is that he
(Sipser) read it as a general remark about how to determine some cases,
i.e. that D names an input that H can partially simulate to determine
its halting or otherwise. We all know or could construct some such
cases.

I suspect he was tricked because PO used H and D as the names without
making it clear that D was constructed from H in the usual way (Sipser
uses H and D in at least one of his proofs). Of course, he is clued in enough to know that, if D is indeed constructed from H like that, the
"minor remark" becomes true by being a hypothetical: if the moon is made
of cheese, the Martians can look forward to a fine fondue. But,
personally, I think the professor is more straight talking than that,
and he simply took it as a method that can work for some inputs. That's
the only way it could be seen as a "minor remark" without being accused of being disingenuous.

    On 8/23/2024 9:10 PM, Mike Terry wrote:
    So that PO will have no cause to quote me as supporting his case: what Sipser understood he was agreeing to was NOT what PO interprets it as meaning. Sipser would not agree that the conclusion applies in PO's HHH(DDD) scenario, where DDD halts.

    On 5/2/2025 9:16 PM, Mike Terry wrote:
    PO is trying to interpret Sipser's quote:

    --- Start Sipser quote
    If simulating halt decider H correctly simulates its input D
    until H correctly determines that its simulated D would never
    stop running unless aborted then

    H can abort its simulation of D and correctly report that D
    specifies a non-halting sequence of configurations.
    --- End Sipser quote

    The following interpretation is ok:

    If H is given input D, and while simulating D gathers enough
    information to deduce that UTM(D) would never halt, then
    H can abort its simulation and decide D never halts.

    I'd say it's obvious that this is what Sipser is saying, because it's natural, correct, and relevant to what was being discussed (valid
    strategy for a simulating halt decider). It is trivial to check that
    what my interpretation says is valid:

    if UTM(D) would never halt, then D never halts, so if H(D) returns
    never_halts then that is the correct answer for the input. QED :)

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Mon Oct 20 22:13:26 2025
    From Newsgroup: comp.ai.philosophy

    On 10/20/2025 10:05 PM, dbush wrote:
    On 10/20/2025 11:00 PM, olcott wrote:
    On 10/20/2025 9:11 PM, Kaz Kylheku wrote:
    On 2025-10-21, olcott <polcott333@gmail.com> wrote:
    On 10/20/2025 8:27 PM, Kaz Kylheku wrote:
    On 2025-10-20, olcott <polcott333@gmail.com> wrote:
    On 10/20/2025 4:03 PM, Kaz Kylheku wrote:
    On 2025-10-20, olcott <polcott333@gmail.com> wrote:
    On 10/20/2025 1:29 PM, Kaz Kylheku wrote:
    On 2025-10-20, olcott <polcott333@gmail.com> wrote:
    On 10/19/2025 2:39 PM, Kaz Kylheku wrote:
On 2025-10-19, dart200 <user7160@newsgrouper.org.invalid> wrote:
i don't get y polcott keep hanging onto ai for dear life. anyone with

Throngs of dumb boomers are falling for AI generated videos, believing
    them to be real. This is much the same thing.

AI is just another thing Olcott has no understanding of. He's not
researched the fundamentals of what it means to train a language
network, and how it is ultimately just token prediction.

It excels at generating good syntax. The reason for that is that the
vast amount of training data exhibits good syntax. (Where it has bad
syntax, it is idiosyncratic; whereas good syntax is broadly shared.)


    I provide a basis to it and it does perform valid
    semantic logical entailment on this basis and shows

But you're incapable of recognizing valid entailment from invalid.

    Any freaking idiot can spew out baseless rhetoric
    such as this. I could do the same sort of thing
    and say you are wrong and stupidly wrong.

    But you don't?

    It is a whole other ballgame when one attempts
    to point out actual errors that are not anchored
    in one's own lack of comprehension.

    You don't comprehend the pointing-out.


    You need to have a sound reasoning basis to prove
    that an error is an actual error.

    No; /YOU/ need to have sound reasonings to prove /YOUR/
    extraordinary claims. The burden is on you.

We already have the solid reasoning which says things are other than as
you say, and you don't have the faintest idea how to put a dent in it.

    In other words you assume that I must be wrong
    entirely on the basis that what I say does not
    conform to conventional wisdom.

    Yes; you are wrong entirely on the basis that what you say does not
    follow a valid mode of inference for refuting an argument.

    If you are trying to refute something which is not only a widely
    accepted result, but whose reasoning anyone can follow to see it
    for themselves, you are automatically assumed wrong.

    The established result is presumed correct, pending your
    presentation of a convincing argument.

    That's not just wanton arbitrariness: your claims are being
    directly refuted by elements of the established result which
    we can refer to.

    I cannot identify any flaw in the halting theorem. It's not simply
    that I believe it because of the Big Names attached to it.


And when I identify a flaw you simply ignore
    whatever I say.

    I'm convinced by the argumentation; and that conviction has
    the side effect of convincing me of the falsehood of your
    ineffective, contrary argumentation.


    Not really it actually gives you the bias to refuse
    to pay attention.

    That is not any actual rebuttal of the specific points that I make.

    No, indeed /that/ isn't; but plenty of those have also been made not
    only by me but various others, over a considerable time span.


    Never any actual rebuttal ever since Professor
    Sipser agreed with my words. Those exact same
    words still form the basis of my whole proof.

    You mean the words where he didn't agree with your interpretation of them?



    According to a Claude AI analysis there
    are only two interpretations and one of
    them is wrong and the other one is my
    interpretation.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dbush@dbush.mobile@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Mon Oct 20 23:16:13 2025
    From Newsgroup: comp.ai.philosophy

    On 10/20/2025 11:13 PM, olcott wrote:
    On 10/20/2025 10:05 PM, dbush wrote:
    On 10/20/2025 11:00 PM, olcott wrote:
    On 10/20/2025 9:11 PM, Kaz Kylheku wrote:
    On 2025-10-21, olcott <polcott333@gmail.com> wrote:
    On 10/20/2025 8:27 PM, Kaz Kylheku wrote:
    On 2025-10-20, olcott <polcott333@gmail.com> wrote:
    On 10/20/2025 4:03 PM, Kaz Kylheku wrote:
    On 2025-10-20, olcott <polcott333@gmail.com> wrote:
    On 10/20/2025 1:29 PM, Kaz Kylheku wrote:
    On 2025-10-20, olcott <polcott333@gmail.com> wrote:
    On 10/19/2025 2:39 PM, Kaz Kylheku wrote:
On 2025-10-19, dart200 <user7160@newsgrouper.org.invalid> wrote:
i don't get y polcott keep hanging onto ai for dear life. anyone with

Throngs of dumb boomers are falling for AI generated videos, believing
    them to be real. This is much the same thing.

AI is just another thing Olcott has no understanding of. He's not
researched the fundamentals of what it means to train a language
network, and how it is ultimately just token prediction.

It excels at generating good syntax. The reason for that is that the
vast amount of training data exhibits good syntax. (Where it has bad
syntax, it is idiosyncratic; whereas good syntax is broadly shared.)


    I provide a basis to it and it does perform valid
    semantic logical entailment on this basis and shows

But you're incapable of recognizing valid entailment from invalid.


    Any freaking idiot can spew out baseless rhetoric
    such as this. I could do the same sort of thing
    and say you are wrong and stupidly wrong.

    But you don't?

    It is a whole other ballgame when one attempts
    to point out actual errors that are not anchored
    in one's own lack of comprehension.

    You don't comprehend the pointing-out.


    You need to have a sound reasoning basis to prove
    that an error is an actual error.

    No; /YOU/ need to have sound reasonings to prove /YOUR/
    extraordinary claims. The burden is on you.

We already have the solid reasoning which says things are other than as
you say, and you don't have the faintest idea how to put a dent in it.


    In other words you assume that I must be wrong
    entirely on the basis that what I say does not
    conform to conventional wisdom.

    Yes; you are wrong entirely on the basis that what you say does not
    follow a valid mode of inference for refuting an argument.

    If you are trying to refute something which is not only a widely
    accepted result, but whose reasoning anyone can follow to see it
    for themselves, you are automatically assumed wrong.

    The established result is presumed correct, pending your
    presentation of a convincing argument.

    That's not just wanton arbitrariness: your claims are being
    directly refuted by elements of the established result which
    we can refer to.

    I cannot identify any flaw in the halting theorem. It's not simply
    that I believe it because of the Big Names attached to it.


And when I identify a flaw you simply ignore
    whatever I say.

    I'm convinced by the argumentation; and that conviction has
    the side effect of convincing me of the falsehood of your
    ineffective, contrary argumentation.


    Not really it actually gives you the bias to refuse
    to pay attention.

    That is not any actual rebuttal of the specific points that I make.

    No, indeed /that/ isn't; but plenty of those have also been made not
    only by me but various others, over a considerable time span.


    Never any actual rebuttal ever since Professor
    Sipser agreed with my words. Those exact same
    words still form the basis of my whole proof.

    You mean the words where he didn't agree with your interpretation of
    them?



    According to a Claude AI analysis there
    are only two interpretations and one of
    them is wrong and the other one is my
    interpretation.



    Whether you think one interpretation is wrong is irrelevant. What is
    relevant is that that's how everyone else including Sipser interpreted
    those words, so you lie by implying that he agrees with your interpretation.



    On Monday, March 6, 2023 at 2:41:27 PM UTC-5, Ben Bacarisse wrote:
    Fritz Feldhase <franz.fri...@gmail.com> writes:

    On Monday, March 6, 2023 at 3:56:52 AM UTC+1, olcott wrote:
    On 3/5/2023 8:33 PM, Fritz Feldhase wrote:
    On Monday, March 6, 2023 at 3:30:38 AM UTC+1, olcott wrote:

    I needed Sipser for people [bla]

    Does Sipser support your view/claim that you have refuted the
    halting theorem?

    Does he write/teach that the halting theorem is invalid?

    Tell us, oh genius!

    Professor Sipser only agreed that [...]

    So the answer is no. Noted.

    Because he has >250 students he did not have time to examine anything
    else. [...]

    Oh, a CS professor does not have the time to check a refutation of the halting theorem. *lol*
    I exchanged emails with him about this. He does not agree with anything substantive that PO has written. I won't quote him, as I don't have permission, but he was, let's say... forthright, in his reply to me.


    On 8/23/2024 5:07 PM, Ben Bacarisse wrote:
    joes <noreply@example.org> writes:

    Am Wed, 21 Aug 2024 20:55:52 -0500 schrieb olcott:

    Professor Sipser clearly agreed that an H that does a finite simulation
    of D is to predict the behavior of an unlimited simulation of D.

    If the simulator *itself* would not abort. The H called by D is,
    by construction, the same and *does* abort.

We don't really know what context Sipser was given. I got in touch at
the time, so I do know he had enough context to know that PO's ideas were "wacky" and that he had agreed to what he considered a "minor remark".

Since PO considers his words finely crafted and key to his so-called
work, I think it's clear that Sipser did not take the "minor remark" he agreed to to mean what PO takes it to mean! My own take is that he
(Sipser) read it as a general remark about how to determine some cases,
i.e. that D names an input that H can partially simulate to determine
its halting or otherwise. We all know or could construct some such
cases.

I suspect he was tricked because PO used H and D as the names without
making it clear that D was constructed from H in the usual way (Sipser
uses H and D in at least one of his proofs). Of course, he is clued in enough to know that, if D is indeed constructed from H like that, the
"minor remark" becomes true by being a hypothetical: if the moon is made
of cheese, the Martians can look forward to a fine fondue. But,
personally, I think the professor is more straight talking than that,
and he simply took it as a method that can work for some inputs. That's
the only way it could be seen as a "minor remark" without being accused of being disingenuous.

    On 8/23/2024 9:10 PM, Mike Terry wrote:
    So that PO will have no cause to quote me as supporting his case: what Sipser understood he was agreeing to was NOT what PO interprets it as meaning. Sipser would not agree that the conclusion applies in PO's HHH(DDD) scenario, where DDD halts.

    On 5/2/2025 9:16 PM, Mike Terry wrote:
    PO is trying to interpret Sipser's quote:

    --- Start Sipser quote
    If simulating halt decider H correctly simulates its input D
    until H correctly determines that its simulated D would never
    stop running unless aborted then

    H can abort its simulation of D and correctly report that D
    specifies a non-halting sequence of configurations.
    --- End Sipser quote

    The following interpretation is ok:

    If H is given input D, and while simulating D gathers enough
    information to deduce that UTM(D) would never halt, then
    H can abort its simulation and decide D never halts.

    I'd say it's obvious that this is what Sipser is saying, because it's natural, correct, and relevant to what was being discussed (valid
    strategy for a simulating halt decider). It is trivial to check that
    what my interpretation says is valid:

    if UTM(D) would never halt, then D never halts, so if H(D) returns
    never_halts then that is the correct answer for the input. QED :)

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Tue Oct 21 03:20:51 2025
    From Newsgroup: comp.ai.philosophy

    On 2025-10-21, olcott <polcott333@gmail.com> wrote:
    On 10/20/2025 9:11 PM, Kaz Kylheku wrote:
    On 2025-10-21, olcott <polcott333@gmail.com> wrote:
    On 10/20/2025 8:27 PM, Kaz Kylheku wrote:
    On 2025-10-20, olcott <polcott333@gmail.com> wrote:
    On 10/20/2025 4:03 PM, Kaz Kylheku wrote:
    On 2025-10-20, olcott <polcott333@gmail.com> wrote:
    On 10/20/2025 1:29 PM, Kaz Kylheku wrote:
    On 2025-10-20, olcott <polcott333@gmail.com> wrote:
    On 10/19/2025 2:39 PM, Kaz Kylheku wrote:
On 2025-10-19, dart200 <user7160@newsgrouper.org.invalid> wrote:
i don't get y polcott keep hanging onto ai for dear life. anyone with

    Throngs of dumb boomers are falling for AI generated videos, believing
    them to be real. This is much the same thing.

AI is just another thing Olcott has no understanding of. He's not
researched the fundamentals of what it means to train a language
network, and how it is ultimately just token prediction.

It excels at generating good syntax. The reason for that is that the
vast amount of training data exhibits good syntax. (Where it has bad
syntax, it is idiosyncratic; whereas good syntax is broadly shared.)

    I provide a basis to it and it does perform valid
    semantic logical entailment on this basis and shows

But you're incapable of recognizing valid entailment from invalid.

    Any freaking idiot can spew out baseless rhetoric
    such as this. I could do the same sort of thing
    and say you are wrong and stupidly wrong.

    But you don't?

    It is a whole other ballgame when one attempts
    to point out actual errors that are not anchored
    in one's own lack of comprehension.

    You don't comprehend the pointing-out.


    You need to have a sound reasoning basis to prove
    that an error is an actual error.

    No; /YOU/ need to have sound reasonings to prove /YOUR/
    extraordinary claims. The burden is on you.

We already have the solid reasoning which says things are other than as
you say, and you don't have the faintest idea how to put a dent in it.

    In other words you assume that I must be wrong
    entirely on the basis that what I say does not
    conform to conventional wisdom.

    Yes; you are wrong entirely on the basis that what you say does not
    follow a valid mode of inference for refuting an argument.

    If you are trying to refute something which is not only a widely
    accepted result, but whose reasoning anyone can follow to see it
    for themselves, you are automatically assumed wrong.

    The established result is presumed correct, pending your
    presentation of a convincing argument.

    That's not just wanton arbitrariness: your claims are being
    directly refuted by elements of the established result which
    we can refer to.

    I cannot identify any flaw in the halting theorem. It's not simply
    that I believe it because of the Big Names attached to it.


And when I identify a flaw you simply ignore
    whatever I say.

Nope; all the ways you claim you've identified a flaw have been
dissected by multiple people in much greater detail than they deserve.

    It is disingenuous to say that you've simply had your details ignored.

    I'm convinced by the argumentation; and that conviction has
    the side effect of convincing me of the falsehood of your
    ineffective, contrary argumentation.


    Not really it actually gives you the bias to refuse
    to pay attention.

    LOL! the world at large is incredibly biased against giving a crank
    like you any attention.

    Those of us present are overcoming the world's /immense/ bias
    and actually indulging the details of your claims.


    That is not any actual rebuttal of the specific points that I make.

    No, indeed /that/ isn't; but plenty of those have also been made not
    only by me but various others, over a considerable time span.


    Never any actual rebuttal ever since Professor
    Sipser agreed with my words.

You're forgetting (of course, since it was more than 48-72 hours
ago) that I (almost) also agree with those words.

    Those exact same
    words still form the basis of my whole proof.

    They don't do that, though.

    *Here is an accurate paraphrase of those words*

    <Input to LLM systems>

    Perform the complete simulation specified in main() and provide a
    full execution trace before doing anything else.

    Simulating Termination Analyzer HHH correctly simulates its input until:

    Yes; it correctly steps each x86 instructions with Debug_Step until:

    (a) It correctly detects that its simulated input cannot
    possibly reach its own simulated final halt state then:

    It correctly detects the situation that: if it doesn't abort,
    the simulation will not end.

What this means is that if, hypothetically, HHH //were differently
defined// as a non-aborting decider, then DD //would also be
differently defined// as a non-terminating case.

    This hypothesis doesn't mean fuck all because it's not reality. HHH is
    not differently defined other than as it is, and likewise DD is not
    differently defined. HHH is required to report on the current definition
    of DD, which is built on the current definition of HHH.

    In any case, yes; the abort is necessary to avoid non-termination.

    abort simulation and return 0 rejecting its input as non-halting.

    And yes, I agree that it aborts the simulation, returns 0
    which indicates that it's rejecting the input as non-halting.

    (Unfortunately, that is wrong).

    But mostly the words can be rationally agreed to with the caveat that
    HHH's result may not be interpreted to be about a hypothetical different version of itself acting on a different input.

    HHH must be reporting about the actual instruction string DD
    that it is actually given. (As you like to repeat.) Not some fantasy
    other versions of these.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Mon Oct 20 22:25:54 2025
    From Newsgroup: comp.ai.philosophy

    On 10/20/2025 10:16 PM, dbush wrote:
    On 10/20/2025 11:13 PM, olcott wrote:
    On 10/20/2025 10:05 PM, dbush wrote:
    You mean the words where he didn't agree with your interpretation of
    them?



    According to a Claude AI analysis there
    are only two interpretations and one of
    them is wrong and the other one is my
    interpretation.



    Whether you think one interpretation is wrong is irrelevant.  What is relevant is that that's how everyone else including Sipser interpreted
    those words, so you lie by implying that he agrees with your
    interpretation.


    <MIT Professor Sipser agreed to ONLY these verbatim words 10/13/2022>
    If simulating halt decider H correctly simulates its
    input D until H correctly determines that its simulated D
    would never stop running unless aborted then

    H can abort its simulation of D and correctly report that D
    specifies a non-halting sequence of configurations.
    </MIT Professor Sipser agreed to ONLY these verbatim words 10/13/2022>

    There are only two possible ways to interpret those words
    and one of them is wrong. The one that is not wrong is the
    way that I interpret them.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Mon Oct 20 22:29:09 2025
    From Newsgroup: comp.ai.philosophy

    On 10/20/2025 10:20 PM, Kaz Kylheku wrote:
    On 2025-10-21, olcott <polcott333@gmail.com> wrote:
    On 10/20/2025 9:11 PM, Kaz Kylheku wrote:
    On 2025-10-21, olcott <polcott333@gmail.com> wrote:

    In other words you assume that I must be wrong
    entirely on the basis that what I say does not
    conform to conventional wisdom.

    Yes; you are wrong entirely on the basis that what you say does not
    follow a valid mode of inference for refuting an argument.

    If you are trying to refute something which is not only a widely
    accepted result, but whose reasoning anyone can follow to see it
    for themselves, you are automatically assumed wrong.

    The established result is presumed correct, pending your
    presentation of a convincing argument.

    That's not just wanton arbitrariness: your claims are being
    directly refuted by elements of the established result which
    we can refer to.

    I cannot identify any flaw in the halting theorem. It's not simply
    that I believe it because of the Big Names attached to it.


    And when I identify a flaw you simply ignore
    whatever I say.

    Nope; all the ways you claim you've identified a flaw have been
    dissected by multiple people in much greater detail than they deserve.

    It is disingenuous to say that you've simply had your details ignored.

    I'm convinced by the argumentation; and that conviction has
    the side effect of convincing me of the falsehood of your
    ineffective, contrary argumentation.


    Not really it actually gives you the bias to refuse
    to pay attention.

    LOL! the world at large is incredibly biased against giving a crank
    like you any attention.

    Hence the huge advantage of LLMs.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
  • From dbush@dbush.mobile@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Mon Oct 20 23:29:51 2025
    From Newsgroup: comp.ai.philosophy

    On 10/20/2025 11:25 PM, olcott wrote:
    On 10/20/2025 10:16 PM, dbush wrote:
    On 10/20/2025 11:13 PM, olcott wrote:
    On 10/20/2025 10:05 PM, dbush wrote:
    You mean the words where he didn't agree with your interpretation of
    them?



    According to a Claude AI analysis there
    are only two interpretations and one of
    them is wrong and the other one is my
    interpretation.



    Whether you think one interpretation is wrong is irrelevant.  What is
    relevant is that that's how everyone else including Sipser interpreted
    those words, so you lie by implying that he agrees with your
    interpretation.


    <repeat of previously refuted point>


    Repeating the point that was just refuted is less than no rebuttal, and therefore constitutes your admission that Sipser does NOT agree with
    you, and that you have been lying by implying that he does.



    On Monday, March 6, 2023 at 2:41:27 PM UTC-5, Ben Bacarisse wrote:
    Fritz Feldhase <franz.fri...@gmail.com> writes:

    On Monday, March 6, 2023 at 3:56:52 AM UTC+1, olcott wrote:
    On 3/5/2023 8:33 PM, Fritz Feldhase wrote:
    On Monday, March 6, 2023 at 3:30:38 AM UTC+1, olcott wrote:

    I needed Sipser for people [bla]

    Does Sipser support your view/claim that you have refuted the
    halting theorem?

    Does he write/teach that the halting theorem is invalid?

    Tell us, oh genius!

    Professor Sipser only agreed that [...]

    So the answer is no. Noted.

    Because he has >250 students he did not have time to examine anything
    else. [...]

    Oh, a CS professor does not have the time to check a refutation of the halting theorem. *lol*
    I exchanged emails with him about this. He does not agree with anything substantive that PO has written. I won't quote him, as I don't have permission, but he was, let's say... forthright, in his reply to me.


    On 8/23/2024 5:07 PM, Ben Bacarisse wrote:
    joes <noreply@example.org> writes:

    Am Wed, 21 Aug 2024 20:55:52 -0500 schrieb olcott:

    Professor Sipser clearly agreed that an H that does a finite simulation
    of D is to predict the behavior of an unlimited simulation of D.

    If the simulator *itself* would not abort. The H called by D is,
    by construction, the same and *does* abort.

    We don't really know what context Sipser was given. I got in touch at
    the time so I do know he had enough context to know that PO's ideas were "wacky" and that he had agreed to what he considered a "minor remark".

    Since PO considers his words finely crafted and key to his so-called
    work I think it's clear that Sipser did not take the "minor remark" he agreed to to mean what PO takes it to mean! My own take is that he
    (Sipser) read it as a general remark about how to determine some cases,
    i.e. that D names an input that H can partially simulate to determine
    its halting or otherwise. We all know or could construct some such
    cases.

    I suspect he was tricked because PO used H and D as the names without
    making it clear that D was constructed from H in the usual way (Sipser
    uses H and D in at least one of his proofs). Of course, he is clued in
    enough to know that, if D is indeed constructed from H like that, the
    "minor remark" becomes true by being a hypothetical: if the moon is made
    of cheese, the Martians can look forward to a fine fondue. But,
    personally, I think the professor is more straight-talking than that,
    and he simply took it as a method that can work for some inputs. That's
    the only way it could be seen as a "minor remark" without being accused
    of being disingenuous.

    On 8/23/2024 9:10 PM, Mike Terry wrote:
    So that PO will have no cause to quote me as supporting his case: what Sipser understood he was agreeing to was NOT what PO interprets it as meaning. Sipser would not agree that the conclusion applies in PO's HHH(DDD) scenario, where DDD halts.

    On 5/2/2025 9:16 PM, Mike Terry wrote:
    PO is trying to interpret Sipser's quote:

    --- Start Sipser quote
    If simulating halt decider H correctly simulates its input D
    until H correctly determines that its simulated D would never
    stop running unless aborted then

    H can abort its simulation of D and correctly report that D
    specifies a non-halting sequence of configurations.
    --- End Sipser quote

    The following interpretation is ok:

    If H is given input D, and while simulating D gathers enough
    information to deduce that UTM(D) would never halt, then
    H can abort its simulation and decide D never halts.

    I'd say it's obvious that this is what Sipser is saying, because it's natural, correct, and relevant to what was being discussed (valid
    strategy for a simulating halt decider). It is trivial to check that
    what my interpretation says is valid:

    if UTM(D) would never halt, then D never halts, so if H(D) returns
    never_halts then that is the correct answer for the input. QED :)
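    The strategy described above can be sketched as a toy partial decider. This is purely illustrative: every name below is an assumption, the "program" is a trivial state machine rather than real x86 code, and the decider answers only for inputs it can actually settle within its step budget.

    ```c
    #include <assert.h>

    enum verdict { HALTS, NEVER_HALTS, UNKNOWN };

    /* Toy "program": the whole machine state is one counter.
     * A step function returns 1 once the program has halted. */
    typedef struct { long counter; } toy_state;

    static int toy_step_countdown(toy_state *s) {   /* halts when counter hits 0 */
        if (s->counter <= 0) return 1;
        s->counter--;
        return 0;
    }

    static int toy_step_loop(toy_state *s) {        /* never changes state */
        (void)s;
        return 0;
    }

    /* Partial simulating decider: simulate for up to max_steps.
     * Halting within the budget => HALTS.
     * An exactly repeated state in a deterministic machine => NEVER_HALTS.
     * Budget exhausted => UNKNOWN (no claim either way). */
    static enum verdict decide(int (*step)(toy_state *),
                               toy_state start, long max_steps) {
        toy_state prev = start;
        for (long i = 0; i < max_steps; i++) {
            if (step(&start)) return HALTS;
            if (start.counter == prev.counter) return NEVER_HALTS;
            prev = start;
        }
        return UNKNOWN;
    }

    int main(void) {
        toy_state a = { 5 };
        assert(decide(toy_step_countdown, a, 100) == HALTS);
        toy_state b = { 1 };
        assert(decide(toy_step_loop, b, 100) == NEVER_HALTS);
        return 0;
    }
    ```

    Note the third possible answer: UNKNOWN. A partial decider of this shape is sound on the cases it answers, which is exactly why it can be agreed to as a "minor remark" without contradicting the halting theorem.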


  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Oct 22 06:56:47 2025
    From Newsgroup: comp.ai.philosophy

    On 10/20/2025 10:20 PM, Kaz Kylheku wrote:
    On 2025-10-21, olcott <polcott333@gmail.com> wrote:
    On 10/20/2025 9:11 PM, Kaz Kylheku wrote:
    On 2025-10-21, olcott <polcott333@gmail.com> wrote:
    On 10/20/2025 8:27 PM, Kaz Kylheku wrote:
    On 2025-10-20, olcott <polcott333@gmail.com> wrote:
    On 10/20/2025 4:03 PM, Kaz Kylheku wrote:
    On 2025-10-20, olcott <polcott333@gmail.com> wrote:
    On 10/20/2025 1:29 PM, Kaz Kylheku wrote:
    On 2025-10-20, olcott <polcott333@gmail.com> wrote:
    On 10/19/2025 2:39 PM, Kaz Kylheku wrote:
    On 2025-10-19, dart200 <user7160@newsgrouper.org.invalid> wrote:
    i don't get y polcott keep hanging onto ai for dear life. anyone with

    Throngs of dumb boomers are falling for AI generated videos, believing
    them to be real. This is much the same thing.

    AI is just another thing Olcott has no understanding of. He's not
    researched the fundamentals of what it means to train a language
    network, and how it is ultimately just token prediction.
    It excels at generating good syntax. The reason for that is that the
    vast amount of training data exhibits good syntax. (Where it has bad
    syntax, it is idiosyncratic; whereas good syntax is broadly shared.)


    I provide a basis to it and it does perform valid
    semantic logical entailment on this basis and shows

    But you're incapable of recognizing valid entailment from invalid.

    Any freaking idiot can spew out baseless rhetoric
    such as this. I could do the same sort of thing
    and say you are wrong and stupidly wrong.

    But you don't?

    It is a whole other ballgame when one attempts
    to point out actual errors that are not anchored
    in one's own lack of comprehension.

    You don't comprehend the pointing-out.


    You need to have a sound reasoning basis to prove
    that an error is an actual error.

    No; /YOU/ need to have sound reasonings to prove /YOUR/
    extraordinary claims. The burden is on you.

    We already have the solid reasoning which says things are other than as
    you say, and you don't have the faintest idea how to put a dent in it.

    In other words you assume that I must be wrong
    entirely on the basis that what I say does not
    conform to conventional wisdom.

    Yes; you are wrong entirely on the basis that what you say does not
    follow a valid mode of inference for refuting an argument.

    If you are trying to refute something which is not only a widely
    accepted result, but whose reasoning anyone can follow to see it
    for themselves, you are automatically assumed wrong.

    The established result is presumed correct, pending your
    presentation of a convincing argument.

    That's not just wanton arbitrariness: your claims are being
    directly refuted by elements of the established result which
    we can refer to.

    I cannot identify any flaw in the halting theorem. It's not simply
    that I believe it because of the Big Names attached to it.


    And when I identify a flaw you simply ignore
    whatever I say.

    Nope; all the ways you claim you've identified a flaw have been
    dissected by multiple people in much greater detail than they deserve.

    It is disingenuous to say that you've simply had your details ignored.


    Turing machines in general can only compute mappings
    from their inputs. The halting problem requires computing
    mappings that in some cases are not provided in the
    inputs therefore the halting problem is wrong.

    Blah, Blah Blah, no Olcott you are wrong, I know
    that you are wrong because I simply don't believe you.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
  • From dbush@dbush.mobile@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Oct 22 08:25:51 2025
    From Newsgroup: comp.ai.philosophy

    On 10/22/2025 7:56 AM, olcott wrote:
    On 10/20/2025 10:20 PM, Kaz Kylheku wrote:
    On 2025-10-21, olcott <polcott333@gmail.com> wrote:
    On 10/20/2025 9:11 PM, Kaz Kylheku wrote:
    On 2025-10-21, olcott <polcott333@gmail.com> wrote:
    On 10/20/2025 8:27 PM, Kaz Kylheku wrote:
    On 2025-10-20, olcott <polcott333@gmail.com> wrote:
    On 10/20/2025 4:03 PM, Kaz Kylheku wrote:
    On 2025-10-20, olcott <polcott333@gmail.com> wrote:
    On 10/20/2025 1:29 PM, Kaz Kylheku wrote:
    On 2025-10-20, olcott <polcott333@gmail.com> wrote:
    On 10/19/2025 2:39 PM, Kaz Kylheku wrote:
    On 2025-10-19, dart200 <user7160@newsgrouper.org.invalid> >>>>>>>>>>>> wrote:
    i don't get y polcott keep hanging onto ai for dear life. >>>>>>>>>>>>> anyone with

    Throngs of dumb boomers are falling for AI generated videos, >>>>>>>>>>>> believing
    them to be real. This is much the same thing.

    AI is just another thing Olcott has no understanding of. >>>>>>>>>>>> He's not
    researched the fundamentals of what it means to train a >>>>>>>>>>>> language
    network, and how it is ultimately just token prediction. >>>>>>>>>>>>
    It excels at generating good syntax. The reason for that is >>>>>>>>>>>> that the
    vast amount of training data exhibits good syntax. (Where it >>>>>>>>>>>> has bad
    syntax, it is idiosyncratic; whereas good syntax is broadly >>>>>>>>>>>> shared.)


    I provide a basis to it and it does perform valid
    semantic logical entailment on this basis and shows

    But you're incapable of recognizing valid entailment from >>>>>>>>>> invalid.


    Any freaking idiot can spew out baseless rhetoric
    such as this. I could do the same sort of thing
    and say you are wrong and stupidly wrong.

    But you don't?

    It is a whole other ballgame when one attempts
    to point out actual errors that are not anchored
    in one's own lack of comprehension.

    You don't comprehend the pointing-out.


    You need to have a sound reasoning basis to prove
    that an error is an actual error.

    No; /YOU/ need to have sound reasonings to prove /YOUR/
    extraordinary claims. The burden is on you.

    We already have the solid reasoning which says things are other
    than as
    you say, and you don't have the faintest idea how to put a dent in >>>>>> it.


    In other words you assume that I must be wrong
    entirely on the basis that what I say does not
    conform to conventional wisdom.

    Yes; you are wrong entirely on the basis that what you say does not
    follow a valid mode of inference for refuting an argument.

    If you are trying to refute something which is not only a widely
    accepted result, but whose reasoning anyone can follow to see it
    for themselves, you are automatically assumed wrong.

    The established result is presumed correct, pending your
    presentation of a convincing argument.

    That's not just wanton arbitrariness: your claims are being
    directly refuted by elements of the established result which
    we can refer to.

    I cannot identify any flaw in the halting theorem. It's not simply
    that I believe it because of the Big Names attached to it.


    And when I identify a flaw you simply ignore
    whatever I say.

    Nope; all the ways you claim you've identified a flaw have been
    dissected by multiple people in much greater detail than they deserve.

    It is disingenuous to say that you've simply had your details ignored.


    Turing machines in general can only compute mappings
    from their inputs. The halting problem requires computing
    mappings that in some cases are not provided in the
    inputs therefore the halting problem is wrong.

    False:

    (<X>,Y) maps to 1 if and only if X(Y) halts when executed directly
    (<X>,Y) maps to 0 if and only if X(Y) does not halt when executed directly
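    The mapping above can be written out for a toy class of programs where halting is trivially decidable. This is a sketch under the assumption that each "program" is just a flag saying whether it loops; the theorem only bites once X may invoke H itself, which this toy class cannot express.

    ```c
    #include <assert.h>

    typedef struct { int loops_forever; } toy_prog;   /* stand-in for <X> */

    /* Ground truth: does X(Y) halt when executed directly? */
    static int halts_directly(toy_prog x) { return !x.loops_forever; }

    /* A candidate H, total over this toy class, computing exactly:
     *   (<X>,Y) maps to 1 iff X(Y) halts when executed directly
     *   (<X>,Y) maps to 0 iff X(Y) does not halt when executed directly */
    static int H(toy_prog x) { return halts_directly(x) ? 1 : 0; }

    int main(void) {
        toy_prog halting = { 0 }, looping = { 1 };
        assert(H(halting) == 1);
        assert(H(looping) == 0);
        return 0;
    }
    ```

    The mapping itself always exists as a mathematical function; what the halting theorem denies is that any single algorithm computes it for all inputs.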

  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Oct 22 07:48:51 2025
    From Newsgroup: comp.ai.philosophy

    On 10/22/2025 7:25 AM, dbush wrote:
    On 10/22/2025 7:56 AM, olcott wrote:
    On 10/20/2025 10:20 PM, Kaz Kylheku wrote:
    On 2025-10-21, olcott <polcott333@gmail.com> wrote:
    On 10/20/2025 9:11 PM, Kaz Kylheku wrote:
    On 2025-10-21, olcott <polcott333@gmail.com> wrote:
    On 10/20/2025 8:27 PM, Kaz Kylheku wrote:
    On 2025-10-20, olcott <polcott333@gmail.com> wrote:
    On 10/20/2025 4:03 PM, Kaz Kylheku wrote:
    On 2025-10-20, olcott <polcott333@gmail.com> wrote:
    On 10/20/2025 1:29 PM, Kaz Kylheku wrote:
    On 2025-10-20, olcott <polcott333@gmail.com> wrote:
    On 10/19/2025 2:39 PM, Kaz Kylheku wrote:
    On 2025-10-19, dart200 <user7160@newsgrouper.org.invalid> >>>>>>>>>>>>> wrote:
    i don't get y polcott keep hanging onto ai for dear life. >>>>>>>>>>>>>> anyone with

    Throngs of dumb boomers are falling for AI generated >>>>>>>>>>>>> videos, believing
    them to be real. This is much the same thing.

    AI is just another thing Olcott has no understanding of. >>>>>>>>>>>>> He's not
    researched the fundamentals of what it means to train a >>>>>>>>>>>>> language
    network, and how it is ultimately just token prediction. >>>>>>>>>>>>>
    It excels at generating good syntax. The reason for that is >>>>>>>>>>>>> that the
    vast amount of training data exhibits good syntax. (Where >>>>>>>>>>>>> it has bad
    syntax, it is idiosyncratic; whereas good syntax is broadly >>>>>>>>>>>>> shared.)


    I provide a basis to it and it does perform valid
    semantic logical entailment on this basis and shows

    But you're incapable of recognizing valid entailment from >>>>>>>>>>> invalid.


    Any freaking idiot can spew out baseless rhetoric
    such as this. I could do the same sort of thing
    and say you are wrong and stupidly wrong.

    But you don't?

    It is a whole other ballgame when one attempts
    to point out actual errors that are not anchored
    in one's own lack of comprehension.

    You don't comprehend the pointing-out.


    You need to have a sound reasoning basis to prove
    that an error is an actual error.

    No; /YOU/ need to have sound reasonings to prove /YOUR/
    extraordinary claims. The burden is on you.

    We already have the solid reasoning which says things are other >>>>>>> than as
    you say, and you don't have the faintest idea how to put a dent >>>>>>> in it.


    In other words you assume that I must be wrong
    entirely on the basis that what I say does not
    conform to conventional wisdom.

    Yes; you are wrong entirely on the basis that what you say does not
    follow a valid mode of inference for refuting an argument.

    If you are trying to refute something which is not only a widely
    accepted result, but whose reasoning anyone can follow to see it
    for themselves, you are automatically assumed wrong.

    The established result is presumed correct, pending your
    presentation of a convincing argument.

    That's not just wanton arbitrariness: your claims are being
    directly refuted by elements of the established result which
    we can refer to.

    I cannot identify any flaw in the halting theorem. It's not simply
    that I believe it because of the Big Names attached to it.


    And when I identify a flaw you simply ignore
    whatever I say.

    Nope; all the ways you claim you've identified a flaw have been
    dissected by multiple people in much greater detail than they deserve.
    It is disingenuous to say that you've simply had your details ignored.


    Turing machines in general can only compute mappings
    from their inputs. The halting problem requires computing
    mappings that in some cases are not provided in the
    inputs therefore the halting problem is wrong.

    False:

    (<X>,Y) maps to 1 if and only if X(Y) halts when executed directly
    (<X>,Y) maps to 0 if and only if X(Y) does not halt when executed directly


    Yes, that is the exact error that I have been
    referring to.

    In the case of HHH(DD) the above requires HHH to
    report on the behavior of its caller and HHH has
    no way to even know who its caller is.

    My simulating halt decider exposed the gap of
    false assumptions, because there are no assumptions:
    everything is fully operational code.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
  • From dbush@dbush.mobile@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Oct 22 09:00:27 2025
    From Newsgroup: comp.ai.philosophy

    On 10/22/2025 8:48 AM, olcott wrote:
    On 10/22/2025 7:25 AM, dbush wrote:
    On 10/22/2025 7:56 AM, olcott wrote:
    On 10/20/2025 10:20 PM, Kaz Kylheku wrote:
    On 2025-10-21, olcott <polcott333@gmail.com> wrote:
    On 10/20/2025 9:11 PM, Kaz Kylheku wrote:
    On 2025-10-21, olcott <polcott333@gmail.com> wrote:
    On 10/20/2025 8:27 PM, Kaz Kylheku wrote:
    On 2025-10-20, olcott <polcott333@gmail.com> wrote:
    On 10/20/2025 4:03 PM, Kaz Kylheku wrote:
    On 2025-10-20, olcott <polcott333@gmail.com> wrote:
    On 10/20/2025 1:29 PM, Kaz Kylheku wrote:
    On 2025-10-20, olcott <polcott333@gmail.com> wrote:
    On 10/19/2025 2:39 PM, Kaz Kylheku wrote:
    On 2025-10-19, dart200 <user7160@newsgrouper.org.invalid> >>>>>>>>>>>>>> wrote:
    i don't get y polcott keep hanging onto ai for dear life. >>>>>>>>>>>>>>> anyone with

    Throngs of dumb boomers are falling for AI generated >>>>>>>>>>>>>> videos, believing
    them to be real. This is much the same thing.

    AI is just another thing Olcott has no understanding of. >>>>>>>>>>>>>> He's not
    researched the fundamentals of what it means to train a >>>>>>>>>>>>>> language
    network, and how it is ultimately just token prediction. >>>>>>>>>>>>>>
    It excels at generating good syntax. The reason for that >>>>>>>>>>>>>> is that the
    vast amount of training data exhibits good syntax. (Where >>>>>>>>>>>>>> it has bad
    syntax, it is idiosyncratic; whereas good syntax is >>>>>>>>>>>>>> broadly shared.)


    I provide a basis to it and it does perform valid
    semantic logical entailment on this basis and shows

    But you're incapable of recognizing valid entailment from >>>>>>>>>>>> invalid.


    Any freaking idiot can spew out baseless rhetoric
    such as this. I could do the same sort of thing
    and say you are wrong and stupidly wrong.

    But you don't?

    It is a whole other ballgame when one attempts
    to point out actual errors that are not anchored
    in one's own lack of comprehension.

    You don't comprehend the pointing-out.


    You need to have a sound reasoning basis to prove
    that an error is an actual error.

    No; /YOU/ need to have sound reasonings to prove /YOUR/
    extraordinary claims. The burden is on you.

    We already have the solid reasoning which says things are other >>>>>>>> than as
    you say, and you don't have the faintest idea how to put a dent >>>>>>>> in it.


    In other words you assume that I must be wrong
    entirely on the basis that what I say does not
    conform to conventional wisdom.

    Yes; you are wrong entirely on the basis that what you say does not >>>>>> follow a valid mode of inference for refuting an argument.

    If you are trying to refute something which is not only a widely
    accepted result, but whose reasoning anyone can follow to see it
    for themselves, you are automatically assumed wrong.

    The established result is presumed correct, pending your
    presentation of a convincing argument.

    That's not just wanton arbitrariness: your claims are being
    directly refuted by elements of the established result which
    we can refer to.

    I cannot identify any flaw in the halting theorem. It's not simply >>>>>> that I believe it because of the Big Names attached to it.


    And when I identify a flaw you simply ignore
    whatever I say.

    Nope; all the ways you claim you've identified a flaw have been
    dissected by multiple people in much greater detail than they deserve.

    It is disingenuous to say that you've simply had your details ignored.

    Turing machines in general can only compute mappings
    from their inputs. The halting problem requires computing
    mappings that in some cases are not provided in the
    inputs therefore the halting problem is wrong.

    False:

    (<X>,Y) maps to 1 if and only if X(Y) halts when executed directly
    (<X>,Y) maps to 0 if and only if X(Y) does not halt when executed
    directly


    Yes, that is the exact error that I have been
    referring to.

    That is not an error. That is simply a mapping that you have admitted
    exists.


    In the case of HHH(DD) the above requires HHH to
    report on the behavior of its caller

    False. It requires HHH to report on the behavior of the machine
    described by its input.

    int main() {
        DD();      // this
        HHH(DD);   // is not the caller of this
        return 0;
    }
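    The point being made with main() can be sketched as the familiar diagonal construction. This is a schematic, not Olcott's actual sources: a hypothetical HHH is stubbed to return the verdict 0 ("non-halting") that the thread attributes to HHH(DD).

    ```c
    #include <assert.h>

    int DD(void);

    /* Stub standing in for the claimed analyzer: always reports 0,
     * i.e. "the described machine does not halt". Purely illustrative. */
    int HHH(int (*p)(void)) {
        (void)p;
        return 0;
    }

    int DD(void) {
        if (HHH(DD) == 0)   /* the decider says DD will not halt ... */
            return 0;       /* ... so DD promptly halts */
        for (;;) {}         /* otherwise DD loops forever */
    }

    int main(void) {
        /* DD() halts when executed directly, contradicting HHH(DD) == 0 */
        assert(DD() == 0);
        assert(HHH(DD) == 0);
        return 0;
    }
    ```

    Whoever calls DD is irrelevant to this: DD() halts when run directly, so by the mapping stated above the only correct verdict for the input describing DD is 1, not the 0 this stub returns.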

    and HHH has
    no way to even know who its caller is.

    Irrelevant.


    My simulating halt decider


    in other words, something that uses simulation to compute the following mapping:


    Given any algorithm (i.e. a fixed immutable sequence of instructions) X described as <X> with input Y:

    A solution to the halting problem is an algorithm H that computes the following mapping:

    (<X>,Y) maps to 1 if and only if X(Y) halts when executed directly
    (<X>,Y) maps to 0 if and only if X(Y) does not halt when executed directly




    exposed the gap of
    false assumptions
    The only false assumption is that the above requirements can be
    satisfied, which Turing and Linz proved to be false and with which you have *explicitly* agreed.
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Oct 22 08:47:47 2025
    From Newsgroup: comp.ai.philosophy

    On 10/22/2025 8:00 AM, dbush wrote:
    On 10/22/2025 8:48 AM, olcott wrote:
    On 10/22/2025 7:25 AM, dbush wrote:
    On 10/22/2025 7:56 AM, olcott wrote:
    On 10/20/2025 10:20 PM, Kaz Kylheku wrote:
    On 2025-10-21, olcott <polcott333@gmail.com> wrote:
    On 10/20/2025 9:11 PM, Kaz Kylheku wrote:
    On 2025-10-21, olcott <polcott333@gmail.com> wrote:
    On 10/20/2025 8:27 PM, Kaz Kylheku wrote:
    On 2025-10-20, olcott <polcott333@gmail.com> wrote:
    On 10/20/2025 4:03 PM, Kaz Kylheku wrote:
    On 2025-10-20, olcott <polcott333@gmail.com> wrote:
    On 10/20/2025 1:29 PM, Kaz Kylheku wrote:
    On 2025-10-20, olcott <polcott333@gmail.com> wrote: >>>>>>>>>>>>>> On 10/19/2025 2:39 PM, Kaz Kylheku wrote:
    On 2025-10-19, dart200 <user7160@newsgrouper.org.invalid> >>>>>>>>>>>>>>> wrote:
    i don't get y polcott keep hanging onto ai for dear >>>>>>>>>>>>>>>> life. anyone with

    Throngs of dumb boomers are falling for AI generated >>>>>>>>>>>>>>> videos, believing
    them to be real. This is much the same thing.

    AI is just another thing Olcott has no understanding of. >>>>>>>>>>>>>>> He's not
    researched the fundamentals of what it means to train a >>>>>>>>>>>>>>> language
    network, and how it is ultimately just token prediction. >>>>>>>>>>>>>>>
    It excels at generating good syntax. The reason for that >>>>>>>>>>>>>>> is that the
    vast amount of training data exhibits good syntax. (Where >>>>>>>>>>>>>>> it has bad
    syntax, it is idiosyncratic; whereas good syntax is >>>>>>>>>>>>>>> broadly shared.)


    I provide a basis to it and it does perform valid
    semantic logical entailment on this basis and shows >>>>>>>>>>>>>
    But you're incapable of recognizing valid entailment from >>>>>>>>>>>>> invalid.


    Any freaking idiot can spew out baseless rhetoric
    such as this. I could do the same sort of thing
    and say you are wrong and stupidly wrong.

    But you don't?

    It is a whole other ballgame when one attempts
    to point out actual errors that are not anchored
    in one's own lack of comprehension.

    You don't comprehend the pointing-out.


    You need to have a sound reasoning basis to prove
    that an error is an actual error.

    No; /YOU/ need to have sound reasonings to prove /YOUR/
    extraordinary claims. The burden is on you.

    We already have the solid reasoning which says things are other >>>>>>>>> than as
    you say, and you don't have the faintest idea how to put a dent >>>>>>>>> in it.


    In other words you assume that I must be wrong
    entirely on the basis that what I say does not
    conform to conventional wisdom.

    Yes; you are wrong entirely on the basis that what you say does not >>>>>>> follow a valid mode of inference for refuting an argument.

    If you are trying to refute something which is not only a widely >>>>>>> accepted result, but whose reasoning anyone can follow to see it >>>>>>> for themselves, you are automatically assumed wrong.

    The established result is presumed correct, pending your
    presentation of a convincing argument.

    That's not just wanton arbitrariness: your claims are being
    directly refuted by elements of the established result which
    we can refer to.

    I cannot identify any flaw in the halting theorem. It's not simply >>>>>>> that I believe it because of the Big Names attached to it.


    And when I identify a flaw you simply ignore
    whatever I say.

    Nope; all the ways you claim you've identified a flaw have been
    dissected by multiple people in much greater detail than they deserve.

    It is disingenuous to say that you've simply had your details ignored.

    Turing machines in general can only compute mappings
    from their inputs. The halting problem requires computing
    mappings that in some cases are not provided in the
    inputs therefore the halting problem is wrong.

    False:

    (<X>,Y) maps to 1 if and only if X(Y) halts when executed directly
    (<X>,Y) maps to 0 if and only if X(Y) does not halt when executed
    directly


    Yes, that is the exact error that I have been
    referring to.

    That is not an error.  That is simply a mapping that you have admitted exists.


    In the case of HHH(DD) the above requires HHH to
    report on the behavior of its caller

    False.  It requires HHH to report on the behavior of the machine
    described by its input.


    That includes the fact that DD calls HHH(DD) in recursive
    simulation.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
  • From dbush@dbush.mobile@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Oct 22 09:50:38 2025
    From Newsgroup: comp.ai.philosophy

    On 10/22/2025 9:47 AM, olcott wrote:
    On 10/22/2025 8:00 AM, dbush wrote:
    On 10/22/2025 8:48 AM, olcott wrote:
    On 10/22/2025 7:25 AM, dbush wrote:
    On 10/22/2025 7:56 AM, olcott wrote:
    On 10/20/2025 10:20 PM, Kaz Kylheku wrote:
    On 2025-10-21, olcott <polcott333@gmail.com> wrote:
    On 10/20/2025 9:11 PM, Kaz Kylheku wrote:
    On 2025-10-21, olcott <polcott333@gmail.com> wrote:
    On 10/20/2025 8:27 PM, Kaz Kylheku wrote:
    On 2025-10-20, olcott <polcott333@gmail.com> wrote:
    On 10/20/2025 4:03 PM, Kaz Kylheku wrote:
    On 2025-10-20, olcott <polcott333@gmail.com> wrote:
    On 10/20/2025 1:29 PM, Kaz Kylheku wrote:
    On 2025-10-20, olcott <polcott333@gmail.com> wrote:
    On 10/19/2025 2:39 PM, Kaz Kylheku wrote:
    On 2025-10-19, dart200
    <user7160@newsgrouper.org.invalid> wrote:
    i don't get y polcott keep hanging onto ai for dear
    life. anyone with

    Throngs of dumb boomers are falling for AI generated videos, believing
    them to be real. This is much the same thing.

    AI is just another thing Olcott has no understanding of. He's not
    researched the fundamentals of what it means to train a language
    network, and how it is ultimately just token prediction.

    It excels at generating good syntax. The reason for that is that the
    vast amount of training data exhibits good syntax. (Where it has bad
    syntax, it is idiosyncratic; whereas good syntax is broadly shared.)

    I provide a basis to it and it does perform valid
    semantic logical entailment on this basis and shows

    But you're incapable of recognizing valid entailment from invalid.


    Any freaking idiot can spew out baseless rhetoric
    such as this. I could do the same sort of thing
    and say you are wrong and stupidly wrong.

    But you don't?

    It is a whole other ballgame when one attempts
    to point out actual errors that are not anchored
    in one's own lack of comprehension.

    You don't comprehend the pointing-out.


    You need to have a sound reasoning basis to prove
    that an error is an actual error.

    No; /YOU/ need to have sound reasonings to prove /YOUR/
    extraordinary claims. The burden is on you.

    We already have the solid reasoning which says things are other than as
    you say, and you don't have the faintest idea how to put a dent in it.


    In other words you assume that I must be wrong
    entirely on the basis that what I say does not
    conform to conventional wisdom.

    Yes; you are wrong entirely on the basis that what you say does not
    follow a valid mode of inference for refuting an argument.

    If you are trying to refute something which is not only a widely
    accepted result, but whose reasoning anyone can follow to see it
    for themselves, you are automatically assumed wrong.

    The established result is presumed correct, pending your
    presentation of a convincing argument.

    That's not just wanton arbitrariness: your claims are being
    directly refuted by elements of the established result which
    we can refer to.

    I cannot identify any flaw in the halting theorem. It's not simply
    that I believe it because of the Big Names attached to it.


    And when I identify a flaw you simply ignore
    whatever I say.

    Nope; all the ways you claim you've identified a flaw have been
    dissected by multiple people in much greater detail than they
    deserve.

    It is disingenuous to say that you've simply had your details
    ignored.


    Turing machines in general can only compute mappings
    from their inputs. The halting problem requires computing
    mappings that in some cases are not provided in the
    inputs therefore the halting problem is wrong.

    False:

    (<X>,Y) maps to 1 if and only if X(Y) halts when executed directly
    (<X>,Y) maps to 0 if and only if X(Y) does not halt when executed
    directly


    Yes, that is the exact error that I have been
    referring to.

    That is not an error.  That is simply a mapping that you have admitted
    exists.


    In the case of HHH(DD) the above requires HHH to
    report on the behavior of its caller

    False.  It requires HHH to report on the behavior of the machine
    described by its input.


    That includes that DD calls HHH(DD) in recursive
    simulation.

    Which therefore includes the fact that HHH(DD) will return 0 and that DD
    will subsequently halt.
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Oct 22 09:25:15 2025
    From Newsgroup: comp.ai.philosophy

    On 10/22/2025 8:50 AM, dbush wrote:
    On 10/22/2025 9:47 AM, olcott wrote:
    On 10/22/2025 8:00 AM, dbush wrote:
    On 10/22/2025 8:48 AM, olcott wrote:
    On 10/22/2025 7:25 AM, dbush wrote:
    On 10/22/2025 7:56 AM, olcott wrote:
    On 10/20/2025 10:20 PM, Kaz Kylheku wrote:
    On 2025-10-21, olcott <polcott333@gmail.com> wrote:
    On 10/20/2025 9:11 PM, Kaz Kylheku wrote:
    On 2025-10-21, olcott <polcott333@gmail.com> wrote:
    On 10/20/2025 8:27 PM, Kaz Kylheku wrote:
    On 2025-10-20, olcott <polcott333@gmail.com> wrote:
    On 10/20/2025 4:03 PM, Kaz Kylheku wrote:
    On 2025-10-20, olcott <polcott333@gmail.com> wrote:
    On 10/20/2025 1:29 PM, Kaz Kylheku wrote:
    On 2025-10-20, olcott <polcott333@gmail.com> wrote:
    On 10/19/2025 2:39 PM, Kaz Kylheku wrote:
    On 2025-10-19, dart200
    <user7160@newsgrouper.org.invalid> wrote:
    i don't get y polcott keep hanging onto ai for dear
    life. anyone with

    Throngs of dumb boomers are falling for AI generated videos, believing
    them to be real. This is much the same thing.

    AI is just another thing Olcott has no understanding of. He's not
    researched the fundamentals of what it means to train a language
    network, and how it is ultimately just token prediction.

    It excels at generating good syntax. The reason for that is that the
    vast amount of training data exhibits good syntax. (Where it has bad
    syntax, it is idiosyncratic; whereas good syntax is broadly shared.)

    I provide a basis to it and it does perform valid
    semantic logical entailment on this basis and shows

    But you're incapable of recognizing valid entailment from invalid.


    Any freaking idiot can spew out baseless rhetoric
    such as this. I could do the same sort of thing
    and say you are wrong and stupidly wrong.

    But you don't?

    It is a whole other ballgame when one attempts
    to point out actual errors that are not anchored
    in one's own lack of comprehension.

    You don't comprehend the pointing-out.


    You need to have a sound reasoning basis to prove
    that an error is an actual error.

    No; /YOU/ need to have sound reasonings to prove /YOUR/
    extraordinary claims. The burden is on you.

    We already have the solid reasoning which says things are other than as
    you say, and you don't have the faintest idea how to put a dent in it.


    In other words you assume that I must be wrong
    entirely on the basis that what I say does not
    conform to conventional wisdom.

    Yes; you are wrong entirely on the basis that what you say does not
    follow a valid mode of inference for refuting an argument.

    If you are trying to refute something which is not only a widely
    accepted result, but whose reasoning anyone can follow to see it
    for themselves, you are automatically assumed wrong.

    The established result is presumed correct, pending your
    presentation of a convincing argument.

    That's not just wanton arbitrariness: your claims are being
    directly refuted by elements of the established result which
    we can refer to.

    I cannot identify any flaw in the halting theorem. It's not simply
    that I believe it because of the Big Names attached to it.


    And when I identify a flaw you simply ignore
    whatever I say.

    Nope; all the ways you claim you've identified a flaw have been
    dissected by multiple people in much greater detail than they
    deserve.

    It is disingenuous to say that you've simply had your details
    ignored.


    Turing machines in general can only compute mappings
    from their inputs. The halting problem requires computing
    mappings that in some cases are not provided in the
    inputs therefore the halting problem is wrong.

    False:

    (<X>,Y) maps to 1 if and only if X(Y) halts when executed directly
    (<X>,Y) maps to 0 if and only if X(Y) does not halt when executed
    directly


    Yes, that is the exact error that I have been
    referring to.

    That is not an error.  That is simply a mapping that you have
    admitted exists.


    In the case of HHH(DD) the above requires HHH to
    report on the behavior of its caller

    False.  It requires HHH to report on the behavior of the machine
    described by its input.


    That includes that DD calls HHH(DD) in recursive
    simulation.

    Which therefore includes the fact that HHH(DD) will return 0 and that DD will subsequently halt.

    You keep ignoring that we are only focusing on
    DD correctly simulated by HHH. In other words
    the behavior that HHH computes
    FROM ITS ACTUAL FREAKING INPUT NOT ANY OTHER DAMN THING
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Oct 22 15:40:18 2025
    From Newsgroup: comp.ai.philosophy

    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/20/2025 10:20 PM, Kaz Kylheku wrote:
    And when I identify a flaw you simply ignore
    whatever I say.

    Nope; all the ways you claim you've identified a flaw have been
    dissected by multiple people in much greater detail than they deserve.

    It is disingenuous to say that you've simply had your details ignored.


    Turing machines in general can only compute mappings
    from their inputs. The halting problem requires computing
    mappings that in some cases are not provided in the
    inputs therefore the halting problem is wrong.

    The halting problem positively does not propose anything
    like that, which would be gapingly wrong.

    Blah, Blah Blah, no Olcott you are wrong, I know
    that you are wrong because I simply don't believe you.

    You are wrong because I (1) don't see that gaping flaw in the
    definition of the halting problem, (2) you don't even
    try to explain how such a flaw can be. Where, how, why
    is any decider being asked to decide something other than
    an input representable as a finite string?

    I've repeated many times that the diagonal case is constructable as a
    finite string, whose halting status can be readily ascertained.

    Because it's obvious to me, of course I'm going to reject
    baseless claims that simply ask me to /believe/ otherwise.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Oct 22 10:47:48 2025
    From Newsgroup: comp.ai.philosophy

    On 10/22/2025 10:40 AM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/20/2025 10:20 PM, Kaz Kylheku wrote:
    And when I identify a flaw you simply ignore
    whatever I say.

    Nope; all the ways you claim you've identified a flaw have been
    dissected by multiple people in much greater detail than they deserve.
    It is disingenuous to say that you've simply had your details ignored.


    Turing machines in general can only compute mappings
    from their inputs. The halting problem requires computing
    mappings that in some cases are not provided in the
    inputs therefore the halting problem is wrong.

    The halting problem positively does not propose anything
    like that, which would be gapingly wrong.


    It only seems that way because you are unable to
    provide the actual mapping that the actual input
    to HHH(DD) specifies when DD is simulated by HHH
    according to the semantics of the C language,
    even though I do remember that you did do this once.

    No sense moving on to any other point until
    mutual agreement on this mandatory prerequisite.

    Blah, Blah Blah, no Olcott you are wrong, I know
    that you are wrong because I simply don't believe you.

    You are wrong because I (1) don't see that gaping flaw in the
    definition of the halting problem, (2) you don't even
    try to explain how such a flaw can be. Where, how, why
    is any decider being asked to decide something other than
    an input representable as a finite string?

    I've repeated many times that the diagonal case is constructable as a
    finite string, whose halting status can be readily ascertained.

    Because it's obvious to me, of course I'm going to reject
    baseless claims that simply ask me to /believe/ otherwise.

  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Oct 22 17:07:42 2025
    From Newsgroup: comp.ai.philosophy

    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 10:40 AM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/20/2025 10:20 PM, Kaz Kylheku wrote:
    And when I identify a flaw you simply ignore
    whatever I say.

    Nope; all the ways you claim you've identified a flaw have been
    dissected by multiple people in much greater detail than they deserve.

    It is disingenuous to say that you've simply had your details ignored.

    Turing machines in general can only compute mappings
    from their inputs. The halting problem requires computing
    mappings that in some cases are not provided in the
    inputs therefore the halting problem is wrong.

    The halting problem positively does not propose anything
    like that, which would be gapingly wrong.

    It only seems that way because you are unable to

    No, it doesn't only seem that way. Thanks for playing.

    provide the actual mapping that the actual input
    to HHH(DD) specifies when DD is simulated by HHH
    according to the semantics of the C language,

    DD is a "finite string input" which specifies a behavior that is
    independent of what simulates it, and in what manner.

    When DD is simulated by HHH, the simulation is left incomplete.

    That is not permitted by the semantics of the source
    or target language in which DD is written;
    an incomplete simulation is an incorrect simulation.

    Thus, DD is not being simulated by HHH according to the semantics. The
    semantics say that there is a next statement or instruction to execute,
    which HHH neglects to do.

    Now that would be fine, because HHH's job isn't to evoke the
    full behavior of DD but only to predict whether it will halt.

    But HHH does that incorrectly; the correct halting status is 1,
    not 0.

    Thus HHH achieves neither a correct simulation, nor a correct
    appraisal of the halting status.

    even though I do remember that you did do this once.

    I must have accidentally written something that looked
    like crackpottery.
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Oct 22 12:11:56 2025
    From Newsgroup: comp.ai.philosophy

    On 10/22/2025 12:07 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 10:40 AM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/20/2025 10:20 PM, Kaz Kylheku wrote:
    And when I identify a flaw you simply ignore
    whatever I say.

    Nope; all the ways you claim you've identified a flaw have been
    dissected by multiple people in much greater detail than they deserve.

    It is disingenuous to say that you've simply had your details ignored.

    Turing machines in general can only compute mappings
    from their inputs. The halting problem requires computing
    mappings that in some cases are not provided in the
    inputs therefore the halting problem is wrong.

    The halting problem positively does not propose anything
    like that, which would be gapingly wrong.

    It only seems that way because you are unable to

    No, it doesn't only seem that way. Thanks for playing.

    provide the actual mapping that the actual input
    to HHH(DD) specifies when DD is simulated by HHH
    according to the semantics of the C language,

    DD is a "finite string input" which specifies a behavior that is
    independent of what simulates it,

    That is stupidly incorrect.
    That DD calls HHH(DD) (its own simulator) IS PART OF
    THE BEHAVIOR THAT THE INPUT TO HHH(DD) SPECIFIES.
  • From dbush@dbush.mobile@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Oct 22 13:30:20 2025
    From Newsgroup: comp.ai.philosophy

    On 10/22/2025 10:25 AM, olcott wrote:
    On 10/22/2025 8:50 AM, dbush wrote:
    On 10/22/2025 9:47 AM, olcott wrote:
    On 10/22/2025 8:00 AM, dbush wrote:
    On 10/22/2025 8:48 AM, olcott wrote:
    On 10/22/2025 7:25 AM, dbush wrote:
    On 10/22/2025 7:56 AM, olcott wrote:
    On 10/20/2025 10:20 PM, Kaz Kylheku wrote:
    On 2025-10-21, olcott <polcott333@gmail.com> wrote:
    On 10/20/2025 9:11 PM, Kaz Kylheku wrote:
    On 2025-10-21, olcott <polcott333@gmail.com> wrote:
    On 10/20/2025 8:27 PM, Kaz Kylheku wrote:
    On 2025-10-20, olcott <polcott333@gmail.com> wrote:
    On 10/20/2025 4:03 PM, Kaz Kylheku wrote:
    On 2025-10-20, olcott <polcott333@gmail.com> wrote:
    On 10/20/2025 1:29 PM, Kaz Kylheku wrote:
    On 2025-10-20, olcott <polcott333@gmail.com> wrote:
    On 10/19/2025 2:39 PM, Kaz Kylheku wrote:
    On 2025-10-19, dart200
    <user7160@newsgrouper.org.invalid> wrote:
    i don't get y polcott keep hanging onto ai for dear
    life. anyone with

    Throngs of dumb boomers are falling for AI generated videos, believing
    them to be real. This is much the same thing.

    AI is just another thing Olcott has no understanding of. He's not
    researched the fundamentals of what it means to train a language
    network, and how it is ultimately just token prediction.

    It excels at generating good syntax. The reason for that is that the
    vast amount of training data exhibits good syntax. (Where it has bad
    syntax, it is idiosyncratic; whereas good syntax is broadly shared.)

    I provide a basis to it and it does perform valid
    semantic logical entailment on this basis and shows

    But you're incapable of recognizing valid entailment from invalid.


    Any freaking idiot can spew out baseless rhetoric
    such as this. I could do the same sort of thing
    and say you are wrong and stupidly wrong.

    But you don't?

    It is a whole other ballgame when one attempts
    to point out actual errors that are not anchored
    in one's own lack of comprehension.

    You don't comprehend the pointing-out.


    You need to have a sound reasoning basis to prove
    that an error is an actual error.

    No; /YOU/ need to have sound reasonings to prove /YOUR/
    extraordinary claims. The burden is on you.

    We already have the solid reasoning which says things are other than as
    you say, and you don't have the faintest idea how to put a dent in it.


    In other words you assume that I must be wrong
    entirely on the basis that what I say does not
    conform to conventional wisdom.

    Yes; you are wrong entirely on the basis that what you say does not
    follow a valid mode of inference for refuting an argument.

    If you are trying to refute something which is not only a widely
    accepted result, but whose reasoning anyone can follow to see it
    for themselves, you are automatically assumed wrong.

    The established result is presumed correct, pending your
    presentation of a convincing argument.

    That's not just wanton arbitrariness: your claims are being
    directly refuted by elements of the established result which
    we can refer to.

    I cannot identify any flaw in the halting theorem. It's not simply
    that I believe it because of the Big Names attached to it.

    And when I identify a flaw you simply ignore
    whatever I say.

    Nope; all the ways you claim you've identified a flaw have been
    dissected by multiple people in much greater detail than they
    deserve.

    It is disingenuous to say that you've simply had your details
    ignored.


    Turing machines in general can only compute mappings
    from their inputs. The halting problem requires computing
    mappings that in some cases are not provided in the
    inputs therefore the halting problem is wrong.

    False:

    (<X>,Y) maps to 1 if and only if X(Y) halts when executed directly
    (<X>,Y) maps to 0 if and only if X(Y) does not halt when executed
    directly


    Yes, that is the exact error that I have been
    referring to.

    That is not an error.  That is simply a mapping that you have
    admitted exists.


    In the case of HHH(DD) the above requires HHH to
    report on the behavior of its caller

    False.  It requires HHH to report on the behavior of the machine
    described by its input.


    That includes that DD calls HHH(DD) in recursive
    simulation.

    Which therefore includes the fact that HHH(DD) will return 0 and that
    DD will subsequently halt.

    You keep ignoring that we are only focusing on
    DD correctly simulated by HHH.

    Which doesn't exist because HHH aborts.

  • From dbush@dbush.mobile@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Oct 22 13:36:19 2025
    From Newsgroup: comp.ai.philosophy

    On 10/22/2025 11:47 AM, olcott wrote:
    On 10/22/2025 10:40 AM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/20/2025 10:20 PM, Kaz Kylheku wrote:
    And when I identify a flaw you simply ignore
    whatever I say.

    Nope; all the ways you claim you've identified a flaw have been
    dissected by multiple people in much greater detail than they
    deserve.

    It is disingenuous to say that you've simply had your details ignored.

    Turing machines in general can only compute mappings
    from their inputs. The halting problem requires computing
    mappings that in some cases are not provided in the
    inputs therefore the halting problem is wrong.

    The halting problem positively does not propose anything
    like that, which would be gapingly wrong.


    It only seems that way because you are unable to
    provide the actual mapping that the actual input
    to HHH(DD) specifies when DD is simulated by HHH
    according to the semantics of the C language,

    Then you have no mapping since DD is NOT simulated by HHH according to
    the semantics of the C language because HHH aborts.


  • From dbush@dbush.mobile@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Oct 22 13:38:49 2025
    From Newsgroup: comp.ai.philosophy

    On 10/22/2025 1:11 PM, olcott wrote:
    On 10/22/2025 12:07 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 10:40 AM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/20/2025 10:20 PM, Kaz Kylheku wrote:
    And when I identify a flaw you simply ignore
    whatever I say.

    Nope; all the ways you claim you've identified a flaw have been
    dissected by multiple people in much greater detail than they
    deserve.

    It is disingenuous to say that you've simply had your details
    ignored.


    Turing machines in general can only compute mappings
    from their inputs. The halting problem requires computing
    mappings that in some cases are not provided in the
    inputs therefore the halting problem is wrong.

    The halting problem positively does not propose anything
    like that, which would be gapingly wrong.

    It only seems that way because you are unable to

    No, it doesn't only seem that way. Thanks for playing.

    provide the actual mapping that the actual input
    to HHH(DD) specifies when DD is simulated by HHH
    according to the semantics of the C language,

    DD is a "finite string input" which specifies a behavior that is
    independent of what simulates it,

    That is stupidly incorrect.
    That DD calls HHH(DD) (its own simulator) IS PART OF
    THE BEHAVIOR THAT THE INPUT TO HHH(DD) SPECIFIES.

    And the fact that HHH(DD) returns 0 causing DD to subsequently halt is
    also part of the behavior specified by finite string DD.
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Oct 22 18:40:54 2025
    From Newsgroup: comp.ai.philosophy

    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 12:07 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 10:40 AM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/20/2025 10:20 PM, Kaz Kylheku wrote:
    And when I identify a flaw you simply ignore
    whatever I say.

    Nope; all the ways you claim you've identified a flaw have been
    dissected by multiple people in much greater detail than they deserve.

    It is disingenuous to say that you've simply had your details ignored.

    Turing machines in general can only compute mappings
    from their inputs. The halting problem requires computing
    mappings that in some cases are not provided in the
    inputs therefore the halting problem is wrong.

    The halting problem positively does not propose anything
    like that, which would be gapingly wrong.

    It only seems that way because you are unable to

    No, it doesn't only seem that way. Thanks for playing.

    provide the actual mapping that the actual input
    to HHH(DD) specifies when DD is simulated by HHH
    according to the semantics of the C language,

    DD is a "finite string input" which specifies a behavior that is
    independent of what simulates it,

    That is stupidly incorrect.
    That DD calls HHH(DD) (its own simulator) IS PART OF
    THE BEHAVIOR THAT THE INPUT TO HHH(DD) SPECIFIES.

    In no way am I saying that DD is not built on HHH, and
    does not have a behavior dependent on that of HHH.
    Why would I ever say that?

    But that entire bundle is one fixed case DD, with a single behavior,
    which is a property of DD, which is a finite string.

    DD can be passed as an argument to any decider, not only HHH.

    For instance, don't you have a HHH1 such that HHH1(DD)
    correctly steps DD to the end and returns the correct value 1?

    DD's behavior is dependent on a decider which it calls;
    but not dependent on anything which is analyzing DD.

    Even when those two are the same, they are different
    instances/activations.

    DD creates an activation of HHH on whose result it depends.

    The definition of DD's behavior does not depend on the ongoing
    activation of something which happens to be analyzing it;
    it has no knowledge of that.
  • From André G. Isaak@agisaak@gm.invalid to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Oct 22 13:24:00 2025
    From Newsgroup: comp.ai.philosophy

    On 2025-10-22 12:40, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 12:07 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 10:40 AM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/20/2025 10:20 PM, Kaz Kylheku wrote:
    And when I identify a flaw you simply ignore
    whatever I say.

    Nope; all the ways you claim you've identified a flaw have been
    dissected by multiple people in much greater detail than they deserve.

    It is disingenuous to say that you've simply had your details ignored.

    Turing machines in general can only compute mappings
    from their inputs. The halting problem requires computing
    mappings that in some cases are not provided in the
    inputs therefore the halting problem is wrong.

    The halting problem positively does not propose anything
    like that, which would be gapingly wrong.

    It only seems that way because you are unable to

    No, it doesn't only seem that way. Thanks for playing.

    provide the actual mapping that the actual input
    to HHH(DD) specifies when DD is simulated by HHH
    according to the semantics of the C language,

    DD is a "finite string input" which specifies a behavior that is
    independent of what simulates it,

    That is stupidly incorrect.
    That DD calls HHH(DD) (its own simulator) IS PART OF
    THE BEHAVIOR THAT THE INPUT TO HHH(DD) SPECIFIES.

    In no way am I saying that DD is not built on HHH, and
    does not have a behavior dependent on that of HHH.
    Why would I ever say that?

    But that entire bundle is one fixed case DD, with a single behavior,
    which is a property of DD, which is a finite string.

    I think part of the problem here is that Olcott doesn't grasp that the
    "finite string input" DD *must* include as a substring the entire
    description of HHH.

    André

    DD can be passed as an argument to any decider, not only HHH.

    For instance, don't you have a HHH1 such that HHH1(DD)
    correctly steps DD to the end and returns the correct value 1?

    DD's behavior is dependent on a decider which it calls;
    but not dependent on anything which is analyzing DD.

    Even when those two are the same, they are different
    instances/activations.

    DD creates an activation of HHH on whose result it depends.

    The definition of DD's behavior does not depend on the ongoing
    activation of something which happens to be analyzing it;
    it has no knowledge of that.

    --
    To email remove 'invalid' & replace 'gm' with well known Google mail
    service.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Oct 22 14:30:13 2025
    From Newsgroup: comp.ai.philosophy

    On 10/22/2025 2:24 PM, André G. Isaak wrote:
    On 2025-10-22 12:40, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 12:07 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 10:40 AM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/20/2025 10:20 PM, Kaz Kylheku wrote:
    And when I identify a flaw you simply ignore
    whatever I say.

    Nope; all the ways you claim you've identified a flaw have been
    dissected by multiple people in much greater detail than they deserve.

    It is disingenuous to say that you've simply had your details ignored.


    Turing machines in general can only compute mappings
    from their inputs. The halting problem requires computing
    mappings that in some cases are not provided in the
    inputs therefore the halting problem is wrong.

    The halting problem positively does not propose anything
    like that, which would be gapingly wrong.

    It only seems that way because you are unable to

    No, it doesn't only seem that way. Thanks for playing.

    provide the actual mapping that the actual input
    to HHH(DD) specifies when DD is simulated by HHH
    according to the semantics of the C language,

    DD is a "finite string input" which specifies a behavior that is
    independent of what simulates it,

    That is stupidly incorrect.
    That DD calls HHH(DD) (its own simulator) IS PART OF
    THE BEHAVIOR THAT THE INPUT TO HHH(DD) SPECIFIES.

    In no way am I saying that DD is not built on HHH, and
    does not have a behavior dependent on that of HHH.
    Why would I ever say that?

    But that entire bundle is one fixed case DD, with a single behavior,
    which is a property of DD, which is a finite string.

    I think part of the problem here is that Olcott doesn't grasp that the "finite string input" DD *must* include as a substring the entire description of HHH.

    André


    That includes that HHH(DD) keeps simulating yet
    another instance of itself and DD forever and ever
    until it fully understands that no simulated DD
    can possibly ever reach its own final halt state.

    That five LLM systems immediately understood this
    and figured it all out on their own seems strong
    evidence that you are being disingenuous with me
    right now.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dbush@dbush.mobile@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Oct 22 15:31:25 2025
    From Newsgroup: comp.ai.philosophy

    On 10/22/2025 3:30 PM, olcott wrote:
    On 10/22/2025 2:24 PM, André G. Isaak wrote:
    On 2025-10-22 12:40, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 12:07 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 10:40 AM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/20/2025 10:20 PM, Kaz Kylheku wrote:
    And when I identify a flaw you simply ignore
    whatever I say.

    Nope; all the ways you claim you've identified a flaw have been
    dissected by multiple people in much greater detail than they deserve.

    It is disingenuous to say that you've simply had your details ignored.


    Turing machines in general can only compute mappings
    from their inputs. The halting problem requires computing
    mappings that in some cases are not provided in the
    inputs therefore the halting problem is wrong.

    The halting problem positively does not propose anything
    like that, which would be gapingly wrong.

    It only seems that way because you are unable to

    No, it doesn't only seem that way. Thanks for playing.

    provide the actual mapping that the actual input
    to HHH(DD) specifies when DD is simulated by HHH
    according to the semantics of the C language,

    DD is a "finite string input" which specifies a behavior that is
    independent of what simulates it,

    That is stupidly incorrect.
    That DD calls HHH(DD) (its own simulator) IS PART OF
    THE BEHAVIOR THAT THE INPUT TO HHH(DD) SPECIFIES.

    In no way am I saying that DD is not built on HHH, and
    does not have a behavior dependent on that of HHH.
    Why would I ever say that?

    But that entire bundle is one fixed case DD, with a single behavior,
    which is a property of DD, which is a finite string.

    I think part of the problem here is that Olcott doesn't grasp that the
    "finite string input" DD *must* include as a substring the entire
    description of HHH.

    André


    That includes that HHH(DD) keeps simulating yet
    another instance of itself and DD forever and ever

    False, as demonstrated by the fact that HHH(DD) returns.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Heathfield@rjh@cpax.org.uk to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Oct 22 20:34:47 2025
    From Newsgroup: comp.ai.philosophy

    On 22/10/2025 20:24, André G. Isaak wrote:
    On 2025-10-22 12:40, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 12:07 PM, Kaz Kylheku wrote:

    <snip>

    DD is a "finite string input" which specifies a behavior that is
    independent of what simulates it,

    That is stupidly incorrect.
    That DD calls HHH(DD) (its own simulator) IS PART OF
    THE BEHAVIOR THAT THE INPUT TO HHH(DD) SPECIFIES.

    In no way am I saying that DD is not built on HHH, and
    does not have a behavior dependent on that of HHH.
    Why would I ever say that?

    But that entire bundle is one fixed case DD, with a single
    behavior,
    which is a property of DD, which is a finite string.

    I think part of the problem here is that Olcott doesn't grasp
    that the "finite string input" DD *must* include as a substring
    the entire description of HHH.

    He also seems to be missing the fact that HHH's sole input is a
    function pointer that it immediately invalidates by casting the
    pointer into a uint32_t.

    HHH's ability to simulate DD is like a dog's walking on his hind
    legs. It doesn't work well, but you are surprised to find it
    working at all.
    --
    Richard Heathfield
    Email: rjh at cpax dot org dot uk
    "Usenet is a strange place" - dmr 29 July 1999
    With apologies to Dr Johnson.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Oct 22 19:52:34 2025
    From Newsgroup: comp.ai.philosophy

    On 2025-10-22, André G Isaak <agisaak@gm.invalid> wrote:
    On 2025-10-22 12:40, Kaz Kylheku wrote:
    But that entire bundle is one fixed case DD, with a single behavior,
    which is a property of DD, which is a finite string.

    I think part of the problem here is that Olcott doesn't grasp that the "finite string input" DD *must* include as a substring the entire description of HHH.

    Furthermore, he doesn't get that it doesn't literally have to be HHH,
    but the same algorithm: a workalike.

    The HHH analyzing DD's halting could be in C, while the HHH
    called by DD could be in Python.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Oct 22 14:55:23 2025
    From Newsgroup: comp.ai.philosophy

    On 10/22/2025 1:40 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 12:07 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 10:40 AM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/20/2025 10:20 PM, Kaz Kylheku wrote:
    And when I identify a flaw you simply ignore
    whatever I say.

    Nope; all the ways you claim you've identified a flaw have been
    dissected by multiple people in much greater detail than they deserve.

    It is disingenuous to say that you've simply had your details ignored.

    Turing machines in general can only compute mappings
    from their inputs. The halting problem requires computing
    mappings that in some cases are not provided in the
    inputs therefore the halting problem is wrong.

    The halting problem positively does not propose anything
    like that, which would be gapingly wrong.

    It only seems that way because you are unable to

    No, it doesn't only seem that way. Thanks for playing.

    provide the actual mapping that the actual input
    to HHH(DD) specifies when DD is simulated by HHH
    according to the semantics of the C language,

    DD is a "finite string input" which specifies a behavior that is
    independent of what simulates it,

    That is stupidly incorrect.
    That DD calls HHH(DD) (its own simulator) IS PART OF
    THE BEHAVIOR THAT THE INPUT TO HHH(DD) SPECIFIES.

    In no way am I saying that DD is not built on HHH, and
    does not have a behavior dependent on that of HHH.
    Why would I ever say that?

    But that entire bundle is one fixed case DD, with a single behavior,
    which is a property of DD, which is a finite string.


    That too is stupidly incorrect.
    It is the job of every simulating halt decider
    to predict what the behavior of its simulated
    input would be if it never aborted.

    When a person is asked a yes or no question
    there are not two separate people in parallel
    universes one that answers yes and one that
    answers no. There is one person that thinks
    through both hypothetical possibilities and
    then provides one answer.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Oct 22 15:00:47 2025
    From Newsgroup: comp.ai.philosophy

    On 10/22/2025 2:52 PM, Kaz Kylheku wrote:
    On 2025-10-22, André G Isaak <agisaak@gm.invalid> wrote:
    On 2025-10-22 12:40, Kaz Kylheku wrote:
    But that entire bundle is one fixed case DD, with a single behavior,
    which is a property of DD, which is a finite string.

    I think part of the problem here is that Olcott doesn't grasp that the
    "finite string input" DD *must* include as a substring the entire
    description of HHH.

    Furthermore, he doesn't get that it doesn't literally have to be HHH,
    but the same algorithm: a workalike.

    The HHH analyzing DD's halting could be in C, while the HHH
    called by DD could be in Python.


    DD does call HHH(DD) in recursive simulation
    and you try to get away with lying about it.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Oct 22 20:20:39 2025
    From Newsgroup: comp.ai.philosophy

    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 2:52 PM, Kaz Kylheku wrote:
    On 2025-10-22, André G Isaak <agisaak@gm.invalid> wrote:
    On 2025-10-22 12:40, Kaz Kylheku wrote:
    But that entire bundle is one fixed case DD, with a single behavior,
    which is a property of DD, which is a finite string.

    I think part of the problem here is that Olcott doesn't grasp that the
    "finite string input" DD *must* include as a substring the entire
    description of HHH.

    Furthermore, he doesn't get that it doesn't literally have to be HHH,
    but the same algorithm: a workalike.

    The HHH analyzing DD's halting could be in C, while the HHH
    called by DD could be in Python.

    DD does call HHH(DD) in recursive simulation
    and you try to get away with lying about it.

    I'm saying that's not a requirement in the halting problem.

    DD does not have to use that implementation of HHH; it can have
    its own clean-room implementation and it can be in any language.

    But nonetheless, yes, there will still be a nested simulation tower.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dbush@dbush.mobile@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Oct 22 16:24:04 2025
    From Newsgroup: comp.ai.philosophy

    On 10/22/2025 3:55 PM, olcott wrote:
    On 10/22/2025 1:40 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 12:07 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 10:40 AM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/20/2025 10:20 PM, Kaz Kylheku wrote:
    And when I identify a flaw you simply ignore
    whatever I say.

    Nope; all the ways you claim you've identified a flaw have been
    dissected by multiple people in much greater detail than they deserve.

    It is disingenuous to say that you've simply had your details ignored.


    Turing machines in general can only compute mappings
    from their inputs. The halting problem requires computing
    mappings that in some cases are not provided in the
    inputs therefore the halting problem is wrong.

    The halting problem positively does not propose anything
    like that, which would be gapingly wrong.

    It only seems that way because you are unable to

    No, it doesn't only seem that way. Thanks for playing.

    provide the actual mapping that the actual input
    to HHH(DD) specifies when DD is simulated by HHH
    according to the semantics of the C language,

    DD is a "finite string input" which specifies a behavior that is
    independent of what simulates it,

    That is stupidly incorrect.
    That DD calls HHH(DD) (its own simulator) IS PART OF
    THE BEHAVIOR THAT THE INPUT TO HHH(DD) SPECIFIES.

    In no way am I saying that DD is not built on HHH, and
    does not have a behavior dependent on that of HHH.
    Why would I ever say that?

    But that entire bundle is one fixed case DD, with a single behavior,
    which is a property of DD, which is a finite string.


    That too is stupidly incorrect.
    It is the job of every simulating halt decider
    to predict what the behavior of its simulated
    input would be if it never aborted.

    In other words, what would happen if that same input were given to a UTM.


    When a person is asked a yes or no question
    there are not two separate people in parallel
    universes one that answers yes and one that
    answers no. There is one person that thinks
    through both hypothetical possibilities and
    then provides one answer.


    Strawman. The halting problem is about the instructions themselves, not
    where instructions physically reside i.e. a particular person's brain.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Oct 22 15:35:06 2025
    From Newsgroup: comp.ai.philosophy

    On 10/22/2025 3:20 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 2:52 PM, Kaz Kylheku wrote:
    On 2025-10-22, André G Isaak <agisaak@gm.invalid> wrote:
    On 2025-10-22 12:40, Kaz Kylheku wrote:
    But that entire bundle is one fixed case DD, with a single behavior,
    which is a property of DD, which is a finite string.

    I think part of the problem here is that Olcott doesn't grasp that the
    "finite string input" DD *must* include as a substring the entire
    description of HHH.

    Furthermore, he doesn't get that it doesn't literally have to be HHH,
    but the same algorithm: a workalike.

    The HHH analyzing DD's halting could be in C, while the HHH
    called by DD could be in Python.

    DD does call HHH(DD) in recursive simulation
    and you try to get away with lying about it.

    I'm saying that's not a requirement in the halting problem.


    Yet again with deflection.
    That the input to HHH(DD) specifies non-halting and
    HHH(DD) correctly reports this proves that the
    proof does not prove its point or that the halting
    problem incorrectly requires HHH to report on
    behavior that the input to HHH(DD) does not specify.


    DD does not have to use that implementation of HHH; it can have
    its own clean-room implementation and it can be in any language.

    But nonetheless, yes, there will still be a nested simulation tower.

    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dbush@dbush.mobile@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Oct 22 16:43:59 2025
    From Newsgroup: comp.ai.philosophy

    On 10/22/2025 4:35 PM, olcott wrote:
    On 10/22/2025 3:20 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 2:52 PM, Kaz Kylheku wrote:
    On 2025-10-22, André G  Isaak <agisaak@gm.invalid> wrote:
    On 2025-10-22 12:40, Kaz Kylheku wrote:
    But that entire bundle is one fixed case DD, with a single behavior,
    which is a property of DD, which is a finite string.

    I think part of the problem here is that Olcott doesn't grasp that the
    "finite string input" DD *must* include as a substring the entire
    description of HHH.

    Furthermore, he doesn't get that it doesn't literally have to be HHH,
    but the same algorithm: a workalike.

    The HHH analyzing DD's halting could be in C, while the HHH
    called by DD could be in Python.

    DD does call HHH(DD) in recursive simulation
    and you try to get away with lying about it.

    I'm saying that's not a requirement in the halting problem.


    Yet again with deflection.
    That the input to HHH(DD) specifies non-halting
    False, as you have admitted otherwise:

    On 10/20/2025 11:51 PM, olcott wrote:
    On 10/20/2025 10:45 PM, dbush wrote:
    And it is a semantic tautology that a finite string description of a
    Turing machine is stipulated to specify all semantic properties of the
    described machine, including whether it halts when executed directly.
    And it is this semantic property that halt deciders are required to
    report on.

    Yes that is all correct

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Oct 22 16:12:38 2025
    From Newsgroup: comp.ai.philosophy

    On 10/22/2025 3:20 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 2:52 PM, Kaz Kylheku wrote:
    On 2025-10-22, André G Isaak <agisaak@gm.invalid> wrote:
    On 2025-10-22 12:40, Kaz Kylheku wrote:
    But that entire bundle is one fixed case DD, with a single behavior,
    which is a property of DD, which is a finite string.

    I think part of the problem here is that Olcott doesn't grasp that the
    "finite string input" DD *must* include as a substring the entire
    description of HHH.

    Furthermore, he doesn't get that it doesn't literally have to be HHH,
    but the same algorithm: a workalike.

    The HHH analyzing DD's halting could be in C, while the HHH
    called by DD could be in Python.

    DD does call HHH(DD) in recursive simulation
    and you try to get away with lying about it.

    I'm saying that's not a requirement in the halting problem.

    DD does not have to use that implementation of HHH; it can have
    its own clean-room implementation and it can be in any language.

    But nonetheless, yes, there will still be a nested simulation tower.


    I made sure to read what you said all the way through
    this time. DD correctly simulated by HHH cannot possibly
    reach its own final halt state no matter what HHH does.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Chris M. Thomasson@chris.m.thomasson.1@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Oct 22 14:32:20 2025
    From Newsgroup: comp.ai.philosophy

    On 10/22/2025 2:12 PM, olcott wrote:
    On 10/22/2025 3:20 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 2:52 PM, Kaz Kylheku wrote:
    On 2025-10-22, André G  Isaak <agisaak@gm.invalid> wrote:
    On 2025-10-22 12:40, Kaz Kylheku wrote:
    But that entire bundle is one fixed case DD, with a single behavior,
    which is a property of DD, which is a finite string.

    I think part of the problem here is that Olcott doesn't grasp that the
    "finite string input" DD *must* include as a substring the entire
    description of HHH.

    Furthermore, he doesn't get that it doesn't literally have to be HHH,
    but the same algorithm: a workalike.

    The HHH analyzing DD's halting could be in C, while the HHH
    called by DD could be in Python.

    DD does call HHH(DD) in recursive simulation
    and you try to get away with lying about it.

    I'm saying that's not a requirement in the halting problem.

    DD does not have to use that implementation of HHH; it can have
    its own clean-room implementation and it can be in any language.

    But nonetheless, yes, there will still be a nested simulation tower.


    I made sure to read what you said all the way through
    this time.

    This time? How many other times did you not read it at all? Just a skim,
    then your self-moron program kicks in? Hmm...

    DD correctly simulated by HHH cannot possibly
    reach its own final halt state no matter what HHH does.



    HHH(DD) can return 0 and DD halts? If not, just say that HHH(DD) always returns 1?
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dbush@dbush.mobile@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Oct 22 17:50:41 2025
    From Newsgroup: comp.ai.philosophy

    On 10/22/2025 5:12 PM, olcott wrote:
    On 10/22/2025 3:20 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 2:52 PM, Kaz Kylheku wrote:
    On 2025-10-22, André G  Isaak <agisaak@gm.invalid> wrote:
    On 2025-10-22 12:40, Kaz Kylheku wrote:
    But that entire bundle is one fixed case DD, with a single behavior,
    which is a property of DD, which is a finite string.

    I think part of the problem here is that Olcott doesn't grasp that the
    "finite string input" DD *must* include as a substring the entire
    description of HHH.

    Furthermore, he doesn't get that it doesn't literally have to be HHH,
    but the same algorithm: a workalike.

    The HHH analyzing DD's halting could be in C, while the HHH
    called by DD could be in Python.

    DD does call HHH(DD) in recursive simulation
    and you try to get away with lying about it.

    I'm saying that's not a requirement in the halting problem.

    DD does not have to use that implementation of HHH; it can have
    its own clean-room implementation and it can be in any language.

    But nonetheless, yes, there will still be a nested simulation tower.


    I made sure to read what you said all the way through
    this time. DD correctly simulated by HHH

    Does not exist because HHH aborts

    cannot possibly
    reach its own final halt state no matter what HHH does.

    But HHH is an algorithm which means it does exactly one thing and one
    thing only. Anything else is not HHH.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Oct 22 21:55:45 2025
    From Newsgroup: comp.ai.philosophy

    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 1:40 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 12:07 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 10:40 AM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/20/2025 10:20 PM, Kaz Kylheku wrote:
    And when I identify a flaw you simply ignore
    whatever I say.

    Nope; all the ways you claim you've identified a flaw have been
    dissected by multiple people in much greater detail than they deserve.

    It is disingenuous to say that you've simply had your details ignored.

    Turing machines in general can only compute mappings
    from their inputs. The halting problem requires computing
    mappings that in some cases are not provided in the
    inputs therefore the halting problem is wrong.

    The halting problem positively does not propose anything
    like that, which would be gapingly wrong.

    It only seems that way because you are unable to

    No, it doesn't only seem that way. Thanks for playing.

    provide the actual mapping that the actual input
    to HHH(DD) specifies when DD is simulated by HHH
    according to the semantics of the C language,

    DD is a "finite string input" which specifies a behavior that is
    independent of what simulates it,

    That is stupidly incorrect.
    That DD calls HHH(DD) (its own simulator) IS PART OF
    THE BEHAVIOR THAT THE INPUT TO HHH(DD) SPECIFIES.

    In no way am I saying that DD is not built on HHH, and
    does not have a behavior dependent on that of HHH.
    Why would I ever say that?

    But that entire bundle is one fixed case DD, with a single behavior,
    which is a property of DD, which is a finite string.

    That too is stupidly incorrect.
    It is the job of every simulating halt decider
    to predict what the behavior of its simulated
    input would be if it never aborted.

    DD is a fixed input string that is etched in stone. That string
    specifies a behavior which invokes a certain decider in self-reference
    and then behaves opposite.

    That decider is recorded inside that string, in every detail,
    and so is also etched in stone.

    That decider is aborting, and can be nothing else.

    No decider which is analyzing DD has the power to alter any
    aspect of that string.

    It is a non-negotiable fact that DD calls an aborting decider
    which returns 0 to it, subsequent to which DD halts; so
    the correct answer for DD is 1.

    The behavior of a correct simulation of DD that is not aborted is that
    DD terminates, and thus so does the simulation.

    Games played with the redefinition (actual or hypothetical) of a decider
    that is specified somewhere outside of that string have no effect on
    that string.

    The real halting problem doesn't deal with C and function pointers,
    where you can play games and have the test case use a pointer
    to the same function that is also analyzing it, and be influenced
    by its redefinition and other muddled confusions.

    Even if HHH assumes it is calculating something in relation to
    a hypothetically redefined HHH, that hypothesis does not extend
    into the input DD; it must not.

    You are talking about some angels-on-the-head-of-a-pin rubbish and not
    the Halting Problem.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Oct 22 17:14:08 2025
    From Newsgroup: comp.ai.philosophy

    On 10/22/2025 3:20 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 2:52 PM, Kaz Kylheku wrote:
    On 2025-10-22, André G Isaak <agisaak@gm.invalid> wrote:
    On 2025-10-22 12:40, Kaz Kylheku wrote:
    But that entire bundle is one fixed case DD, with a single behavior,
    which is a property of DD, which is a finite string.

    I think part of the problem here is that Olcott doesn't grasp that the
    "finite string input" DD *must* include as a substring the entire
    description of HHH.

    Furthermore, he doesn't get that it doesn't literally have to be HHH,
    but the same algorithm: a workalike.

    The HHH analyzing DD's halting could be in C, while the HHH
    called by DD could be in Python.

    DD does call HHH(DD) in recursive simulation
    and you try to get away with lying about it.

    I'm saying that's not a requirement in the halting problem.

    DD does not have to use that implementation of HHH; it can have
    its own clean-room implementation and it can be in any language.

    But nonetheless, yes, there will still be a nested simulation tower.


    Thus proving that DD correctly simulated by HHH
    cannot possibly reach its own simulated final halt
    state no matter what HHH does.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dbush@dbush.mobile@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Oct 22 18:33:05 2025
    From Newsgroup: comp.ai.philosophy

    On 10/22/2025 6:14 PM, olcott wrote:
    On 10/22/2025 3:20 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 2:52 PM, Kaz Kylheku wrote:
    On 2025-10-22, André G  Isaak <agisaak@gm.invalid> wrote:
    On 2025-10-22 12:40, Kaz Kylheku wrote:
    But that entire bundle is one fixed case DD, with a single behavior,
    which is a property of DD, which is a finite string.

    I think part of the problem here is that Olcott doesn't grasp that the
    "finite string input" DD *must* include as a substring the entire
    description of HHH.

    Furthermore, he doesn't get that it doesn't literally have to be HHH,
    but the same algorithm: a workalike.

    The HHH analyzing DD's halting could be in C, while the HHH
    called by DD could be in Python.

    DD does call HHH(DD) in recursive simulation
    and you try to get away with lying about it.

    I'm saying that's not a requirement in the halting problem.

    DD does not have to use that implementation of HHH; it can have
    its own clean-room implementation and it can be in any language.

    But nonetheless, yes, there will still be a nested simulation tower.


    Thus proving that DD correctly simulated by HHH

    Does not exist because HHH aborts

    cannot possibly reach its own simulated final halt
    state no matter what HHH does.

    Category error: algorithm HHH does one thing and one thing only, and
    that is an incomplete and therefore incorrect simulation.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Oct 22 23:01:39 2025
    From Newsgroup: comp.ai.philosophy

    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 3:20 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 2:52 PM, Kaz Kylheku wrote:
    On 2025-10-22, André G Isaak <agisaak@gm.invalid> wrote:
    On 2025-10-22 12:40, Kaz Kylheku wrote:
But that entire bundle is one fixed case DD, with a single behavior, which is a property of DD, which is a finite string.

I think part of the problem here is that Olcott doesn't grasp that the "finite string input" DD *must* include as a substring the entire
    description of HHH.

    Furthermore, he doesn't get that it doesn't literally have to be HHH,
    but the same algorithm: a workalike.

    The HHH analyzing DD's halting could be in C, while the HHH
    called by DD could be in Python.

    DD does call HHH(DD) in recursive simulation
    and you try to get away with lying about it.

    I'm saying that's not a requirement in the halting problem.

    DD does not have to use that implementation of HHH; it can have
    its own clean-room implementation and it can be in any language.

    But nonetheless, yes, there will still be a nested simulation tower.


    I made sure to read what you said all the way through
    this time. DD correctly simulated by HHH cannot possibly
    reach its own final halt state no matter what HHH does.

    The /simulation/ of DD by HHH will not /reproduce/ the halt
    state of DD, which DD undeniably /has/.

    DD specifies a procedure that transitions to a terminating state,
    whether any given simulation of it is carried far enough to show that.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Oct 22 23:15:32 2025
    From Newsgroup: comp.ai.philosophy

    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 3:20 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 2:52 PM, Kaz Kylheku wrote:
    On 2025-10-22, André G Isaak <agisaak@gm.invalid> wrote:
    On 2025-10-22 12:40, Kaz Kylheku wrote:
But that entire bundle is one fixed case DD, with a single behavior, which is a property of DD, which is a finite string.

I think part of the problem here is that Olcott doesn't grasp that the "finite string input" DD *must* include as a substring the entire
    description of HHH.

    Furthermore, he doesn't get that it doesn't literally have to be HHH,
    but the same algorithm: a workalike.

    The HHH analyzing DD's halting could be in C, while the HHH
    called by DD could be in Python.

    DD does call HHH(DD) in recursive simulation
    and you try to get away with lying about it.

    I'm saying that's not a requirement in the halting problem.

    DD does not have to use that implementation of HHH; it can have
    its own clean-room implementation and it can be in any language.

    But nonetheless, yes, there will still be a nested simulation tower.


    Thus proving that DD correctly simulated by HHH
    cannot possibly reach its own simulated final halt
    state no matter what HHH does.

    I explained that a nested simulation tower is two dimensional.

    One dimension is the simulation level, the nesting itself;
    that goes out to infinity. Due to the aborting behavior of HHH,
    it is not actually realized in simulation; we have to step
    through the aborted simulations to keep it going.

    The other dimension is the execution /within/ the simulations.
    That can be halting or non-halting.

    In the HHH(DD) simulation tower, though that is infinite,
    the simulations are halting.

    I said that before. Your memory of that has vaporized, and you have now
focused only on my statement that the simulation tower is infinite.

    The depth of the simulation tower, and the halting of the simulations
    within that tower, are independent phenomena.

    A decider must not mistake one for the other.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Oct 22 18:24:32 2025
    From Newsgroup: comp.ai.philosophy

    On 10/22/2025 6:15 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 3:20 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 2:52 PM, Kaz Kylheku wrote:
    On 2025-10-22, André G Isaak <agisaak@gm.invalid> wrote:
    On 2025-10-22 12:40, Kaz Kylheku wrote:
But that entire bundle is one fixed case DD, with a single behavior, which is a property of DD, which is a finite string.

I think part of the problem here is that Olcott doesn't grasp that the "finite string input" DD *must* include as a substring the entire
    description of HHH.

Furthermore, he doesn't get that it doesn't literally have to be HHH, but the same algorithm: a workalike.

    The HHH analyzing DD's halting could be in C, while the HHH
    called by DD could be in Python.

    DD does call HHH(DD) in recursive simulation
    and you try to get away with lying about it.

    I'm saying that's not a requirement in the halting problem.

    DD does not have to use that implementation of HHH; it can have
    its own clean-room implementation and it can be in any language.

    But nonetheless, yes, there will still be a nested simulation tower.


    Thus proving that DD correctly simulated by HHH
    cannot possibly reach its own simulated final halt
    state no matter what HHH does.

    I explained that a nested simulation tower is two dimensional.

    One dimension is the simulation level, the nesting itself;
    that goes out to infinity.

    Great. Thus the input to HHH(DD) specifies behavior
    such that the correctly simulated DD cannot possibly
    reach its own simulated final halt state.

    Due to the aborting behavior of HHH,
    it is not actually realized in simulation; we have to step
    through the aborted simulations to keep it going.

    The other dimension is the execution /within/ the simulations.
    That can be halting or non-halting.

    In the HHH(DD) simulation tower, though that is infinite,
    the simulations are halting.

I said that before. Your memory of that has vaporized, and you have now focused only on my statement that the simulation tower is infinite.

    The depth of the simulation tower, and the halting of the simulations
    within that tower, are independent phenomena.

    A decider must not mistake one for the other.

    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dbush@dbush.mobile@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Oct 22 20:14:35 2025
    From Newsgroup: comp.ai.philosophy

    On 10/22/2025 7:24 PM, olcott wrote:
    On 10/22/2025 6:15 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 3:20 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 2:52 PM, Kaz Kylheku wrote:
    On 2025-10-22, André G  Isaak <agisaak@gm.invalid> wrote:
    On 2025-10-22 12:40, Kaz Kylheku wrote:
    But that entire bundle is one fixed case DD, with a single
    behavior,
    which is a property of DD, which is a finite string.

    I think part of the problem here is that Olcott doesn't grasp
    that the
"finite string input" DD *must* include as a substring the entire description of HHH.

Furthermore, he doesn't get that it doesn't literally have to be HHH, but the same algorithm: a workalike.

    The HHH analyzing DD's halting could be in C, while the HHH
    called by DD could be in Python.

    DD does call HHH(DD) in recursive simulation
    and you try to get away with lying about it.

    I'm saying that's not a requirement in the halting problem.

    DD does not have to use that implementation of HHH; it can have
    its own clean-room implementation and it can be in any language.

    But nonetheless, yes, there will still be a nested simulation tower.


    Thus proving that DD correctly simulated by HHH
    cannot possibly reach its own simulated final halt
    state no matter what HHH does.

    I explained that a nested simulation tower is two dimensional.

    One dimension is the simulation level, the nesting itself;
    that goes out to infinity.

    Great. Thus the input to HHH(DD)

    i.e. finite string DD which is the description of machine DD i.e. <DD>
    and therefore stipulated to specify all semantic properties of machine
    DD including the fact that it halts when executed directly.

    specifies behavior
    such that the correctly simulated DD

    i.e. UTM(DD)

    cannot possibly
    reach its own simulated final halt state.

    False, as proven by UTM(DD) halting.


    Due to the aborting behavior of HHH,
    it is not actually realized in simulation; we have to step
    through the aborted simulations to keep it going.

    The other dimension is the execution /within/ the simulations.
    That can be halting or non-halting.

    In the HHH(DD) simulation tower, though that is infinite,
    the simulations are halting.

    I said that before. Your memory of that has vaporized, and you have now
focused only on my statement that the simulation tower is infinite.

    The depth of the simulation tower, and the halting of the simulations
    within that tower, are independent phenomena.

    A decider must not mistake one for the other.




    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Thu Oct 23 01:22:41 2025
    From Newsgroup: comp.ai.philosophy

    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 6:15 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 3:20 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 2:52 PM, Kaz Kylheku wrote:
    On 2025-10-22, André G Isaak <agisaak@gm.invalid> wrote:
    On 2025-10-22 12:40, Kaz Kylheku wrote:
But that entire bundle is one fixed case DD, with a single behavior, which is a property of DD, which is a finite string.

I think part of the problem here is that Olcott doesn't grasp that the "finite string input" DD *must* include as a substring the entire description of HHH.

Furthermore, he doesn't get that it doesn't literally have to be HHH, but the same algorithm: a workalike.

    The HHH analyzing DD's halting could be in C, while the HHH
    called by DD could be in Python.

    DD does call HHH(DD) in recursive simulation
    and you try to get away with lying about it.

    I'm saying that's not a requirement in the halting problem.

    DD does not have to use that implementation of HHH; it can have
    its own clean-room implementation and it can be in any language.

    But nonetheless, yes, there will still be a nested simulation tower.


    Thus proving that DD correctly simulated by HHH
    cannot possibly reach its own simulated final halt
    state no matter what HHH does.

    I explained that a nested simulation tower is two dimensional.

    One dimension is the simulation level, the nesting itself;
    that goes out to infinity.

    Great. Thus the input to HHH(DD) specifies behavior
    such that the correctly simulated DD cannot possibly
    reach its own simulated final halt state.

    People replying to you respond to multiple points. Yet you typically
    only read one point of a response.

    You snipped all this:

    Due to the aborting behavior of HHH,
    it is not actually realized in simulation; we have to step
    through the aborted simulations to keep it going.

    The other dimension is the execution /within/ the simulations.
    That can be halting or non-halting.

    In the HHH(DD) simulation tower, though that is infinite,
    the simulations are halting.

    I said that before. Your memory of that has vaporized, and you have now
focused only on my statement that the simulation tower is infinite.

    The depth of the simulation tower, and the halting of the simulations
    within that tower, are independent phenomena.

    A decider must not mistake one for the other.

    There being endless nested simulations doesn't imply that the
    simulations are nonterminating.

    If we simply do this:

void fun(void)
{
    sim_t s = simulation_create(fun);
    return;
}

    we get an infinite tower of simulations, all of which terminate.

    When fun() is called, it creates a simulation beginning at fun.
    No step of this simulation is performed, yet it exists.

    Then fun terminates.

No simulation has actually started, but we have a simulation
state which implies an infinite tower.

If the simulation s abandoned by fun is stepped, then soon,
inside that simulation, fun will be called, and will create another
simulation and exit.

    Then if we simulate that the same thing will happen.

    Suppose the simulation_create module provides a simulate_run
    function which identifies all/any unfinished simulations and
    runs them.

    Then if we do this:

int main()
{
    fun();
    simulate_run();
}

    simulate_run() will get into an infinite loop inside of
    which it is always completing simulations of fun, which
    are creating new simulations.

    That won't even run out of memory because it's not recursion.

simulation_create() dynamically allocates a simulation. If
simulate_run() calls simulation_destroy() whenever it detects that it
has completed a simulation, then I think the situation can hit a steady
state; it runs forever, continuously launching and terminating
simulations.

    But we cannot call fun itself non-halting. It has facilitated
    the infinite generation of simulations, but is itself halting.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Oct 22 20:47:08 2025
    From Newsgroup: comp.ai.philosophy

    On 10/22/2025 6:15 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 3:20 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 2:52 PM, Kaz Kylheku wrote:
    On 2025-10-22, André G Isaak <agisaak@gm.invalid> wrote:
    On 2025-10-22 12:40, Kaz Kylheku wrote:
But that entire bundle is one fixed case DD, with a single behavior, which is a property of DD, which is a finite string.

I think part of the problem here is that Olcott doesn't grasp that the "finite string input" DD *must* include as a substring the entire
    description of HHH.

Furthermore, he doesn't get that it doesn't literally have to be HHH, but the same algorithm: a workalike.

    The HHH analyzing DD's halting could be in C, while the HHH
    called by DD could be in Python.

    DD does call HHH(DD) in recursive simulation
    and you try to get away with lying about it.

    I'm saying that's not a requirement in the halting problem.

    DD does not have to use that implementation of HHH; it can have
    its own clean-room implementation and it can be in any language.

    But nonetheless, yes, there will still be a nested simulation tower.


    Thus proving that DD correctly simulated by HHH
    cannot possibly reach its own simulated final halt
    state no matter what HHH does.

    I explained that a nested simulation tower is two dimensional.

    One dimension is the simulation level, the nesting itself;
    that goes out to infinity.

    Great. Thus the input to HHH(DD) specifies behavior
    such that the correctly simulated DD cannot possibly
    reach its own simulated final halt state.

    *The above point is the only relevant point to my proof*

    We need to proceed from this one point to the next points
    that are semantically entailed from this one point then
    we have my whole proof.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dbush@dbush.mobile@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Oct 22 22:13:59 2025
    From Newsgroup: comp.ai.philosophy

    On 10/22/2025 9:47 PM, olcott wrote:
    On 10/22/2025 6:15 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 3:20 PM, Kaz Kylheku wrote:
    On 2025-10-22, olcott <polcott333@gmail.com> wrote:
    On 10/22/2025 2:52 PM, Kaz Kylheku wrote:
    On 2025-10-22, André G  Isaak <agisaak@gm.invalid> wrote:
    On 2025-10-22 12:40, Kaz Kylheku wrote:
    But that entire bundle is one fixed case DD, with a single
    behavior,
    which is a property of DD, which is a finite string.

    I think part of the problem here is that Olcott doesn't grasp
    that the
"finite string input" DD *must* include as a substring the entire description of HHH.

Furthermore, he doesn't get that it doesn't literally have to be HHH, but the same algorithm: a workalike.

    The HHH analyzing DD's halting could be in C, while the HHH
    called by DD could be in Python.

    DD does call HHH(DD) in recursive simulation
    and you try to get away with lying about it.

    I'm saying that's not a requirement in the halting problem.

    DD does not have to use that implementation of HHH; it can have
    its own clean-room implementation and it can be in any language.

    But nonetheless, yes, there will still be a nested simulation tower.


    Thus proving that DD correctly simulated by HHH
    cannot possibly reach its own simulated final halt
    state no matter what HHH does.

    I explained that a nested simulation tower is two dimensional.

    One dimension is the simulation level, the nesting itself;
    that goes out to infinity.

    Great. Thus the input to HHH(DD) specifies behavior
    such that the correctly simulated DD cannot possibly
    reach its own simulated final halt state.

    Repeat of previously refuted point:

    On 10/22/2025 8:14 PM, dbush wrote:
    On 10/22/2025 7:24 PM, olcott wrote:
    Great. Thus the input to HHH(DD)

    i.e. finite string DD which is the description of machine DD i.e. <DD>
    and therefore stipulated to specify all semantic properties of machine
    DD including the fact that it halts when executed directly.

    specifies behavior
    such that the correctly simulated DD

    i.e. UTM(DD)

    cannot possibly
    reach its own simulated final halt state.

    False, as proven by UTM(DD) halting.

    This constitutes your admission that:
    1) the prior refutation is correct
    2) the point you are responding to is correct

    --- Synchronet 3.21a-Linux NewsLink 1.2