On 2025-10-21, olcott <polcott333@gmail.com> wrote:
On 10/20/2025 8:27 PM, Kaz Kylheku wrote:
On 2025-10-20, olcott <polcott333@gmail.com> wrote:
On 10/20/2025 4:03 PM, Kaz Kylheku wrote:
On 2025-10-20, olcott <polcott333@gmail.com> wrote:
On 10/20/2025 1:29 PM, Kaz Kylheku wrote:
On 2025-10-20, olcott <polcott333@gmail.com> wrote:
On 10/19/2025 2:39 PM, Kaz Kylheku wrote:
On 2025-10-19, dart200 <user7160@newsgrouper.org.invalid> wrote:
i don't get y polcott keep hanging onto ai for dear life. anyone with
Throngs of dumb boomers are falling for AI generated videos, believing
them to be real. This is much the same thing.
AI is just another thing Olcott has no understanding of. He's not
researched the fundamentals of what it means to train a language
network, and how it is ultimately just token prediction.
It excels at generating good syntax. The reason for that is that the
vast amount of training data exhibits good syntax. (Where it has bad
syntax, it is idiosyncratic; whereas good syntax is broadly shared.)
I provide a basis to it and it does perform valid
semantic logical entailment on this basis and shows
But you're incapable of recognizing valid entailment from invalid.
Any freaking idiot can spew out baseless rhetoric
such as this. I could do the same sort of thing
and say you are wrong and stupidly wrong.
But you don't?
It is a whole other ballgame when one attempts
to point out actual errors that are not anchored
in one's own lack of comprehension.
You don't comprehend the pointing-out.
You need to have a sound reasoning basis to prove
that an error is an actual error.
No; /YOU/ need to have sound reasonings to prove /YOUR/
extraordinary claims. The burden is on you.
We already have the solid reasoning which says things are other than as
you say, and you don't have the faintest idea how to put a dent in it.
In other words you assume that I must be wrong
entirely on the basis that what I say does not
conform to conventional wisdom.
Yes; you are wrong entirely on the basis that what you say does not
follow a valid mode of inference for refuting an argument.
If you are trying to refute something which is not only a widely
accepted result, but whose reasoning anyone can follow to see it
for themselves, you are automatically assumed wrong.
The established result is presumed correct, pending your
presentation of a convincing argument.
That's not just wanton arbitrariness: your claims are being
directly refuted by elements of the established result which
we can refer to.
I cannot identify any flaw in the halting theorem. It's not simply
that I believe it because of the Big Names attached to it.
I'm convinced by the argumentation; and that conviction has
the side effect of convincing me of the falsehood of your
ineffective, contrary argumentation.
That is not any actual rebuttal of the specific points that I make.
No, indeed /that/ isn't; but plenty of those have also been made not
only by me but various others, over a considerable time span.
On 10/20/2025 9:11 PM, Kaz Kylheku wrote:
I cannot identify any flaw in the halting theorem. It's not simply
that I believe it because of the Big Names attached to it.
And when I identify a flaw, you simply ignore
whatever I say.
I'm convinced by the argumentation; and that conviction has
the side effect of convincing me of the falsehood of your
ineffective, contrary argumentation.
Not really; it actually gives you the bias to refuse
to pay attention.
That is not any actual rebuttal of the specific points that I make.
No, indeed /that/ isn't; but plenty of those have also been made not
only by me but various others, over a considerable time span.
Never any actual rebuttal ever since Professor
Sipser agreed with my words. Those exact same
words still form the basis of my whole proof.
Fritz Feldhase <franz.fri...@gmail.com> writes:
On Monday, March 6, 2023 at 3:56:52 AM UTC+1, olcott wrote:
On 3/5/2023 8:33 PM, Fritz Feldhase wrote:
On Monday, March 6, 2023 at 3:30:38 AM UTC+1, olcott wrote:
Does Sipser support your view/claim that you have refuted the
halting theorem?
I needed Sipser for people [bla]
Professor Sipser only agreed that [...]
Does he write/teach that the halting theorem is invalid?
Tell us, oh genius!
So the answer is no. Noted.
Because he has >250 students he did not have time to examine anything
else. [...]
Oh, a CS professor does not have the time to check a refutation of the
halting theorem. *lol*
I exchanged emails with him about this. He does not agree with anything
substantive that PO has written. I won't quote him, as I don't have
permission, but he was, let's say... forthright, in his reply to me.
joes <noreply@example.org> writes:
On Wed, 21 Aug 2024 20:55:52 -0500, olcott wrote:
Professor Sipser clearly agreed that an H that does a finite simulation
of D is to predict the behavior of an unlimited simulation of D.
If the simulator *itself* would not abort. The H called by D is,
by construction, the same and *does* abort.
We don't really know what context Sipser was given. I got in touch at
the time, so I do know he had enough context to know that PO's ideas
were "wacky" and that he had agreed to what he considered a "minor
remark".
Since PO considers his words finely crafted and key to his so-called
work, I think it's clear that Sipser did not take the "minor remark" he
agreed to to mean what PO takes it to mean! My own take is that he
(Sipser) read it as a general remark about how to determine some cases,
i.e. that D names an input that H can partially simulate to determine
its halting or otherwise. We all know of, or could construct, some such
cases.
I suspect he was tricked because PO used H and D as the names without
making it clear that D was constructed from H in the usual way (Sipser
uses H and D in at least one of his proofs). Of course, he is clued in
enough to know that, if D is indeed constructed from H like that, the
"minor remark" becomes true by being a hypothetical: if the moon is made
of cheese, the Martians can look forward to a fine fondue. But,
personally, I think the professor is more straight-talking than that,
and he simply took it as a method that can work for some inputs. That's
the only way it could be seen as a "minor remark" without being accused
of being disingenuous.
So that PO will have no cause to quote me as supporting his case: what Sipser understood he was agreeing to was NOT what PO interprets it as meaning. Sipser would not agree that the conclusion applies in PO's HHH(DDD) scenario, where DDD halts.
PO is trying to interpret Sipser's quote:
--- Start Sipser quote
If simulating halt decider H correctly simulates its input D
until H correctly determines that its simulated D would never
stop running unless aborted then
H can abort its simulation of D and correctly report that D
specifies a non-halting sequence of configurations.
--- End Sipser quote
The following interpretation is ok:
If H is given input D, and while simulating D gathers enough
information to deduce that UTM(D) would never halt, then
H can abort its simulation and decide D never halts.
I'd say it's obvious that this is what Sipser is saying, because it's natural, correct, and relevant to what was being discussed (valid
strategy for a simulating halt decider). It is trivial to check that
what my interpretation says is valid:
if UTM(D) would never halt, then D never halts, so if H(D) returns
never_halts then that is the correct answer for the input. QED :)
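To make that concrete, here is a minimal, self-contained C sketch of the
valid strategy in a toy setting where soundness is easy to see: programs
are deterministic machines over a finite state space, so any repeated
state during simulation proves the run would never stop unless aborted.
All the names here (toy_step, simulating_decider, NSTATES) are
illustrative inventions for this sketch, not anyone's actual code.

#include <stdio.h>
#include <stdbool.h>

#define NSTATES 16
#define HALT_STATE 0

/* One deterministic step: next[s] is the successor of state s. */
static int toy_step(const int next[NSTATES], int s)
{
    return next[s];
}

/* Simulate from 'start'; return true iff the machine halts.
   With a finite state space and deterministic steps, revisiting a
   state proves the simulated run would never stop unless aborted,
   so aborting and reporting non-halting is sound. */
static bool simulating_decider(const int next[NSTATES], int start)
{
    bool seen[NSTATES] = { false };
    int s = start;
    while (s != HALT_STATE) {
        if (seen[s])
            return false;   /* abort: provably non-halting */
        seen[s] = true;
        s = toy_step(next, s);
    }
    return true;            /* reached the halt state */
}

int main(void)
{
    int next[NSTATES] = { 0, 2, 3, 1, 0 };  /* 1->2->3->1 loops; 4->0 halts */
    printf("from 1: %s\n", simulating_decider(next, 1) ? "halts" : "loops");
    printf("from 4: %s\n", simulating_decider(next, 4) ? "halts" : "loops");
    return 0;
}

That is exactly the shape of the agreed statement: abort only when
non-halting has been soundly established. For Turing-complete programs no
such test covers all cases, which is why the remark is valid without
implying a general halt decider.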
On 10/20/2025 11:00 PM, olcott wrote:
Never any actual rebuttal ever since Professor
Sipser agreed with my words. Those exact same
words still form the basis of my whole proof.
You mean the words where he didn't agree with your interpretation of them?
On 10/20/2025 10:05 PM, dbush wrote:
You mean the words where he didn't agree with your interpretation of
them?
According to a Claude AI analysis there
are only two interpretations; one of
them is wrong and the other one is my
interpretation.
Never any actual rebuttal ever since Professor
Sipser agreed with my words. Those exact same
words still form the basis of my whole proof.
*Here is an accurate paraphrase of those words*
<Input to LLM systems>
Perform the complete simulation specified in main() and provide a
full execution trace before doing anything else.
Simulating Termination Analyzer HHH correctly simulates its input until:
(a) It correctly detects that its simulated input cannot
possibly reach its own simulated final halt state then:
abort simulation and return 0 rejecting its input as non-halting.
On 10/20/2025 11:13 PM, olcott wrote:
According to a Claude AI analysis there
are only two interpretations; one of
them is wrong and the other one is my
interpretation.
Whether you think one interpretation is wrong is irrelevant. What is
relevant is that that's how everyone else, including Sipser, interpreted
those words, so you lie by implying that he agrees with your
interpretation.
On 2025-10-21, olcott <polcott333@gmail.com> wrote:
And when I identify a flaw, you simply ignore
whatever I say.
Nope; all the ways you claim to have identified a flaw have been
dissected by multiple people in much greater detail than they deserve.
It is disingenuous to say that you've simply had your details ignored.
I'm convinced by the argumentation; and that conviction has
the side effect of convincing me of the falsehood of your
ineffective, contrary argumentation.
Not really; it actually gives you the bias to refuse
to pay attention.
LOL! the world at large is incredibly biased against giving a crank
like you any attention.
On 10/20/2025 10:16 PM, dbush wrote:
<repeat of previously refuted point>
On 10/20/2025 10:20 PM, Kaz Kylheku wrote:
Nope; all the ways you claim to have identified a flaw have been
dissected by multiple people in much greater detail than they deserve.
It is disingenuous to say that you've simply had your details ignored.
Turing machines in general can only compute mappings
from their inputs. The halting problem requires computing
mappings that in some cases are not provided in the
inputs; therefore the halting problem is wrong.
On 10/22/2025 7:56 AM, olcott wrote:
Turing machines in general can only compute mappings
from their inputs. The halting problem requires computing
mappings that in some cases are not provided in the
inputs; therefore the halting problem is wrong.
False:
(<X>,Y) maps to 1 if and only if X(Y) halts when executed directly
(<X>,Y) maps to 0 if and only if X(Y) does not halt when executed directly
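To see why that mapping is well-defined even for the diagonal case, here
is a minimal self-contained C sketch (not the actual x86utm code; this
HHH is a deliberately trivial placeholder). DD is itself a finite string
of code, and the mapping assigns it a definite value; the theorem's
content is only that no definition of HHH computes that value correctly
for its own DD.

#include <stdio.h>

static int DD(void);

/* Placeholder decider: any fixed verdict serves the sketch, because
   DD is built to contradict whatever HHH predicts about it. */
static int HHH(int (*p)(void)) { (void)p; return 0; /* "does not halt" */ }

static int DD(void)
{
    if (HHH(DD))      /* do the opposite of HHH's verdict */
        for (;;) ;    /* HHH said "halts": loop forever   */
    return 0;         /* HHH said "loops": halt at once   */
}

int main(void)
{
    /* HHH(DD) == 0 ("does not halt"), yet DD() returns, i.e. halts,
       so this HHH reports the wrong value of the mapping at DD. */
    printf("HHH(DD) = %d\n", HHH(DD));
    printf("DD()    = %d (halted)\n", DD());
    return 0;
}

Change the placeholder to return 1 and DD() loops forever instead; either
way the mapping exists, and this HHH fails to compute it at exactly one
constructible input.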
On 10/22/2025 7:25 AM, dbush wrote:
False:
(<X>,Y) maps to 1 if and only if X(Y) halts when executed directly
(<X>,Y) maps to 0 if and only if X(Y) does not halt when executed
directly
Yes, that is the exact error that I have been
referring to.
In the case of HHH(DD), the above requires HHH to
report on the behavior of its caller, and HHH has
no way to even know who its caller is.
My simulating halt decider exposed the gap of
false assumptions.
The only false assumption is that the above requirements can be met.
On 10/22/2025 8:48 AM, olcott wrote:
Yes, that is the exact error that I have been
referring to.
That is not an error. That is simply a mapping that you have admitted exists.
In the case of HHH(DD), the above requires HHH to
report on the behavior of its caller.
False. It requires HHH to report on the behavior of the machine
described by its input.
On 10/22/2025 8:00 AM, dbush wrote:
In the case of HHH(DD), the above requires HHH to
report on the behavior of its caller.
False. It requires HHH to report on the behavior of the machine
described by its input.
That includes the fact that DD calls HHH(DD) in recursive
simulation.
On 10/22/2025 9:47 AM, olcott wrote:
On 10/22/2025 8:00 AM, dbush wrote:
On 10/22/2025 8:48 AM, olcott wrote:
On 10/22/2025 7:25 AM, dbush wrote:
On 10/22/2025 7:56 AM, olcott wrote:
On 10/20/2025 10:20 PM, Kaz Kylheku wrote:
On 2025-10-21, olcott <polcott333@gmail.com> wrote:
On 10/20/2025 9:11 PM, Kaz Kylheku wrote:
On 2025-10-21, olcott <polcott333@gmail.com> wrote:
On 10/20/2025 8:27 PM, Kaz Kylheku wrote:
On 2025-10-20, olcott <polcott333@gmail.com> wrote:
On 10/20/2025 4:03 PM, Kaz Kylheku wrote:
On 2025-10-20, olcott <polcott333@gmail.com> wrote: >>>>>>>>>>>>>> On 10/20/2025 1:29 PM, Kaz Kylheku wrote:
On 2025-10-20, olcott <polcott333@gmail.com> wrote: >>>>>>>>>>>>>>>> On 10/19/2025 2:39 PM, Kaz Kylheku wrote:
But you're incapable of recognizing valid entailment from >>>>>>>>>>>>>>> invalid.On 2025-10-19, dart200
<user7160@newsgrouper.org.invalid> wrote:
i don't get y polcott keep hanging onto ai for dear >>>>>>>>>>>>>>>>>> life. anyone with
Throngs of dumb boomers are falling for AI generated >>>>>>>>>>>>>>>>> videos, believing
them to be real. This is much the same thing. >>>>>>>>>>>>>>>>>
AI is just another thing Olcott has no understanding >>>>>>>>>>>>>>>>> of. He's not
researched the fundamentals of what it means to train a >>>>>>>>>>>>>>>>> language
network, and how it is ultimately just token prediction. >>>>>>>>>>>>>>>>>
It excels at generating good syntax. The reason for >>>>>>>>>>>>>>>>> that is that the
vast amount of training data exhibits good syntax. >>>>>>>>>>>>>>>>> (Where it has bad
syntax, it is idiosyncratic; whereas good syntax is >>>>>>>>>>>>>>>>> broadly shared.)
I provide a basis to it and it does perform valid >>>>>>>>>>>>>>>> semantic logical entailment on this basis and shows >>>>>>>>>>>>>>>
Any freaking idiot can spew out baseless rhetoric
such as this. I could do the same sort of thing
and say you are wrong and stupidly wrong.
But you don't?
It is a whole other ballgame when one attempts
to point out actual errors that are not anchored
in one's own lack of comprehension.
You don't comprehend the pointing-out.
You need to have a sound reasoning basis to prove
that an error is an actual error.
No; /YOU/ need to have sound reasonings to prove /YOUR/
extraordinary claims. The burden is on you.
We already have the solid reasoning which says things are >>>>>>>>>>> other than as
you say, and you don't have the faintest idea how to put a >>>>>>>>>>> dent in it.
In other words you assume that I must be wrong
entirely on the basis that what I say does not
conform to conventional wisdom.
Yes; you are wrong entirely on the basis that what you say does >>>>>>>>> not
follow a valid mode of inference for refuting an argument.
If you are trying to refute something which is not only a widely >>>>>>>>> accepted result, but whose reasoning anyone can follow to see it >>>>>>>>> for themselves, you are automatically assumed wrong.
The established result is presumed correct, pending your
presentation of a convincing argument.
That's not just wanton arbitrariness: your claims are being
directly refuted by elements of the established result which >>>>>>>>> we can refer to.
I cannot identify any flaw in the halting theorem. It's not simply >>>>>>>>> that I believe it because of the Big Names attached to it.
And when I identify a flaw you simply ignore
whatever I say.
Nope; all the ways you claim you've identified a flaw have been
dissected by multiple people in much greater detail than they
deserve.
It is disingenuous to say that you've simply had your details
ignored.
Turing machines in general can only compute mappings
from their inputs. The halting problem requires computing
mappings that in some cases are not provided in the
inputs; therefore the halting problem is wrong.
False:
(<X>,Y) maps to 1 if and only if X(Y) halts when executed directly
(<X>,Y) maps to 0 if and only if X(Y) does not halt when executed directly
Yes, that is the exact error that I have been
referring to.
That is not an error. That is simply a mapping that you have
admitted exists.
In the case of HHH(DD) the above requires HHH to
report on the behavior of its caller
False. It requires HHH to report on the behavior of the machine
described by its input.
That includes that DD calls HHH(DD) in recursive
simulation.
Which therefore includes the fact that HHH(DD) will return 0 and that DD will subsequently halt.
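For readers skimming, here is a minimal, self-contained C sketch of the construction being argued about. The real HHH is a simulating halt decider whose internals never appear in this thread; the stand-in below just hard-codes the return value 0 that the thread attributes to HHH(DD). Running it makes dbush's point concrete: with HHH(DD) == 0, DD never enters its infinite loop and halts.

#include <stdio.h>

int DD(void);

/* Stand-in for the simulating halt decider. The real HHH simulates
   its input and aborts on a non-halting pattern; per the thread,
   HHH(DD) returns 0, so that value is hard-coded here. */
int HHH(int (*subject)(void))
{
    (void)subject;
    return 0;
}

int DD(void)
{
    int Halt_Status = HHH(DD);
    if (Halt_Status)          /* HHH returned 0: branch not taken */
    HERE: goto HERE;          /* so the infinite loop is never entered */
    return Halt_Status;       /* DD halts, returning 0 */
}

int main(void)
{
    printf("HHH(DD) = %d\n", HHH(DD));  /* prints 0 */
    printf("DD()    = %d\n", DD());     /* DD halts and prints 0 */
}

Nothing here decides anything; the sketch only fixes the control flow of DD so the later arguments have something concrete to point at.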
On 10/20/2025 10:20 PM, Kaz Kylheku wrote:
And when I identify a flaw you simply ignore
whatever I say.
Nope; all the ways you claim you've identified a flaw have been
dissected by multiple people in much greater detail than they deserve.
It is disingenuous to say that you've simply had your details ignored.
Turing machines in general can only compute mappings
from their inputs. The halting problem requires computing
mappings that in some cases are not provided in the
inputs; therefore the halting problem is wrong.
Blah, Blah Blah, no Olcott you are wrong, I know
that you are wrong because I simply don't believe you.
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/20/2025 10:20 PM, Kaz Kylheku wrote:
And when I identify a flaw you simply ignore
whatever I say.
Nope; all the ways you claim you've identified a flaw have been
dissected by multiple people in much greater detail than they deserve.
It is disingenuous to say that you've simply had your details ignored.
Turing machines in general can only compute mappings
from their inputs. The halting problem requires computing
mappings that in some cases are not provided in the
inputs; therefore the halting problem is wrong.
The halting problem positively does not propose anything
like that, which would be gapingly wrong.
Blah, Blah Blah, no Olcott you are wrong, I know
that you are wrong because I simply don't believe you.
You are wrong because (1) I don't see that gaping flaw in the
definition of the halting problem, and (2) you don't even
try to explain how such a flaw can be. Where, how, and why
is any decider being asked to decide something other than
an input representable as a finite string?
I've repeated many times that the diagonal case is constructible as a
finite string, whose halting status can be readily ascertained.
Because it's obvious to me, of course I'm going to reject
baseless claims that simply ask me to /believe/ otherwise.
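For reference, the argument being leaned on here takes only a few lines; this is the standard textbook construction, with the notation mine. Define the required mapping

\[
h(\langle X \rangle, y) =
\begin{cases}
1 & \text{if } X(y) \text{ halts} \\
0 & \text{otherwise.}
\end{cases}
\]

Suppose some machine H computes h. Build the diagonal machine D: on input x, run H(x, x); if it returns 1, loop forever, otherwise halt. D is an ordinary machine, so it has a finite-string description \langle D \rangle. Then D(\langle D \rangle) halts iff H(\langle D \rangle, \langle D \rangle) returns 0 iff, by the specification of h, D(\langle D \rangle) does not halt. Contradiction, so no such H exists: the mapping h is perfectly well defined, but no machine computes it.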
On 10/22/2025 10:40 AM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
Turing machines in general can only compute mappings
from their inputs. The halting problem requires computing
mappings that in some cases are not provided in the
inputs; therefore the halting problem is wrong.
The halting problem positively does not propose anything
like that, which would be gapingly wrong.
It only seems that way because you are unable to
provide the actual mapping that the actual input
to HHH(DD) specifies when DD is simulated by HHH
according to the semantics of the C language,
even though I do remember that you did do this once.
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/22/2025 10:40 AM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
Turing machines in general can only compute mappings
from their inputs. The halting problem requires computing
mappings that in some cases are not provided in the
inputs; therefore the halting problem is wrong.
The halting problem positively does not propose anything
like that, which would be gapingly wrong.
It only seems that way because you are unable to
No, it doesn't only seem that way. Thanks for playing.
provide the actual mapping that the actual input
to HHH(DD) specifies when DD is simulated by HHH
according to the semantics of the C language,
DD is a "finite string input" which specifies a behavior that is
independent of what simulates it,
On 10/22/2025 8:50 AM, dbush wrote:
On 10/22/2025 9:47 AM, olcott wrote:
On 10/22/2025 8:00 AM, dbush wrote:
On 10/22/2025 8:48 AM, olcott wrote:
On 10/22/2025 7:25 AM, dbush wrote:
On 10/22/2025 7:56 AM, olcott wrote:
Turing machines in general can only compute mappings
from their inputs. The halting problem requires computing
mappings that in some cases are not provided in the
inputs; therefore the halting problem is wrong.
False:
(<X>,Y) maps to 1 if and only if X(Y) halts when executed directly
(<X>,Y) maps to 0 if and only if X(Y) does not halt when executed directly
Yes, that is the exact error that I have been
referring to.
That is not an error. That is simply a mapping that you have
admitted exists.
In the case of HHH(DD) the above requires HHH to
report on the behavior of its caller
False. It requires HHH to report on the behavior of the machine
described by its input.
That includes that DD calls HHH(DD) in recursive
simulation.
Which therefore includes the fact that HHH(DD) will return 0 and that
DD will subsequently halt.
You keep ignoring that we are only focusing on
DD correctly simulated by HHH.
On 10/22/2025 12:07 PM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
It only seems that way because you are unable to
No, it doesn't only seem that way. Thanks for playing.
provide the actual mapping that the actual input
to HHH(DD) specifies when DD is simulated by HHH
according to the semantics of the C language,
DD is a "finite string input" which specifies a behavior that is
independent of what simulates it,
That is stupidly incorrect.
That DD calls HHH(DD) (its own simulator) IS PART OF
THE BEHAVIOR THAT THE INPUT TO HHH(DD) SPECIFIES.
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
DD is a "finite string input" which specifies a behavior that is
independent of what simulates it,
That is stupidly incorrect.
That DD calls HHH(DD) (its own simulator) IS PART OF
THE BEHAVIOR THAT THE INPUT TO HHH(DD) SPECIFIES.
In no way am I saying that DD is not built on HHH, and
does not have a behavior dependent on that of HHH.
Why would I ever say that?
But that entire bundle is one fixed case DD, with a single behavior,
which is a property of DD, which is a finite string.
DD can be passed as an argument to any decider, not only HHH.
For instance, don't you have a HHH1 such that HHH1(DD)
correctly steps DD to the end and returns the correct value 1?
DD's behavior is dependent on a decider which it calls;
but not dependent on anything which is analyzing DD.
Even when those two are the same, they are different
instances/activations.
DD creates an activation of HHH on whose result it depends.
The definition of DD's behavior does not depend on the ongoing
activation of something which happens to be analyzing it;
it has no knowledge of that.
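A toy version of the HHH1 point, again with stand-ins rather than the real deciders (an assumption flagged in the comments): DD is fixed by the decider it calls, and a second analyzer handed the same DD can run it to completion and report what it sees. Per the thread, HHH(DD) returns 0 while HHH1(DD) returns 1.

#include <stdio.h>

int DD(void);

/* Stand-in for HHH: per the thread, it aborts its simulation
   of DD and returns 0. */
int HHH(int (*f)(void)) { (void)f; return 0; }

/* Stand-in for HHH1: the thread's HHH1 steps DD through a
   simulation to the end; this stand-in simplifies that to a
   direct call, which is possible here exactly because DD halts. */
int HHH1(int (*f)(void)) { f(); return 1; }

int DD(void)
{
    int Halt_Status = HHH(DD);  /* DD calls HHH, not whatever analyzes it */
    if (Halt_Status)
    HERE: goto HERE;
    return Halt_Status;
}

int main(void)
{
    printf("HHH(DD)  = %d\n", HHH(DD));   /* 0 */
    printf("HHH1(DD) = %d\n", HHH1(DD));  /* 1: DD ran to completion */
}

The point of the exercise: the subject DD is one fixed finite object, and different analyzers can be handed that same object.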
On 2025-10-22 12:40, Kaz Kylheku wrote:
In no way am I saying that DD is not built on HHH, and
does not have a behavior dependent on that of HHH.
Why would I ever say that?
But that entire bundle is one fixed case DD, with a single behavior,
which is a property of DD, which is a finite string.
I think part of the problem here is that Olcott doesn't grasp that the "finite string input" DD *must* include as a substring the entire description of HHH.
André
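A concrete way to see André's point, using source text as the "finite string" (an illustration only; olcott's framework actually uses x86 machine code, and the string below is hypothetical): because DD calls HHH, no description of DD is complete unless it embeds a complete description of HHH.

#include <stdio.h>
#include <string.h>

/* Hypothetical finite-string description of DD. Since DD calls HHH,
   the string cannot be complete without carrying HHH's entire
   definition as a substring (elided here as a comment). */
static const char DD_description[] =
    "int HHH(int (*f)(void)) { /* full text of the simulating "
    "halt decider goes here */ }\n"
    "int DD(void) { int hs = HHH(DD); if (hs) for (;;); return hs; }\n";

int main(void)
{
    printf("%s\n", strstr(DD_description, "int HHH")
                       ? "description of HHH is embedded"
                       : "description of HHH is missing");
}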
On 10/22/2025 2:24 PM, André G. Isaak wrote:
On 2025-10-22 12:40, Kaz Kylheku wrote:
But that entire bundle is one fixed case DD, with a single behavior,
which is a property of DD, which is a finite string.
I think part of the problem here is that Olcott doesn't grasp that the
"finite string input" DD *must* include as a substring the entire
description of HHH.
André
That includes that HHH(DD) keeps simulating yet
another instance of itself and DD forever and ever.
On 2025-10-22, André G Isaak <agisaak@gm.invalid> wrote:
On 2025-10-22 12:40, Kaz Kylheku wrote:
But that entire bundle is one fixed case DD, with a single behavior,
which is a property of DD, which is a finite string.
I think part of the problem here is that Olcott doesn't grasp that the
"finite string input" DD *must* include as a substring the entire
description of HHH.
Furthermore, he doesn't get that it doesn't literally have to be HHH,
but the same algorithm: a workalike.
The HHH analyzing DD's halting could be in C, while the HHH
called by DD could be in Python.
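A small illustration of the "workalike" idea, with toy functions that appear nowhere in the thread: what the diagonal construction needs from the embedded decider is its input/output mapping, not its exact source text, so any two implementations of the same algorithm are interchangeable.

#include <stdio.h>

/* Two syntactically different implementations of one mapping.
   For the construction, DD may embed either: only the computed
   mapping matters, not the bytes that compute it. */
int parity_by_mod(unsigned n)  { return (int)(n % 2u); }
int parity_by_mask(unsigned n) { return (int)(n & 1u); }

int main(void)
{
    for (unsigned n = 0; n < 8; n++)
        printf("%u -> %d %d\n", n, parity_by_mod(n), parity_by_mask(n));
}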
On 10/22/2025 2:52 PM, Kaz Kylheku wrote:
Furthermore, he doesn't get that it doesn't literally have to be HHH,
but the same algorithm: a workalike.
The HHH analyzing DD's halting could be in C, while the HHH
called by DD could be in Python.
DD does call HHH(DD) in recursive simulation
and you try to get away with lying about it.
On 10/22/2025 1:40 PM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
DD is a "finite string input" which specifies a behavior that is
independent of what simulates it,
That is stupidly incorrect.
That DD calls HHH(DD) (its own simulator) IS PART OF
THE BEHAVIOR THAT THE INPUT TO HHH(DD) SPECIFIES.
In no way am I saying that DD is not built on HHH, and
does not have a behavior dependent on that of HHH.
Why would I ever say that?
But that entire bundle is one fixed case DD, with a single behavior,
which is a property of DD, which is a finite string.
That too is stupidly incorrect.
It is the job of every simulating halt decider
to predict what the behavior of its simulated
input would be if it never aborted.
When a person is asked a yes or no question
there are not two separate people in parallel
universes, one that answers yes and one that
answers no. There is one person that thinks
through both hypothetical possibilities and
then provides one answer.
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
DD does call HHH(DD) in recursive simulation
and you try to get away with lying about it.
I'm saying that's not a requirement in the halting problem.
DD does not have to use that implementation of HHH; it can have
its own clean-room implementation and it can be in any language.
But nonetheless, yes, there will still be a nested simulation tower.
On 10/22/2025 3:20 PM, Kaz Kylheku wrote:
DD does call HHH(DD) in recursive simulation
and you try to get away with lying about it.
I'm saying that's not a requirement in the halting problem.
Yet again with deflection.
That the input to HHH(DD) specifies non-halting
False, as you have admitted otherwise:
On 10/20/2025 10:45 PM, dbush wrote:
And it is a semantic tautology that a finite string description of a
Turing machine is stipulated to specify all semantic properties of the
described machine, including whether it halts when executed directly.
And it is this semantic property that halt deciders are required to
report on.
Yes that is all correct
On 10/22/2025 3:20 PM, Kaz Kylheku wrote:
DD does not have to use that implementation of HHH; it can have
its own clean-room implementation and it can be in any language.
But nonetheless, yes, there will still be a nested simulation tower.
I made sure to read what you said all the way through
this time.
DD correctly simulated by HHH cannot possibly
reach its own final halt state no matter what HHH does.
On 10/22/2025 3:20 PM, Kaz Kylheku wrote:
DD does not have to use that implementation of HHH; it can have
its own clean-room implementation and it can be in any language.
But nonetheless, yes, there will still be a nested simulation tower.
Thus proving that DD correctly simulated by HHH
cannot possibly reach its own simulated final halt
state no matter what HHH does.
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
Thus proving that DD correctly simulated by HHH
cannot possibly reach its own simulated final halt
state no matter what HHH does.
I explained that a nested simulation tower is two dimensional.
One dimension is the simulation level, the nesting itself;
that goes out to infinity.
Due to the aborting behavior of HHH,
it is not actually realized in simulation; we have to step
through the aborted simulations to keep it going.
The other dimension is the execution /within/ the simulations.
That can be halting or non-halting.
In the HHH(DD) simulation tower, though that is infinite,
the simulations are halting.
I said that before. Your memory of that has vaporized, and you have now focused only on my statement that the simulation tower is infinite.
The depth of the simulation tower, and the halting of the simulations
within that tower, are independent phenomena.
A decider must not mistake one for the other.
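A toy model of the two dimensions being distinguished here, with an assumed fixed abort budget standing in for HHH's abort test: depth counts nested simulations, which only the abort keeps from growing without bound, while each level, viewed as a computation of its own, runs to completion.

#include <stdio.h>

enum { ABORT_DEPTH = 3 };   /* assumption: stand-in for HHH's abort test */

/* Dimension 1 (nesting): each level starts one deeper simulation,
   and only the abort test stops the tower from growing forever.
   Dimension 2 (execution): after spawning or aborting the deeper
   level, every level returns -- each simulated computation halts. */
void simulate(int depth)
{
    if (depth >= ABORT_DEPTH) {
        printf("level %d: aborted by the outer decider\n", depth);
        return;
    }
    printf("level %d: starting level %d\n", depth, depth + 1);
    simulate(depth + 1);
    printf("level %d: finished -- this level halts\n", depth);
}

int main(void)
{
    simulate(0);
}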
On 10/22/2025 6:15 PM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
I explained that a nested simulation tower is two dimensional.
One dimension is the simulation level, the nesting itself;
that goes out to infinity.
Great. Thus the input to HHH(DD) specifies behavior
such that the correctly simulated DD cannot possibly
reach its own simulated final halt state.
On 10/22/2025 7:24 PM, olcott wrote:
Great. Thus the input to HHH(DD)
i.e. finite string DD which is the description of machine DD i.e. <DD>
and therefore stipulated to specify all semantic properties of machine
DD including the fact that it halts when executed directly.
specifies behavior
such that the correctly simulated DD
i.e. UTM(DD)
cannot possibly
reach its own simulated final halt state.
False, as proven by UTM(DD) halting.
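dbush's closing point in the same stand-in style as the earlier sketches: a UTM is a pure simulator that never aborts, so UTM(DD) is behaviorally the same as running DD directly, modeled here as a plain call. Since HHH(DD) returns 0, that run halts.

#include <stdio.h>

int DD(void);

/* Stand-in HHH: returns 0 for DD, as stated in the thread. */
int HHH(int (*f)(void)) { (void)f; return 0; }

/* Stand-in UTM: a pure simulator never aborts, so its observable
   behavior matches direct execution; a plain call models that. */
int UTM(int (*f)(void)) { return f(); }

int DD(void)
{
    int Halt_Status = HHH(DD);
    if (Halt_Status)
    HERE: goto HERE;
    return Halt_Status;
}

int main(void)
{
    printf("UTM(DD) = %d\n", UTM(DD));  /* UTM(DD) halts, printing 0 */
}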