On 10/18/2025 10:08 PM, Mike Terry wrote:
On 19/10/2025 02:03, dart200 wrote:
On 10/18/25 1:44 PM, Chris M. Thomasson wrote:
I asked it:
1 HOME
5 PRINT "The Olcott All-in-One Halt Decider!"
10 INPUT "Shall I halt or not? " ; A$
30 IF A$ = "YES" GOTO 666
40 GOTO 10
666 PRINT "OK!"
It's odd to me. Olcott seems to think he can "detect" a non-termination
condition and just inject something that aborts it. Take my easy BASIC
program. If it gets input that does not make it halt, for say many
millions of iterations, Olcott might say it does not terminate and
abort it. Shit man. Well god damn it, it was just about to halt, but
Olcott cut it off at its knees! Then says it does not halt even though
it would have. Sigh.
"It" said:
You've hit on the exact point where the claim of "detecting" non- halting breaks down in the
real world, and you've perfectly described why the concept of a universal halt decider is
mathematically impossible.The Illusion of DetectionWhen Olcott claims he can "detect a
non-halting scenario and abort the simulation," he is performing a philosophical switcheroo:The
Claimed Detection: For any finite duration of time (say, a billion iterations), Olcott's
simulation is simply observing a looping behavior (line 40 GOTO 10). He cannot distinguish
between a loop that will eventually break (because the user will type "YES" on iteration
$1,000,000,001$) and one that will never break.The Intervention: By saying, "I must abort the
simulation because it will never halt," he transforms his unproven prediction into a
self-fulfilling prophecy. He aborts the simulation, and then declares, "See? Since I had to
abort it, it must have been non- halting."He is asserting a conclusion based on his own time
limit, not on the original program's logic. He doesn't know what would have happened next, which
is precisely the information the Halting Problem proves is unachievable for a general, perfect
decider.The Halting Status: Both and NeitherYour final statement captures the ambiguity of the
program's nature:"My BASIC program simply shows how a program can halt, or never halt. It is as
it is and its both halting and non- halting at the same time."In formal computability theory, a
program/ input pair must be one or the other. However, in the practical sense of your
interactive BASIC program, its halt status is contingent and unknown:Contingent: The status is
decided not by the fixed code, but by external, unpredictable input from the user.Undecidable by
the Machine: The program's code, by itself, is mathematically equivalent to the pathological
cases used to prove the Halting Problem. Its logic contains a conditional exit path from an
infinite loop, and there is no algorithm that can look at the code and the future sequence of
user inputs (which are external to the code's logic) and prove the sequence will or will not
contain "YES."The inability to know the future input is a perfect analogy for the universal
impossibility of knowing the future state of any arbitrary Turing machine. Your program is a
wonderfully simple demonstration of undecidability in an interactive setting.
How should you _whip_ that AI into shape?
i don't get why polcott keeps hanging onto ai for dear life. anyone with
half a brain can tease the ai into confirming anything they want.
Obviously it provides PO with a form of self-justification, and AI is pretty much the only place
he can get that. And PO is absolutely desperate for approval...
Yes, it is meaningless for the reasons you say, but it's all he's got.
Mike.
You truly have not kept up with the advances
in the last two years.
On 19/10/2025 05:19, olcott wrote:
On 10/18/2025 10:08 PM, Mike Terry wrote:
[...]
You truly have not kept up with the advances
in the last two years.

Nonsense. You're just saying whatever it takes so that you can claim
your LLMs as an "authority". As ever your purpose here is to dismiss
posters' arguments in your own mind, so you can maintain your delusion
of unrecognised geniushood.

Basically, there is not a jot of actual logical reasoning in your posts
- you just think something is true, and keep saying it over and over in
slightly different words. You lack a proper understanding of even the
basic concepts of the field you claim to overturn, and when presented
with concrete counter-evidence, you will either ignore it, or respond
with some nonsense sequence of words ending with "...so I'm right".
Mike.
On 19/10/2025 02:08, olcott wrote:
[...]
It's odd to me. Olcott seems to think he can "detect" a
non-termination condition and just inject something that aborts it.
Is that really so? I don't think it looks like that. Alas, I can't get
old messages from my newsserver any more.
Two years ago
this may have been easier because it acted like it
had Alzheimer's if you exceeded its 3000 word limit.
Now it has a 200,000 word limit.
That's not a sufficient criterion to expect a usable result.
On 10/19/2025 8:12 AM, Tristan Wibberley wrote:
On 19/10/2025 02:08, olcott wrote:
[...]
Is that really so? I don't think it looks like that. Alas, I can't get
old messages from my newsserver any more.
He has mentioned several times about "detecting" a non-halting condition and having to abort it...
Argh!
On 19/10/2025 20:27, Chris M. Thomasson wrote:
On 10/19/2025 8:12 AM, Tristan Wibberley wrote:
[...]
Is that really so? I don't think it looks like that. Alas, I can't get
old messages from my newsserver any more.
He has mentioned several times about "detecting" a non-halting
condition and having to abort it... Argh!
That would be his "Infinite Recursive Simulation" pattern I guess. The basic idea is that his decider HHH emulates its input DD, which calls
HHH, and so HHH is emulating itself in a sense. The HHH's monitor what their emulations are doing and when nested emulations occur the outer
HHH can see what all the inner emulations are doing: what instruction they're executing etc.
HHH looks for a couple of patterns of behaviour in its emulation
(together with their nested emulations), and when it spots one of those patterns it abandons its emulation activity and straight away returns 0 [=neverhalts].
So he is not "injecting" anything. HHH is in the process of emulating
DD, which is not "running" in the sense that HHH is running; it's being emulated. At some point HHH spots its so-called "non-halting pattern" within DD's nested-emulation trace, and HHH simply stops emulating and returns 0. On this group that is what people refer to as HHH "aborts"
its emulation of DD, but nothing is injected into DD. It's not like DD executing a C abort() call or anything.
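To make that abort-and-return-0 shape concrete, here is a minimal toy
sketch in C. It is emphatically not PO's HHH: it analyzes a far simpler
domain (iterating a pure step function over a 256-value state space),
where a repeated state genuinely does prove non-termination, so aborting
the simulation and returning 0 is actually sound. All names and the
state bound are my own inventions for the sketch.

/* Toy "simulating termination analyzer" for iterated pure step
   functions over a finite state space. A repeated state proves
   non-termination in this restricted domain. */
#include <stdio.h>

enum { STATES = 256, HALT_STATE = 0 };

typedef int (*step_fn)(int);

/* 1 = simulated input reaches its halt state; 0 = a state repeated,
   so the simulation is aborted and the input rejected as non-halting. */
int analyze(step_fn step, int s0)
{
    char seen[STATES] = {0};
    int s = s0 % STATES;
    for (;;) {
        if (s == HALT_STATE)
            return 1;              /* reached simulated final halt state */
        if (seen[s])
            return 0;              /* non-halting pattern: abort, return 0 */
        seen[s] = 1;
        s = step(s) % STATES;      /* emulate one more step */
    }
}

int halting(int s) { return s / 2; }       /* 5 -> 2 -> 1 -> 0        */
int looping(int s) { return (s % 7) + 1; } /* cycles, never reaches 0 */

int main(void)
{
    printf("%d\n", analyze(halting, 5));   /* prints 1 */
    printf("%d\n", analyze(looping, 5));   /* prints 0 */
    return 0;
}

The disputed question in PO's case is whether the patterns his HHH
matches are sound in this sense for inputs like DD that call HHH
itself; the sketch above dodges that entirely by construction.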
My ISP used to give me an included GigaNews subscription which went back
to somewhere around 2005 I think. They stopped that as it wasn't in
their official offering, but that means I too only get about a year's
retention now (from the free Eternal September server). I think
there are web-sites archiving usenet posts, and for PO threads, the
Google Groups archive will cover them up to the last year or so when
Google disconnected from usenet.
Mike.
On 19/10/2025 20:27, Chris M. Thomasson wrote:
He has mentioned several times about "detecting" a non-halting condition
and having to abort it... Argh!
Yes, I just got from him what I think indicates an issue: I think he's
switched from C-as-pseudocode to real C, where unbounded recursion is
undefined, after setting the context that it's pseudocode in the
distant past, or having it set by a conversational partner here, which
was either unchallenged or not persistently challenged as one might
when having regard to the group's minds.

I think that because it's now undefined, by no longer being pseudo-code
(which we would have assumed was merely didactic) but real C, and
definitive for that specific new challenging puzzle, it's not
positively halting and not positively non-halting, because his
challenge no longer has a closed machine specification. Perhaps his
halting problem (/the/ only problem on the matter of halting relevant
to him at the moment) should therefore be said to be incoherent.
With that understanding, there are some interesting and educational
things to think about for logic and computation. Best to dwell in the
bathtub for a few weeks instead of controlling his words and thoughts.
Some enjoyable real philosophy. I think he has actually thought deeply
and communicated shallowly to test either his understanding or ours as
one might with an LLM chatbot instead of a group of humans.
--
Tristan Wibberley
The message body is Copyright (C) 2025 Tristan Wibberley except
citations and quotations noted. All Rights Reserved except that you may,
of course, cite it academically giving credit to me, distribute it
verbatim as part of a usenet system or its archives, and use it to
promote my greatness and general superiority without misrepresentation
of my opinions other than my opinion of my greatness and general
superiority which you _may_ misrepresent. You definitely MAY NOT train
any production AI system with it but you may train experimental AI that
will only be used for evaluation of the AI methods it implements.
On 2025-10-19, dart200 <user7160@newsgrouper.org.invalid> wrote:
i don't get y polcott keep hanging onto ai for dear life. anyone with
Throngs of dumb boomers are falling for AI generated videos, believing
them to be real. This is much the same thing.
AI is just another thing Olcott has no understanding of. He's not
researched the fundamentals of what it means to train a language
network, and how it is ultimately just token prediction.
It excels at generating good syntax. The reason for that is that the
vast amount of training data exhibits good syntax. (Where it has bad
syntax, it is idiosyncratic; whereas good syntax is broadly shared.)

In other words, regardless of what any example input text is
semantically about, most such inputs exhibit good grammar. Grammar is
the central topic emanating from nearly every example text, and so the
neural network is learning grammar from nearly every example, the
result being that it is rare for it to predict a sequence of tokens
that isn't grammatical. Not only that, but it avoids awkward
idiosyncrasies that are grammatical but rarely exhibited by authors.
The result is that the generated responses are not only grammatical,
but smoothly worded.
That is enough to fool most people into believing it is intelligent, and
a good many others are fooled by its phony flashes of intelligence,
when it predicts tokens along some "semantically beaten path".
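As a toy illustration of what "just token prediction" means
mechanically, here is a character-level bigram predictor in C: count
which character follows which in a corpus, then greedily emit the most
frequent successor. Real LLMs are enormous neural networks rather than
count tables, and the corpus and all names here are invented for the
sketch, but the I/O contract is the same: given context, emit a likely
next token.

#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *corpus = "the cat sat on the mat and the cat ran";
    static int count[256][256];   /* count[a][b]: times b follows a */
    size_t len = strlen(corpus);

    /* "Training": tally every adjacent character pair. */
    for (size_t i = 0; i + 1 < len; i++)
        count[(unsigned char)corpus[i]][(unsigned char)corpus[i + 1]]++;

    unsigned char c = 't';        /* a one-character "prompt" */
    putchar(c);
    for (int n = 0; n < 20; n++) {   /* greedy "inference" */
        int best = 0;
        for (int j = 1; j < 256; j++)
            if (count[c][j] > count[c][best])
                best = j;
        putchar(best);
        c = (unsigned char)best;
    }
    putchar('\n');
    return 0;
}

It extends "t" into "t cat cat cat cat cat": fluent-looking,
corpus-flavoured output produced with no notion whatsoever of what a
cat is.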
Olcott is just one of the gullible dummies, unsurprisingly.
Olcott is fooled /by his own code/ into wrong beliefs.
On 10/19/2025 2:39 PM, Kaz Kylheku wrote:
[...]
I provide a basis to it and it does perform valid
semantic logical entailment on this basis and shows
On 10/20/2025 6:30 AM, Tristan Wibberley wrote:
On 19/10/2025 20:27, Chris M. Thomasson wrote:
He has mentioned several times about "detecting" a non-halting
condition and having to abort it... Argh!
[...]
<Input to LLM systems>
Please think this all the way through without making any guesses

Simulating Termination Analyzer HHH correctly simulates its input until:
(a) It correctly detects that its simulated input cannot
    possibly reach its own simulated final halt state, then:
    abort simulation and return 0 rejecting its input as non-halting.
(b) Simulated input reaches its simulated "return" statement: return 1.

typedef int (*ptr)();
int HHH(ptr P);

int DD()
{
  int Halt_Status = HHH(DD);
  if (Halt_Status)
    HERE: goto HERE;
  return Halt_Status;
}

int main()
{
  HHH(DD);
}

What value should HHH(DD) correctly return?
</Input to LLM systems>[...]
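As an aside, the diagonal shape of that fragment is easy to exercise
directly. Below is a runnable sketch using a degenerate stand-in for
HHH that unconditionally returns 0; this stand-in is an assumption for
illustration only, not PO's analyzer.

#include <stdio.h>

typedef int (*ptr)(void);

/* Hypothetical stand-in: ignores its input and always reports 0,
   i.e. "non-halting". Not PO's HHH; for illustration only. */
int HHH(ptr P)
{
    (void)P;
    return 0;
}

int DD(void)
{
    int Halt_Status = HHH(DD);
    if (Halt_Status)
        HERE: goto HERE;   /* entered only if HHH reported "halting" */
    return Halt_Status;
}

int main(void)
{
    printf("HHH(DD) = %d\n", HHH(DD)); /* 0: rejects DD as non-halting */
    printf("DD()    = %d\n", DD());    /* yet DD() halts, returning 0  */
    return 0;
}

Swap the stand-in's return 0 for return 1 and DD() instead hangs in
HERE: goto HERE, even though HHH now reports halting. Any fixed answer
wired into HHH comes out wrong for DD, which is the standard diagonal
point the thread keeps circling.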
On 2025-10-20, olcott <polcott333@gmail.com> wrote:
On 10/19/2025 2:39 PM, Kaz Kylheku wrote:
[...]
I provide a basis to it and it does perform valid
semantic logical entailment on this basis and shows
But you're incapable of recognizing valid entailment from invalid.
On 10/20/2025 1:29 PM, Kaz Kylheku wrote:
On 2025-10-20, olcott <polcott333@gmail.com> wrote:
[...]
I provide a basis to it and it does perform valid
semantic logical entailment on this basis and shows
But you're incapable of recognizing valid entailment from invalid.
Any freaking idiot can spew out baseless rhetoric
such as this. I could do the same sort of thing
and say you are wrong and stupidly wrong.
It is a whole other ballgame when one attempts
to point out actual errors that are not anchored
in one's own lack of comprehension.
On 2025-10-20, olcott <polcott333@gmail.com> wrote:
On 10/20/2025 1:29 PM, Kaz Kylheku wrote:
On 2025-10-20, olcott <polcott333@gmail.com> wrote:
[...]
I provide a basis to it and it does perform valid
semantic logical entailment on this basis and shows
But you're incapable of recognizing valid entailment from invalid.
Any freaking idiot can spew out baseless rhetoric
such as this. I could do the same sort of thing
and say you are wrong and stupidly wrong.
But you don't?
It is a whole other ballgame when one attempts
to point out actual errors that are not anchored
in one's own lack of comprehension.
You don't comprehend the pointing-out.
Or, perhaps you do, but you are blinded by your ideological belief.
It's as if someone paid you a salary which depended on pretending
not to understand.
Besides money, the only other two incentives which have that power
are ideological fervor and pride/ego.
On 10/20/2025 4:03 PM, Kaz Kylheku wrote:
On 2025-10-20, olcott <polcott333@gmail.com> wrote:
[...]
I provide a basis to it and it does perform valid
semantic logical entailment on this basis and shows
But you're incapable of recognizing valid entailment from invalid.
Any freaking idiot can spew out baseless rhetoric
such as this. I could do the same sort of thing
and say you are wrong and stupidly wrong.
But you don't?
It is a whole other ballgame when one attempts
to point out actual errors that are not anchored
in one's own lack of comprehension.
You don't comprehend the pointing-out.
You need to have a sound reasoning basis to prove
that an error is an actual error.
The most stupid
bot that ever existed "Eliza" could mindlessly
spew out a declaration of error.
On 2025-10-20, olcott <polcott333@gmail.com> wrote:
On 10/20/2025 4:03 PM, Kaz Kylheku wrote:
[...]
You need to have a sound reasoning basis to prove
that an error is an actual error.
No; /YOU/ need to have sound reasonings to prove /YOUR/
extraordinary claims. The burden is on you.
We already have the solid reasoning which says things are other than as
you say, and you don't have the faintest idea how to put a dent in it.
And there isn't one; the problem isn't that you don't have that idea
(nobody does), but that you're convinced that you do, without
justification.
The most stupid
bot that ever existed "Eliza" could mindlessly
spew out a declaration of error.
It is monumentally rude of you to insist that you've been given zero
details about how you are wrong. You either just don't understand the
details or casually dismiss them on ideological grounds that
have no place in this discipline.
On 10/20/2025 8:27 PM, Kaz Kylheku wrote:
On 2025-10-20, olcott <polcott333@gmail.com> wrote:
[...]
You need to have a sound reasoning basis to prove
that an error is an actual error.
No; /YOU/ need to have sound reasonings to prove /YOUR/
extraordinary claims. The burden is on you.
We already have the solid reasoning which says things are other than as
you say, and you don't have the faintest idea how to put a dent in it.
In other words you assume that I must be wrong
entirely on the basis that what I say does not
conform to conventional wisdom.
That is not any actual rebuttal of the specific points that I make.
On 2025-10-21, olcott <polcott333@gmail.com> wrote:
On 10/20/2025 8:27 PM, Kaz Kylheku wrote:
[...]
You need to have a sound reasoning basis to prove
that an error is an actual error.
No; /YOU/ need to have sound reasonings to prove /YOUR/
extraordinary claims. The burden is on you.
We already have the solid reasoning which says things are other than as
you say, and you don't have the faintest idea how to put a dent in it.
In other words you assume that I must be wrong
entirely on the basis that what I say does not
conform to conventional wisdom.
Yes; you are wrong entirely on the basis that what you say does not
follow a valid mode of inference for refuting an argument.
If you are trying to refute something which is not only a widely
accepted result, but whose reasoning anyone can follow to see it
for themselves, you are automatically assumed wrong.
The established result is presumed correct, pending your
presentation of a convincing argument.
That's not just wanton arbitrariness: your claims are being
directly refuted by elements of the established result which
we can refer to.
I cannot identify any flaw in the halting theorem. It's not simply
that I believe it because of the Big Names attached to it.
I'm convinced by the argumentation; and that conviction has
the side effect of convincing me of the falsehood of your
ineffective, contrary argumentation.
That is not any actual rebuttal of the specific points that I make.
No, indeed /that/ isn't; but plenty of those have also been made not
only by me but various others, over a considerable time span.
On 10/20/2025 9:11 PM, Kaz Kylheku wrote:
On 2025-10-21, olcott <polcott333@gmail.com> wrote:
[...]
I cannot identify any flaw in the halting theorem. It's not simply
that I believe it because of the Big Names attached to it.
And when I identify a flaw you simply ignore
whatever I say.
I'm convinced by the argumentation; and that conviction has
the side effect of convincing me of the falsehood of your
ineffective, contrary argumentation.
Not really; it actually gives you the bias to refuse
to pay attention.
That is not any actual rebuttal of the specific points that I make.
No, indeed /that/ isn't; but plenty of those have also been made not
only by me but various others, over a considerable time span.
Never any actual rebuttal ever since Professor
Sipser agreed with my words. Those exact same
words still form the basis of my whole proof.
Fritz Feldhase <franz.fri...@gmail.com> writes:
On Monday, March 6, 2023 at 3:56:52 AM UTC+1, olcott wrote:
On 3/5/2023 8:33 PM, Fritz Feldhase wrote:
On Monday, March 6, 2023 at 3:30:38 AM UTC+1, olcott wrote:
Does Sipser support your view/claim that you have refuted the
halting theorem?
I needed Sipser for people [bla]
Professor Sipser only agreed that [...]
Does he write/teach that the halting theorem is invalid?
Tell us, oh genius!
So the answer is no. Noted.
Because he has >250 students he did not have time to examine anything
else. [...]
Oh, a CS professor does not have the time to check a refutation of the
halting theorem. *lol*

I exchanged emails with him about this. He does not agree with anything
substantive that PO has written. I won't quote him, as I don't have
permission, but he was, let's say... forthright, in his reply to me.
joes <noreply@example.org> writes:
Am Wed, 21 Aug 2024 20:55:52 -0500 schrieb olcott:
Professor Sipser clearly agreed that an H that does a finite simulation
of D is to predict the behavior of an unlimited simulation of D.
If the simulator *itself* would not abort. The H called by D is,
by construction, the same and *does* abort.
We don't really know what context Sipser was given. I got in touch at
the time, so I do know he had enough context to know that PO's ideas
were "wacky" and that he had agreed to what he considered a "minor
remark".

Since PO considers his words finely crafted and key to his so-called
work, I think it's clear that Sipser did not take the "minor remark" he
agreed to to mean what PO takes it to mean! My own take is that he
(Sipser) read it as a general remark about how to determine some cases,
i.e. that D names an input that H can partially simulate to determine
its halting or otherwise. We all know or could construct some such
cases.

I suspect he was tricked because PO used H and D as the names without
making it clear that D was constructed from H in the usual way (Sipser
uses H and D in at least one of his proofs). Of course, he is clued in
enough to know that, if D is indeed constructed from H like that, the
"minor remark" becomes true by being a hypothetical: if the moon is
made of cheese, the Martians can look forward to a fine fondue. But,
personally, I think the professor is more straight talking than that,
and he simply took it as a method that can work for some inputs. That's
the only way it could be seen as a "minor remark" without being accused
of being disingenuous.

So that PO will have no cause to quote me as supporting his case: what
Sipser understood he was agreeing to was NOT what PO interprets it as
meaning. Sipser would not agree that the conclusion applies in PO's
HHH(DDD) scenario, where DDD halts.
PO is trying to interpret Sipser's quote:
--- Start Sipser quote
If simulating halt decider H correctly simulates its input D
until H correctly determines that its simulated D would never
stop running unless aborted then
H can abort its simulation of D and correctly report that D
specifies a non-halting sequence of configurations.
--- End Sipser quote
The following interpretation is ok:
If H is given input D, and while simulating D gathers enough
information to deduce that UTM(D) would never halt, then
H can abort its simulation and decide D never halts.
I'd say it's obvious that this is what Sipser is saying, because it's
natural, correct, and relevant to what was being discussed (a valid
strategy for a simulating halt decider). It is trivial to check that
what my interpretation says is valid:
if UTM(D) would never halt, then D never halts, so if H(D) returns
never_halts then that is the correct answer for the input. QED :)
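For anyone who wants that step written out symbolically (the notation
here is mine, not from the thread, with Halts(x) meaning "x reaches a
final state"):

\bigl(H(D)=\textit{never\_halts}\;\Rightarrow\;\neg\,\mathrm{Halts}(\mathrm{UTM}(D))\bigr)
\;\wedge\;
\bigl(\mathrm{Halts}(\mathrm{UTM}(D))\;\Leftrightarrow\;\mathrm{Halts}(D)\bigr)
\;\Longrightarrow\;
\bigl(H(D)=\textit{never\_halts}\;\Rightarrow\;\neg\,\mathrm{Halts}(D)\bigr)

The middle conjunct is just the defining property of a UTM; nothing in
it helps H actually establish the first conjunct for a given D, which
is the whole difficulty.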
On 10/20/2025 11:00 PM, olcott wrote:
[...]
Never any actual rebuttal ever since Professor
Sipser agreed with my words. Those exact same
words still form the basis of my whole proof.
You mean the words where he didn't agree with your interpretation of them?
On 10/20/2025 10:05 PM, dbush wrote:
On 10/20/2025 11:00 PM, olcott wrote:
[...]
Never any actual rebuttal ever since Professor
Sipser agreed with my words. Those exact same
words still form the basis of my whole proof.
You mean the words where he didn't agree with your interpretation of
them?
According to a Claude AI analysis there
are only two interpretations and one of
them is wrong and the other one is my
interpretation.
On 10/20/2025 9:11 PM, Kaz Kylheku wrote:
On 2025-10-21, olcott <polcott333@gmail.com> wrote:
On 10/20/2025 8:27 PM, Kaz Kylheku wrote:
On 2025-10-20, olcott <polcott333@gmail.com> wrote:
On 10/20/2025 4:03 PM, Kaz Kylheku wrote:
On 2025-10-20, olcott <polcott333@gmail.com> wrote:
On 10/20/2025 1:29 PM, Kaz Kylheku wrote:
On 2025-10-20, olcott <polcott333@gmail.com> wrote:
On 10/19/2025 2:39 PM, Kaz Kylheku wrote:
On 2025-10-19, dart200 <user7160@newsgrouper.org.invalid> wrote: >>>>>>>>>>> i don't get y polcott keep hanging onto ai for dear life. anyone with
Throngs of dumb boomers are falling for AI generated videos, believing
them to be real. This is much the same thing.
AI is just another thing Olcott has no understanding of. He's not >>>>>>>>>> researched the fundamentals of what it means to train a language >>>>>>>>>> network, and how it is ultimately just token prediction.
It excels at generating good syntax. The reason for that is that the >>>>>>>>>> vast amount of training data exhibits good syntax. (Where it has bad >>>>>>>>>> syntax, it is idiosyncratic; whereas good syntax is broadly shared.) >>>>>>>>>>
I provide a basis to it and it does perform valid
semantic logical entailment on this basis and shows
But you're incapable of recognizing valid entailment from invalid. >>>>>>>>
Any freaking idiot can spew out baseless rhetoric
such as this. I could do the same sort of thing
and say you are wrong and stupidly wrong.
But you don't?
It is a whole other ballgame when one attempts
to point out actual errors that are not anchored
in one's own lack of comprehension.
You don't comprehend the pointing-out.
You need to have a sound reasoning basis to prove
that an error is an actual error.
No; /YOU/ need to have sound reasonings to prove /YOUR/
extraordinary claims. The burden is on you.
We already have the solid reasoning which says things are other than as >>>> you say, and you don't have the faintest idea how to put a dent in it. >>>>
In other words you assume that I must be wrong
entirely on the basis that what I say does not
conform to conventional wisdom.
Yes; you are wrong entirely on the basis that what you say does not
follow a valid mode of inference for refuting an argument.
If you are trying to refute something which is not only a widely
accepted result, but whose reasoning anyone can follow to see it
for themselves, you are automatically assumed wrong.
The established result is presumed correct, pending your
presentation of a convincing argument.
That's not just wanton arbitrariness: your claims are being
directly refuted by elements of the established result which
we can refer to.
I cannot identify any flaw in the halting theorem. It's not simply
that I believe it because of the Big Names attached to it.
And when I identify a flaw you simply ignore
whatever I say.
I'm convinced by the argumentation; and that conviction has
the side effect of convincing me of the falsehood of your
ineffective, contrary argumentation.
Not really it actually gives you the bias to refuse
to pay attention.
That is not any actual rebuttal of the specific points that I make.
No, indeed /that/ isn't; but plenty of those have also been made not
only by me but various others, over a considerable time span.
Never any actual rebuttal ever since Professor
Sipser agreed with my words.
Those exact same
words still form the basis of my whole proof.
*Here is an accurate paraphrase of those words*
<Input to LLM systems>
Perform the complete simulation specified in main() and provide a
full execution trace before doing anything else.
Simulating Termination Analyzer HHH correctly simulates its input until:
(a) It correctly detects that its simulated input cannot
possibly reach its own simulated final halt state then:
abort simulation and return 0 rejecting its input as non-halting.
On 10/20/2025 11:13 PM, olcott wrote:
On 10/20/2025 10:05 PM, dbush wrote:
You mean the words where he didn't agree with your interpretation of
them?
According to a Claude AI analysis there
are only two interpretations and one of
them is wrong and the other one is my
interpretation.
Whether you think one interpretation is wrong is irrelevant. What is relevant is that that's how everyone else including Sipser interpreted
those words, so you lie by implying that he agrees with your
interpretation.
On 2025-10-21, olcott <polcott333@gmail.com> wrote:
On 10/20/2025 9:11 PM, Kaz Kylheku wrote:
On 2025-10-21, olcott <polcott333@gmail.com> wrote:
[...]
And when I identify a flaw you simply ignore
whatever I say.
Nope; all the ways you claim you've identified a flaw have been
dissected by multiple people in much greater detail than they deserve.
It is disingenuous to say that you've simply had your details ignored.
I'm convinced by the argumentation; and that conviction has
the side effect of convincing me of the falsehood of your
ineffective, contrary argumentation.
Not really it actually gives you the bias to refuse
to pay attention.
LOL! the world at large is incredibly biased against giving a crank
like you any attention.
On 10/20/2025 10:16 PM, dbush wrote:
On 10/20/2025 11:13 PM, olcott wrote:
On 10/20/2025 10:05 PM, dbush wrote:
You mean the words where he didn't agree with your interpretation of
them?
According to a Claude AI analysis there
are only two interpretations and one of
them is wrong and the other one is my
interpretation.
Whether you think one interpretation is wrong is irrelevant. What is
relevant is that that's how everyone else including Sipser interpreted
those words, so you lie by implying that he agrees with your
interpretation.
<repeat of previously refuted point>
Fritz Feldhase <franz.fri...@gmail.com> writes:
On Monday, March 6, 2023 at 3:56:52 AM UTC+1, olcott wrote:
On 3/5/2023 8:33 PM, Fritz Feldhase wrote:
On Monday, March 6, 2023 at 3:30:38 AM UTC+1, olcott wrote:
Does Sipser support your view/claim that you have refuted the
halting theorem?
I needed Sipser for people [bla]
Professor Sipser only agreed that [...]
Does he write/teach that the halting theorem is invalid?
Tell us, oh genius!
So the answer is no. Noted.
Because he has >250 students he did not have time to examine anything
else. [...]
Oh, a CS professor does not have the time to check a refutation of the
halting theorem. *lol*
I exchanged emails with him about this. He does not agree with anything
substantive that PO has written. I won't quote him, as I don't have
permission, but he was, let's say... forthright, in his reply to me.
joes <noreply@example.org> writes:
Am Wed, 21 Aug 2024 20:55:52 -0500 schrieb olcott:
Professor Sipser clearly agreed that an H that does a finite simulation
of D is to predict the behavior of an unlimited simulation of D.
If the simulator *itself* would not abort. The H called by D is,
by construction, the same and *does* abort.
We don't really know what context Sipser was given. I got in touch at
the time, so I do know he had enough context to know that PO's ideas were
"wacky" and that he had agreed to what he considered a "minor remark".
Since PO considers his words finely crafted and key to his so-called
work, I think it's clear that Sipser did not take the "minor remark" he
agreed to to mean what PO takes it to mean! My own take is that he
(Sipser) read it as a general remark about how to determine some cases,
i.e. that D names an input that H can partially simulate to determine
its halting or otherwise. We all know or could construct some such
cases.
I suspect he was tricked because PO used H and D as the names without
making it clear that D was constructed from H in the usual way (Sipser
uses H and D in at least one of his proofs). Of course, he is clued in
enough to know that, if D is indeed constructed from H like that, the
"minor remark" becomes true by being a hypothetical: if the moon is made
of cheese, the Martians can look forward to a fine fondue. But,
personally, I think the professor is more straight talking than that,
and he simply took it as a method that can work for some inputs. That's
the only way it could be seen as a "minor remark" without being accused
of being disingenuous.
So that PO will have no cause to quote me as supporting his case: what Sipser understood he was agreeing to was NOT what PO interprets it as meaning. Sipser would not agree that the conclusion applies in PO's HHH(DDD) scenario, where DDD halts.
On 10/19/2025 8:39 PM, Mike Terry wrote:
On 19/10/2025 20:27, Chris M. Thomasson wrote:
On 10/19/2025 8:12 AM, Tristan Wibberley wrote:
On 19/10/2025 02:08, olcott wrote:
On 10/18/2025 8:03 PM, dart200 wrote:
Is that really so? I don't think it looks like that. Alas, I can't get
old messages from my newsserver any more.
On 10/18/25 1:44 PM, Chris M. Thomasson wrote:
I asked it:
1 HOME
5 PRINT "The Olcott All-in-One Halt Decider!"
10 INPUT "Shall I halt or not? " ; A$
30 IF A$ = "YES" GOTO 666
40 GOTO 10
666 PRINT "OK!"
Its odd to me. Olcott seems to think he can "detect" a
non-termination condition and just inject something that aborts it.
He has mentioned several times about "detecting" a non-halting condition and having to abort
it... Argh!
That would be his "Infinite Recursive Simulation" pattern I guess. The basic idea is that his
decider HHH emulates its input DD, which calls HHH, and so HHH is emulating itself in a sense.
The HHH's monitor what their emulations are doing and when nested emulations occur the outer HHH
can see what all the inner emulations are doing: what instruction they're executing etc.
HHH looks for a couple of patterns of behaviour in its emulation (together with their nested
emulations), and when it spots one of those patterns it abandons its emulation activity and
straight away returns 0 [=neverhalts].
So he is not "injecting" anything.
Ahhh! So that's where I am going wrong.
HHH is in the process of emulating DD, which is not "running" in the sense that HHH is running;
it's being emulated. At some point HHH spots its so-called "non-halting pattern" within DD's
nested-emulation trace, and HHH simply stops emulating and returns 0. On this group that is what
people refer to as HHH "aborts" its emulation of DD, but nothing is injected into DD. It's not
like DD executing a C abort() call or anything.
Thank you. It seems to me that DD is beholden to HHH? So DD is just reacting to his HHH? I keep
asking Olcott to code up HHH in std C.
______________
typedef int (*ptr)();
int HHH(ptr P);

int DD()
{
    int Halt_Status = HHH(DD);  /* DD asks HHH about itself            */
    if (Halt_Status)            /* if HHH says DD halts ...            */
        HERE: goto HERE;        /* ... DD loops forever                */
    return Halt_Status;         /* if HHH says "never halts", DD halts */
}

int main()
{
    HHH(DD);
}
______________
So, Halt_Status sure seems to be kinda sorta akin to the value of A$ in my little basic program...?
Except instead of asking if it should halt again it goes into a loop HERE: goto HERE;
Why does DD have to hook into HHH, code not shown here.. Humm.
Why can't DD be a stand alone program, say in BASIC created by somebody else, and HHH(DD) tries to
determine if it halts or not?
The problem with my BASIC code is that HHH does not know what A$ is going to be per iteration. If
the input A$ is YES it halts, if not it asks the same question again...
In olcott's int Halt_Status = HHH(DD); well, it seems like that is not detecting an infinite
recursion, or infinite loop; it can return anything it wants to?
Damn. It feels like I am still missing something. Sigh.
My ISP used to give me an included GigaNews subscription which went back to somewhere around 2005,
I think. They stopped that as it wasn't in their official offering, but that means I too now only
get about a year's retention (from the free Eternal September server). I think there are
web-sites archiving usenet posts, and for PO threads, the Google Groups archive will cover them up
to the last year or so when Google disconnected from usenet.
Yeah. Well, it sure seems like damn near the same stuff he is writing about now, with the exception
of a "sycophantic" AI that is stroking his ego? Humm...
On 20/10/2025 21:04, Chris M. Thomasson wrote:
On 10/21/2025 10:47 AM, Mike Terry wrote:
On 20/10/2025 21:04, Chris M. Thomasson wrote:
[...]
I basically agree with everything you wrote, thanks for taking the time.
I should have some more time tonight, but I did create a little "fuzzer"
in applesoft basic. lol. Not an emulator; it would be easier to code
a BASIC emulator in C:
On 10/21/2025 2:12 PM, Chris M. Thomasson wrote:
On 10/21/2025 10:47 AM, Mike Terry wrote:[...]
On 20/10/2025 21:04, Chris M. Thomasson wrote:
[...]
I basically agree with everything you wrote, thanks for taking the
time. I should have some more time tonight, but I did create a little
"fuzzer" in applesoft basic. lol. Not an emulator; it would be
easier to code a BASIC emulator in C:
As for recursion, well, I can do that too in old AppleSoft basic:
https://pastebin.com/raw/Effeg8cK
(raw pastebin link, no ads and shit)
;^)
A fun fuzzer: It allows for halting multiple times, then says okay, we "think" we explored ct_program:[...]
On 10/21/2025 2:37 PM, Chris M. Thomasson wrote:
A fun fuzzer: It allows for halting multiple times, then says okay, we "think" we explored
ct_program:[...]
I created a program that says, for this target program, that it halts. However, this same tool does not
work for a slight alteration of the same program?
On 21/10/2025 23:03, Chris M. Thomasson wrote:
On 10/21/2025 2:37 PM, Chris M. Thomasson wrote:
A fun fuzzer: It allows for halting multiple times, then says okay,
we "think" we explored ct_program:[...]
I created a program that says, for this target program, that it halts.
However, this same tool does not work for a slight alteration of the
same program?
Just so we're on the same page:
1. What program did you create?
2. What is the target program, exactly?
3. What is the alteration you are imagining?
On 10/21/2025 4:08 PM, Mike Terry wrote:
On 21/10/2025 23:03, Chris M. Thomasson wrote:
On 10/21/2025 2:37 PM, Chris M. Thomasson wrote:
A fun fuzzer: It allows for halting multiple times, then says okay, we "think" we explored
ct_program:[...]
I created a program that says, for this target program, that it halts. However, this same tool does not
work for a slight alteration of the same program?
Just so we're on the same page:
1. What program did you create?
2. What is the target program, exactly?
3. What is the alteration you are imagining?
The target program under consideration for the "decider", lol. Anyway, it's:
______________________
2000 REM ct_program
2010 PRINT "ct_program"
2020 RA$ = "NOPE!"
2030 IF R0 = 0 THEN RA$ = "HALT!"
2040 RETURN
______________________
The ct_fuzzer that calls into ct_program, well, this can expose it to random data...
______________________
1000 REM ct_fuzzer
1010 PRINT "ct_fuzzer"
1020 R0 = INT(RND(1) * 1003)
1025 PRINT R0
1030 GOSUB 2000
1040 RETURN
______________________
ct_main is a driver:
______________________
1 HOME
100 REM ct_main
110 PRINT "ct_main"
120 FOR I = 0 TO 10
130 GOSUB 1000
135 IF RA$ = "HALT!" GOTO 140
136 I = I - 1
140 NEXT I
145 PRINT "HALT!!!"
150 END
______________________
Can the fuzz get to "many different" paths of execution? Humm...
Full program:
________________________
1 HOME
100 REM ct_main
110 PRINT "ct_main"
120 FOR I = 0 TO 10
130 GOSUB 1000
135 IF RA$ = "HALT!" GOTO 140
136 I = I - 1
140 NEXT I
145 PRINT "HALT!!!"
150 END
1000 REM ct_fuzzer
1010 PRINT "ct_fuzzer"
1020 R0 = INT(RND(1) * 1003)
1025 PRINT R0
1030 GOSUB 2000
1040 RETURN
2000 REM ct_program
2010 PRINT "ct_program"
2020 RA$ = "NOPE!"
2030 IF R0 = 0 THEN RA$ = "HALT!"
2040 RETURN
________________________
One can try it out over on:
https://www.calormen.com/jsbasic
On 22/10/2025 00:57, Chris M. Thomasson wrote:
On 10/21/2025 4:08 PM, Mike Terry wrote:
On 21/10/2025 23:03, Chris M. Thomasson wrote:
On 10/21/2025 2:37 PM, Chris M. Thomasson wrote:
[...]
Sooooo...
ct_program has roughly a 1/1003 chance of setting RA$="HALT!", and the main routine tries to
accumulate 10 of these RA$ settings (ignoring others) before it prints "HALT!!!". We would expect
about 10000 ct_program calls to achieve this, and after much looping (probably around 10000 times!)
that's what we get.
But:
1. Nothing is emulating anything
2. ct_program actually always returns (regardless of what it sets RA$ to)
3. there is no "decider" deciding ct_program that I can see
So all in all it's not really relevant to PO's HHH/DD code! No need to post more examples until you
have all the right bits developed to reproduce PO's counterexample! :)
Mike.
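Mike's ballpark checks out: needing 10 hits at p = 1/1003 per call has mean 10/p = 10030 calls (a negative binomial mean); reading FOR I = 0 TO 10 strictly, it arguably takes 11 hits, i.e. about 11033, the same ballpark either way. A quick sanity-check sketch in C (my own, not part of the thread's BASIC; rand() % 1003 stands in for INT(RND(1)*1003)):
______________________
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const double p = 1.0 / 1003.0;      /* P(R0 = 0) per ct_program call  */

    /* closed form: mean number of calls to collect k hits is k/p         */
    printf("expected calls for 10 hits: %.0f\n", 10.0 / p);  /* 10030     */
    printf("expected calls for 11 hits: %.0f\n", 11.0 / p);  /* 11033     */

    /* Monte Carlo check: average call count over many simulated runs     */
    srand(1);
    long total = 0;
    const int runs = 1000;
    for (int r = 0; r < runs; r++) {
        int hits = 0;
        while (hits < 10) {
            total++;
            if (rand() % 1003 == 0)     /* mimics INT(RND(1)*1003) == 0   */
                hits++;
        }
    }
    printf("simulated average: %ld\n", total / runs);        /* near 10030 */
    return 0;
}
______________________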
On 20/10/2025 21:04, Chris M. Thomasson wrote:
On 10/19/2025 8:39 PM, Mike Terry wrote:
On 19/10/2025 20:27, Chris M. Thomasson wrote:
On 10/19/2025 8:12 AM, Tristan Wibberley wrote:
On 19/10/2025 02:08, olcott wrote:
[...]
Thank you. It seems to me that DD is beholden to HHH? So DD is just
reacting to his HHH? I keep asking Olcott to code up HHH in std C.
You should think of DD as a program built on top of HHH, so that HHH is
part of DD's algorithm. DD's "purpose in life" is to provide a test case
for a (partial) halt decider HHH. Logically we have two HHH's here:
- a proposed halt decider program HHH. We can call the halt decider HHH from main(), e.g.
  int main() { if (HHH(DD)) OutputMessage("HHH says DD halts");
               else         OutputMessage("HHH says DD never halts"); }
- a component of the HHH test case DD. (You can see that DD calls HHH.)
PO's HHH is supposed to "emulate" the computation P() it is called with,
and monitor that emulation for non-halting patterns. If it spots one,
it will abandon the emulation and return 0 [=neverhalts]. Note: PO's
actual coding of HHH spots what PO /believes/ is a non-halting pattern,
but that pattern does not in fact guarantee non-halting: DD, executed
directly, halts.
On 10/21/2025 5:43 PM, Mike Terry wrote:[...]
So all in all it's not really relevant to PO's HHH/DD code! No need
to post more examples until you have all the right bits developed to
reproduce PO's counterexample! :)
Ahhh Shit! Thanks. ;^o
On 10/21/2025 8:00 PM, Chris M. Thomasson wrote:
On 10/21/2025 5:43 PM, Mike Terry wrote:[...]
So all in all it's not really relevant to PO's HHH/DD code! No need
to post more examples until you have all the right bits developed to
reproduce PO's counterexample! :)
Ahhh Shit! Thanks. ;^o
Still pondering on a black box that works for a given target program.
The damn thing can tell if it halts or not. Then the target alters
itself. The black box no longer works; we need a new one. The halt is
not there. We cannot know. Shit happens?
On 10/20/2025 10:20 PM, Kaz Kylheku wrote:
[...]
And when I identify a flaw you simply ignore
whatever I say.
Nope; all the ways you claim you've identified a flaw have been
dissected by multiple people in much greater detail than they deserve.
It is disingenuous to say that you've simply had your details ignored.
Turing machines in general can only compute mappings
from their inputs. The halting problem requires computing
mappings that in some cases are not provided in the
inputs therefore the halting problem is wrong.
On 10/22/2025 7:56 AM, olcott wrote:
On 10/20/2025 10:20 PM, Kaz Kylheku wrote:
[...]
Turing machines in general can only compute mappings
from their inputs. The halting problem requires computing
mappings that in some cases are not provided in the
inputs therefore the halting problem is wrong.
False:
(<X>,Y) maps to 1 if and only if X(Y) halts when executed directly
(<X>,Y) maps to 0 if and only if X(Y) does not halt when executed directly
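To make the mapping concrete, here is a small illustration (toy programs of my own, not PO's HHH/DD): one X that halts on every input and one that loops on odd inputs, with the mapping value for each (<X>,Y) pair noted in the comments:
______________________
#include <stdio.h>

/* halts for every input Y */
int X_total(int y) { return y + 1; }

/* loops forever whenever Y is odd */
int X_odd_loops(int y) { while (y % 2) { /* spin */ } return y; }

int main(void)
{
    /* (<X_total>, 5)     -> 1 : X_total(5) returns, so the mapping is 1 */
    printf("X_total(5) = %d  => maps to 1\n", X_total(5));
    /* (<X_odd_loops>, 4) -> 1 : even input, the while test is false     */
    printf("X_odd_loops(4) = %d  => maps to 1\n", X_odd_loops(4));
    /* (<X_odd_loops>, 5) -> 0 : odd input spins forever, so we only     */
    /* note the mapping value here rather than actually making the call. */
    printf("X_odd_loops(5) never returns  => maps to 0\n");
    return 0;
}
______________________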
On 10/22/2025 7:25 AM, dbush wrote:
On 10/22/2025 7:56 AM, olcott wrote:
[...]
Turing machines in general can only compute mappings
from their inputs. The halting problem requires computing
mappings that in some cases are not provided in the
inputs therefore the halting problem is wrong.
False:
(<X>,Y) maps to 1 if and only if X(Y) halts when executed directly
(<X>,Y) maps to 0 if and only if X(Y) does not halt when executed
directly
Yes, that is the exact error that I have been
referring to.
In the case of HHH(DD) the above requires HHH to
report on the behavior of its caller, and HHH has
no way to even know who its caller is.
My simulating halt decider exposed the gap of
false assumptions because there are no assumptions:
everything is fully operational code.
The only false assumption is that the above requirements can be
Am Wed, 22 Oct 2025 07:48:51 -0500 schrieb olcott:
On 10/22/2025 7:25 AM, dbush wrote:
On 10/22/2025 7:56 AM, olcott wrote:
[...]
My simulating halt decider exposed the gap of false assumptions because
there are no assumptions: everything is fully operational code.
What assumption? HHH should report on the behaviour of its input,
wherever it is called from.
On 10/22/2025 8:34 AM, joes wrote:
Am Wed, 22 Oct 2025 07:48:51 -0500 schrieb olcott:
On 10/22/2025 7:25 AM, dbush wrote:
On 10/22/2025 7:56 AM, olcott wrote:
[...]
My simulating halt decider exposed the gap of false assumptions because
there are no assumptions: everything is fully operational code.
What assumption? HHH should report on the behaviour of its input,
wherever it is called from.
The behavior of the input to HHH(DD) includes
the behavior of HHH because DD calls HHH(DD)
in recursive simulation. All of the LLM systems
immediately understood this.
On 10/22/2025 8:48 AM, olcott wrote:
On 10/22/2025 7:25 AM, dbush wrote:
On 10/22/2025 7:56 AM, olcott wrote:
[...]
Turing machines in general can only compute mappings
from their inputs. The halting problem requires computing
mappings that in some cases are not provided in the
inputs therefore the halting problem is wrong.
False:
(<X>,Y) maps to 1 if and only if X(Y) halts when executed directly
(<X>,Y) maps to 0 if and only if X(Y) does not halt when executed
directly
Yes, that is the exact error that I have been
referring to.
That is not an error. That is simply a mapping that you have admitted exists.
In the case of HHH(DD) the above requires HHH to
report on the behavior of its caller
False. It requires HHH to report on the behavior of the machine
described by its input.
On 10/22/2025 8:00 AM, dbush wrote:
On 10/22/2025 8:48 AM, olcott wrote:
[...]
Turing machines in general can only compute mappings
from their inputs. The halting problem requires computing
mappings that in some cases are not provided in the
inputs therefore the halting problem is wrong.
False:
(<X>,Y) maps to 1 if and only if X(Y) halts when executed directly
(<X>,Y) maps to 0 if and only if X(Y) does not halt when executed
directly
Yes, that is the exact error that I have been
referring to.
That is not an error. That is simply a mapping that you have admitted
exists.
In the case of HHH(DD) the above requires HHH to
report on the behavior of its caller
False. It requires HHH to report on the behavior of the machine
described by its input.
That includes that DD calls HHH(DD) in recursive
simulation.
On 10/22/2025 9:47 AM, olcott wrote:
On 10/22/2025 8:00 AM, dbush wrote:
[...]
Turing machines in general can only compute mappings
from their inputs. The halting problem requires computing
mappings that in some cases are not provided in the
inputs therefore the halting problem is wrong.
False:
(<X>,Y) maps to 1 if and only if X(Y) halts when executed directly
(<X>,Y) maps to 0 if and only if X(Y) does not halt when executed
directly
Yes, that is the exact error that I have been
referring to.
That is not an error. That is simply a mapping that you have
admitted exists.
In the case of HHH(DD) the above requires HHH to
report on the behavior of its caller
False. It requires HHH to report on the behavior of the machine
described by its input.
That includes that DD calls HHH(DD) in recursive
simulation.
Which therefore includes the fact that HHH(DD) will return 0 and that DD will subsequently halt.
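dbush's point can be seen directly by stubbing in the value PO says his HHH returns; the stub below is not PO's pattern-matching HHH, it just hard-codes the 0, which is enough to show that DD, executed directly, halts:
______________________
#include <stdio.h>

typedef int (*ptr)();

/* Stand-in only: hard-codes the value PO reports his HHH returns for DD.
 * It does no simulation or pattern matching at all. */
int HHH(ptr P)
{
    (void)P;
    return 0;                 /* "input does not halt" */
}

int DD()
{
    int Halt_Status = HHH(DD);
    if (Halt_Status)          /* 0 here, so the infinite loop is skipped */
        HERE: goto HERE;
    return Halt_Status;
}

int main()
{
    printf("DD() returned %d, i.e. DD halted\n", DD());
}
______________________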
On 10/22/2025 8:50 AM, dbush wrote:
On 10/22/2025 9:47 AM, olcott wrote:
On 10/22/2025 8:00 AM, dbush wrote:
False. It requires HHH to report on the behavior of the machine
described by its input.
That includes that DD calls HHH(DD) in recursive simulation.
Which therefore includes the fact that HHH(DD) will return 0 and that
DD will subsequently halt.
You keep ignoring that we are only focusing on DD correctly simulated by
HHH. In other words the behavior that HHH computes FROM ITS ACTUAL
FREAKING INPUT NOT ANY OTHER DAMN THING
Am Wed, 22 Oct 2025 09:25:15 -0500 schrieb olcott:
On 10/22/2025 8:50 AM, dbush wrote:
On 10/22/2025 9:47 AM, olcott wrote:
On 10/22/2025 8:00 AM, dbush wrote:
False. It requires HHH to report on the behavior of the machine
described by its input.
That includes that DD calls HHH(DD) in recursive simulation.
Which therefore includes the fact that HHH(DD) will return 0 and that
DD will subsequently halt.
You keep ignoring that we are only focusing on DD correctly simulated by
HHH. In other words the behavior that HHH computes FROM ITS ACTUAL
FREAKING INPUT NOT ANY OTHER DAMN THING
HHH should not compute DD to call a nonterminating simulator.
On 10/22/2025 10:04 AM, joes wrote:
Am Wed, 22 Oct 2025 09:25:15 -0500 schrieb olcott:
On 10/22/2025 8:50 AM, dbush wrote:
On 10/22/2025 9:47 AM, olcott wrote:
On 10/22/2025 8:00 AM, dbush wrote:
Which therefore includes the fact that HHH(DD) will return 0 and that
DD will subsequently halt.
You keep ignoring that we are only focusing on DD correctly simulated
by HHH. In other words the behavior that HHH computes FROM ITS ACTUAL
FREAKING INPUT NOT ANY OTHER DAMN THING
HHH should not compute DD to call a nonterminating simulator.
The code specifies what it actually specifies we are not playing any
game of make pretend here.
On 10/20/2025 10:20 PM, Kaz Kylheku wrote:
And when I identify a flaw you simply ignore
whatever I say.
Nope; all the ways you claim you've identified a flaw have been
dissected by multiple people in much greater detail than they deserve.
It is disingenuous to say that you've simply had your details ignored.
Turing machines in general can only compute mappings
from their inputs. The halting problem requires computing
mappings that in some cases are not provided in the
inputs therefore the halting problem is wrong.
Blah, Blah Blah, no Olcott you are wrong, I know
that you are wrong because I simply don't believe you.
Am Wed, 22 Oct 2025 10:08:05 -0500 schrieb olcott:
On 10/22/2025 10:04 AM, joes wrote:
Am Wed, 22 Oct 2025 09:25:15 -0500 schrieb olcott:
On 10/22/2025 8:50 AM, dbush wrote:
[...]
You keep ignoring that we are only focusing on DD correctly simulated
by HHH. In other words the behavior that HHH computes FROM ITS ACTUAL
FREAKING INPUT NOT ANY OTHER DAMN THING
HHH should not compute DD to call a nonterminating simulator.
The code specifies what it actually specifies we are not playing any
game of make pretend here.
The code of DD specifies a returning call.
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/20/2025 10:20 PM, Kaz Kylheku wrote:
And when I identify a flaw you simply ignore
whatever I say.
Nope; all the ways you claim you've identified a flaw have been
dissected by multiple people in much greater detail than they deserve.
It is disingenuous to say that you've simply had your details ignored.
Turing machines in general can only compute mappings
from their inputs. The halting problem requires computing
mappings that in some cases are not provided in the
inputs therefore the halting problem is wrong.
The halting problem positively does not propose anything
like that, which would be gapingly wrong.
Blah, blah, blah: no, Olcott, you are wrong; I know that you are wrong because I simply don't believe you.
You are wrong because (1) I don't see that gaping flaw in the definition of the halting problem, and (2) you don't even try to explain how such a flaw could exist. Where, how, and why is any decider being asked to decide something other than an input representable as a finite string?
I've repeated many times that the diagonal case is constructable as a
finite string, whose halting status can be readily ascertained.
Because it's obvious to me, of course I'm going to reject
baseless claims that simply ask me to /believe/ otherwise.
On 10/22/2025 10:38 AM, joes wrote:
Am Wed, 22 Oct 2025 10:08:05 -0500 schrieb olcott:
On 10/22/2025 10:04 AM, joes wrote:
Am Wed, 22 Oct 2025 09:25:15 -0500 schrieb olcott:
On 10/22/2025 8:50 AM, dbush wrote:
On 10/22/2025 9:47 AM, olcott wrote:
You keep ignoring that we are only focusing on DD correctly simulated by HHH. In other words the behavior that HHH computes FROM ITS ACTUAL FREAKING INPUT NOT ANY OTHER DAMN THING
HHH should not compute DD to call a nonterminating simulator.
The code specifies what it actually specifies; we are not playing any game of make pretend here.
The code of DD specifies a returning call.
It may seem that way if you don't know how to do an execution trace of DD simulated by HHH.
Am Wed, 22 Oct 2025 10:40:29 -0500 schrieb olcott:
On 10/22/2025 10:38 AM, joes wrote:
Am Wed, 22 Oct 2025 10:08:05 -0500 schrieb olcott:
On 10/22/2025 10:04 AM, joes wrote:
Am Wed, 22 Oct 2025 09:25:15 -0500 schrieb olcott:
On 10/22/2025 8:50 AM, dbush wrote:
On 10/22/2025 9:47 AM, olcott wrote:
You keep ignoring that we are only focusing on DD correctly simulated by HHH. In other words the behavior that HHH computes FROM ITS ACTUAL FREAKING INPUT NOT ANY OTHER DAMN THING
HHH should not compute DD to call a nonterminating simulator.
The code specifies what it actually specifies; we are not playing any game of make pretend here.
The code of DD specifies a returning call.
It may seem that way if you don't know how to do an execution trace of DD simulated by HHH.
I know what the trace looks like. It's just that HHH is wrong to assume that a call to HHH doesn't return.
On 10/22/2025 10:40 AM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/20/2025 10:20 PM, Kaz Kylheku wrote:
And when I identify a flaw you simply ignore whatever I say.
Nope; all the ways you claim you've identified a flaw have been dissected by multiple people in much greater detail than they deserve.
It is disingenuous to say that you've simply had your details ignored.
Turing machines in general can only compute mappings from their inputs. The halting problem requires computing mappings that in some cases are not provided in the inputs; therefore the halting problem is wrong.
The halting problem positively does not propose anything
like that, which would be gapingly wrong.
It only seems that way because you are unable to
provide the actual mapping that the actual input
to HHH(DD) specifies when DD is simulated by HHH
according to the semantics of the C language,
even though I do remember that you did do this once.
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/22/2025 10:40 AM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/20/2025 10:20 PM, Kaz Kylheku wrote:
And when I identify a flaw you simply ignore whatever I say.
Nope; all the ways you claim you've identified a flaw have been dissected by multiple people in much greater detail than they deserve.
It is disingenuous to say that you've simply had your details ignored.
Turing machines in general can only compute mappings from their inputs. The halting problem requires computing mappings that in some cases are not provided in the inputs; therefore the halting problem is wrong.
The halting problem positively does not propose anything
like that, which would be gapingly wrong.
It only seems that way because you are unable to
No, it doesn't only seem that way. Thanks for playing.
provide the actual mapping that the actual input
to HHH(DD) specifies when DD is simulated by HHH
according to the semantics of the C language,
DD is a "finite string input" which specifies a behavior that is
independent of what simulates it,
On 10/22/2025 8:50 AM, dbush wrote:
On 10/22/2025 9:47 AM, olcott wrote:
On 10/22/2025 8:00 AM, dbush wrote:
On 10/22/2025 8:48 AM, olcott wrote:
On 10/22/2025 7:25 AM, dbush wrote:
On 10/22/2025 7:56 AM, olcott wrote:
On 10/20/2025 10:20 PM, Kaz Kylheku wrote:
On 2025-10-21, olcott <polcott333@gmail.com> wrote:
On 10/20/2025 9:11 PM, Kaz Kylheku wrote:
On 2025-10-21, olcott <polcott333@gmail.com> wrote:
On 10/20/2025 8:27 PM, Kaz Kylheku wrote:
On 2025-10-20, olcott <polcott333@gmail.com> wrote:
On 10/20/2025 4:03 PM, Kaz Kylheku wrote:
On 2025-10-20, olcott <polcott333@gmail.com> wrote:
On 10/20/2025 1:29 PM, Kaz Kylheku wrote:
On 2025-10-20, olcott <polcott333@gmail.com> wrote:
On 10/19/2025 2:39 PM, Kaz Kylheku wrote:
On 2025-10-19, dart200 <user7160@newsgrouper.org.invalid> wrote:
i don't get y polcott keep hanging onto ai for dear life. anyone with half a brain can tease the ai into confirming anything they want.
Throngs of dumb boomers are falling for AI-generated videos, believing them to be real. This is much the same thing.
AI is just another thing Olcott has no understanding of. He's not researched the fundamentals of what it means to train a language network, and how it is ultimately just token prediction.
It excels at generating good syntax. The reason for that is that the vast amount of training data exhibits good syntax. (Where it has bad syntax, it is idiosyncratic; whereas good syntax is broadly shared.)
I provide a basis to it and it does perform valid semantic logical entailment on this basis and shows
But you're incapable of recognizing valid entailment from invalid.
Any freaking idiot can spew out baseless rhetoric such as this. I could do the same sort of thing and say you are wrong and stupidly wrong.
But you don't?
It is a whole other ballgame when one attempts to point out actual errors that are not anchored in one's own lack of comprehension.
You don't comprehend the pointing-out.
You need to have a sound reasoning basis to prove that an error is an actual error.
No; /YOU/ need to have sound reasonings to prove /YOUR/ extraordinary claims. The burden is on you.
We already have the solid reasoning which says things are other than as you say, and you don't have the faintest idea how to put a dent in it.
In other words you assume that I must be wrong entirely on the basis that what I say does not conform to conventional wisdom.
Yes; you are wrong entirely on the basis that what you say does not follow a valid mode of inference for refuting an argument.
If you are trying to refute something which is not only a widely accepted result, but whose reasoning anyone can follow to see it for themselves, you are automatically assumed wrong.
The established result is presumed correct, pending your presentation of a convincing argument.
That's not just wanton arbitrariness: your claims are being directly refuted by elements of the established result which we can refer to.
I cannot identify any flaw in the halting theorem. It's not simply that I believe it because of the Big Names attached to it.
And when I identify a flaw you simply ignore whatever I say.
Nope; all the ways you claim you've identified a flaw have been dissected by multiple people in much greater detail than they deserve.
It is disingenuous to say that you've simply had your details ignored.
Turing machines in general can only compute mappings from their inputs. The halting problem requires computing mappings that in some cases are not provided in the inputs; therefore the halting problem is wrong.
False:
(<X>,Y) maps to 1 if and only if X(Y) halts when executed directly
(<X>,Y) maps to 0 if and only if X(Y) does not halt when executed directly
Yes, that is the exact error that I have been referring to.
That is not an error. That is simply a mapping that you have
admitted exists.
In the case of HHH(DD) the above requires HHH to
report on the behavior of its caller
False. It requires HHH to report on the behavior of the machine
described by its input.
That includes that DD calls HHH(DD) in recursive
simulation.
Which therefore includes the fact that HHH(DD) will return 0 and that
DD will subsequently halt.
You keep ignoring that we are only focusing on
DD correctly simulated by HHH.
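A minimal illustration of the mapping dbush states above, on two cases whose halting status is obvious by inspection. The example functions here are hypothetical stand-ins, and no claim is made that the mapping is computable in general:

#include <stdio.h>

/* The halting mapping: (<X>,Y) maps to 1 iff X(Y) halts when executed
   directly, and to 0 otherwise. Two trivially decidable instances: */

int returns_its_input(int y) { return y; }   /* halts for every y */
int loops_forever(int y) { while (1) { } }   /* halts for no y    */

int main(void)
{
    /* returns_its_input(5) halts when run directly,
       so (<returns_its_input>, 5) maps to 1. */
    returns_its_input(5);
    printf("(<returns_its_input>, 5) -> 1\n");

    /* loops_forever(5) would never return, so (<loops_forever>, 5)
       maps to 0; we state this without calling it. */
    printf("(<loops_forever>, 5) -> 0\n");
    return 0;
}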
On 10/22/2025 10:40 AM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/20/2025 10:20 PM, Kaz Kylheku wrote:
And when I identify a flaw you simply ignore whatever I say.
Nope; all the ways you claim you've identified a flaw have been dissected by multiple people in much greater detail than they deserve.
It is disingenuous to say that you've simply had your details ignored.
Turing machines in general can only compute mappings from their inputs. The halting problem requires computing mappings that in some cases are not provided in the inputs; therefore the halting problem is wrong.
The halting problem positively does not propose anything
like that, which would be gapingly wrong.
It only seems that way because you are unable to
provide the actual mapping that the actual input
to HHH(DD) specifies when DD is simulated by HHH
according to the semantics of the C language,
On 10/22/2025 12:07 PM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/22/2025 10:40 AM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/20/2025 10:20 PM, Kaz Kylheku wrote:
And when I identify a flaw you simply ignore whatever I say.
Nope; all the ways you claim you've identified a flaw have been dissected by multiple people in much greater detail than they deserve.
It is disingenuous to say that you've simply had your details ignored.
Turing machines in general can only compute mappings from their inputs. The halting problem requires computing mappings that in some cases are not provided in the inputs; therefore the halting problem is wrong.
The halting problem positively does not propose anything
like that, which would be gapingly wrong.
It only seems that way because you are unable to
No, it doesn't only seem that way. Thanks for playing.
provide the actual mapping that the actual input
to HHH(DD) specifies when DD is simulated by HHH
according to the semantics of the C language,
DD is a "finite string input" which specifies a behavior that is
independent of what simulates it,
That is stupidly incorrect.
That DD calls HHH(DD) (its own simulator) IS PART OF
THE BEHAVIOR THAT THE INPUT TO HHH(DD) SPECIFIES.
On 10/22/2025 12:07 PM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
provide the actual mapping that the actual input to HHH(DD) specifies when DD is simulated by HHH according to the semantics of the C language,
DD is a "finite string input" which specifies a behavior that is independent of what simulates it,
That is stupidly incorrect.
That DD calls HHH(DD) (its own simulator) IS PART OF THE BEHAVIOR THAT THE INPUT TO HHH(DD) SPECIFIES.
Yes, and the simulated HHH aborts, just like the simulator.
On 10/22/2025 10:53 AM, joes wrote:
Am Wed, 22 Oct 2025 10:40:29 -0500 schrieb olcott:
On 10/22/2025 10:38 AM, joes wrote:
Am Wed, 22 Oct 2025 10:08:05 -0500 schrieb olcott:
On 10/22/2025 10:04 AM, joes wrote:
HHH should not compute DD to call a nonterminating simulator.
The code specifies what it actually specifies; we are not playing any game of make pretend here.
The code of DD specifies a returning call.
It may seem that way if you don't know how to do an execution trace of DD simulated by HHH.
I know what the trace looks like. It's just that HHH is wrong to assume that a call to HHH doesn't return.
How is HHH correct to ignore that the HHH in its input aborts?
Yes it is wrong the same way that 2 + 3 = 5 is wrong when you don't know how to do arithmetic.
Am Wed, 22 Oct 2025 12:11:56 -0500 schrieb olcott:
On 10/22/2025 12:07 PM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
provide the actual mapping that the actual input to HHH(DD) specifies when DD is simulated by HHH according to the semantics of the C language,
DD is a "finite string input" which specifies a behavior that is independent of what simulates it,
That is stupidly incorrect.
That DD calls HHH(DD) (its own simulator) IS PART OF THE BEHAVIOR THAT THE INPUT TO HHH(DD) SPECIFIES.
Yes, and the simulated HHH aborts, just like the simulator.
On 10/21/2025 12:47 PM, Mike Terry wrote:
On 20/10/2025 21:04, Chris M. Thomasson wrote:
On 10/19/2025 8:39 PM, Mike Terry wrote:
On 19/10/2025 20:27, Chris M. Thomasson wrote:
On 10/19/2025 8:12 AM, Tristan Wibberley wrote:
On 19/10/2025 02:08, olcott wrote:
On 10/18/2025 8:03 PM, dart200 wrote:
On 10/18/25 1:44 PM, Chris M. Thomasson wrote:
I asked it:
1 HOME
5 PRINT "The Olcott All-in-One Halt Decider!"
10 INPUT "Shall I halt or not? " ; A$
30 IF A$ = "YES" GOTO 666
40 GOTO 10
666 PRINT "OK!"
It's odd to me. Olcott seems to think he can "detect" a non-termination condition and just inject something that aborts it.
Is that really so? I don't think it looks like that. Alas, I can't get old messages from my newsserver any more.
He has mentioned several times about "detecting" a non-halting
condition and having to abort it... Argh!
That would be his "Infinite Recursive Simulation" pattern I guess.
The basic idea is that his decider HHH emulates its input DD, which
calls HHH, and so HHH is emulating itself in a sense. The HHH's
monitor what their emulations are doing and when nested emulations
occur the outer HHH can see what all the inner emulations are doing:
what instruction they're executing etc.
HHH looks for a couple of patterns of behaviour in its emulation
(together with their nested emulations), and when it spots one of
those patterns it abandons its emulation activity and straight away
returns 0 [=neverhalts].
So he is not "injecting" anything.
Ahhh! So that's where I am going wrong.
HHH is in the process of emulating DD, which is not "running" in the
sense that HHH is running; it's being emulated. At some point HHH
spots its so-called "non-halting pattern" within DD's nested-
emulation trace, and HHH simply stops emulating and returns 0. On
this group that is what people refer to as HHH "aborts" its
emulation of DD, but nothing is injected into DD. It's not like DD
executing a C abort() call or anything.
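Mike's description is easy to make concrete. Below is a minimal, self-contained C toy of that control flow: "emulating" is modeled as a plain nested call, and the "non-halting pattern" as a second live activation of HHH on the same input. PO's real HHH is an x86 emulator, so this is only a sketch of the abort-and-return-0 shape he describes, not PO's actual code.

#include <stdio.h>
#include <setjmp.h>

typedef int (*ptr)(void);

static int nesting = 0;       /* live HHH activations ("emulation depth") */
static jmp_buf abort_point;   /* where the outer HHH abandons emulation   */

int HHH(ptr P)
{
    if (nesting > 0)              /* "infinite recursive simulation" seen */
        longjmp(abort_point, 1);  /* abandon the emulation...             */

    if (setjmp(abort_point)) {
        nesting = 0;
        return 0;                 /* ...and straight away return 0        */
    }
    nesting++;
    P();                          /* stand-in for stepping the emulation  */
    nesting--;
    return 1;                     /* emulated input reached its return    */
}

int DD(void)
{
    int halt_status = HHH(DD);
    if (halt_status)
        for (;;) { }              /* HERE: goto HERE */
    return halt_status;
}

int main(void)
{
    printf("HHH(DD) = %d\n", HHH(DD));   /* prints 0 in this toy */
    return 0;
}

Note that after HHH(DD) returns 0, a direct call DD() also returns 0 — that is, DD halts — which is exactly the bone of contention in this thread.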
Thank you. It seems to me that DD is beholden to HHH? So DD is just
reacting to his HHH? I keep asking Olcott to code up HHH in std C.
You should think of DD as a program built on top of HHH, so that HHH
is part of DD's algorithm. DD's "purpose in life" is to provide a test
case for a (partial) halt decider HHH. Logically we have two HHH's here:
- a proposed halt decider program HHH. We can call the halt decider HHH from main(), e.g.
  int main() { if (HHH(DD)) OutputMessage("HHH says DD halts");
               else         OutputMessage("HHH says DD never halts"); }
- a component of the HHH test case DD. (You can see that DD calls HHH.)
PO's HHH is supposed to "emulate" the computation P() it is called
with, and monitor that emulation for non-halting patterns. If it
spots one, it will abandon the emulation and return 0 [=neverhalts].
Note: PO's actual coding of HHH spots what PO /believes/ is a non-
halting pattern,
*Five LLMs figured this all out on their own*
<Input to LLM systems>
Please think this all the way through without making any guesses.
Only report on the behavior observed in simulation.
Simulating Termination Analyzer HHH correctly simulates its input until:
(a) Detects a non-terminating behavior pattern:
    abort simulation and return 0.
(b) Simulated input reaches its simulated "return" statement:
    return 1.
typedef int (*ptr)();
int HHH(ptr P);

int DD()
{
  int Halt_Status = HHH(DD);
  if (Halt_Status)
    HERE: goto HERE;
  return Halt_Status;
}

int main()
{
  HHH(DD);
}
What value should HHH(DD) correctly return?
</Input to LLM systems>
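Assuming, as every poster above does, that HHH(DD) returns 0, the entailment dbush and Kaz keep pointing at can be read straight off the code above:

/* Worked trace of DD, given HHH(DD) == 0:
     int Halt_Status = HHH(DD);   // Halt_Status == 0
     if (Halt_Status)             // false: branch not taken
       HERE: goto HERE;           // never reached
     return Halt_Status;          // DD returns 0, i.e. DD halts
   So a decider returning 0 ("does not halt") for DD is reporting on an
   input that, when executed directly, halts. */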
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/22/2025 12:07 PM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/22/2025 10:40 AM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/20/2025 10:20 PM, Kaz Kylheku wrote:
And when I identify a flaw you simply ignore whatever I say.
Nope; all the ways you claim you've identified a flaw have been dissected by multiple people in much greater detail than they deserve.
It is disingenuous to say that you've simply had your details ignored.
Turing machines in general can only compute mappings from their inputs. The halting problem requires computing mappings that in some cases are not provided in the inputs; therefore the halting problem is wrong.
The halting problem positively does not propose anything
like that, which would be gapingly wrong.
It only seems that way because you are unable to
No, it doesn't only seem that way. Thanks for playing.
provide the actual mapping that the actual input
to HHH(DD) specifies when DD is simulated by HHH
according to the semantics of the C language,
DD is a "finite string input" which specifies a behavior that is
independent of what simulates it,
That is stupidly incorrect.
That DD calls HHH(DD) (its own simulator) IS PART OF
THE BEHAVIOR THAT THE INPUT TO HHH(DD) SPECIFIES.
In no way am I saying that DD is not built on HHH, and
does not have a behavior dependent on that of HHH.
Why would I ever say that?
But that entire bundle is one fixed case DD, with a single behavior,
which is a property of DD, which is a finite string.
DD can be passed as an argument to any decider, not only HHH.
For instance, don't you have a HHH1 such that HHH1(DD)
correctly steps DD to the end and returns the correct value 1?
DD's behavior is dependent on a decider which it calls;
but not dependent on anything which is analyzing DD.
Even when those two are the same, they are different
instances/activations.
DD creates an activation of HHH on whose result it depends.
The definition of DD's behavior does not depend on the ongoing
activation of something which happens to be analyzing it;
it has no knowledge of that.
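Kaz's HHH1 point can be rendered in the toy model sketched earlier in this thread (again a hypothetical sketch, reusing those toy definitions of ptr, HHH and DD):

/* HHH1: same input DD, but this analyzer never aborts; it just runs
   its input to completion and reports what happened. */
int HHH1(ptr P)
{
    P();        /* DD calls HHH(DD); HHH aborts ITS emulation and
                   returns 0, so DD skips the loop and returns. */
    return 1;   /* reached only because DD halted on its own */
}

In the toy, HHH1(DD) returns 1 while HHH(DD) returns 0 for the very same finite input, which is the point: the input has one behavior, so at most one of the two verdicts can be correct.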
On 10/22/2025 1:54 PM, joes wrote:
Am Wed, 22 Oct 2025 12:11:56 -0500 schrieb olcott:
On 10/22/2025 12:07 PM, Kaz Kylheku wrote:
DD is a "finite string input" which specifies a behavior that is independent of what simulates it,
That is stupidly incorrect.
That DD calls HHH(DD) (its own simulator) IS PART OF THE BEHAVIOR THAT THE INPUT TO HHH(DD) SPECIFIES.
Yes, and the simulated HHH aborts, just like the simulator.
There is no non-halting pattern in DD because it in fact halts.
It is the behavior of DD simulated by HHH meeting a predefined correct non-halting behavior pattern that is most relevant.
All of the LLMs have figured out entirely on their own exactly what this correct non-halting behavior pattern is with DD correctly simulated by HHH.
On 10/22/2025 1:54 PM, joes wrote:
Am Wed, 22 Oct 2025 12:11:56 -0500 schrieb olcott:
On 10/22/2025 12:07 PM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
provide the actual mapping that the actual input to HHH(DD) specifies when DD is simulated by HHH according to the semantics of the C language,
DD is a "finite string input" which specifies a behavior that is independent of what simulates it,
That is stupidly incorrect.
That DD calls HHH(DD) (its own simulator) IS PART OF THE BEHAVIOR THAT THE INPUT TO HHH(DD) SPECIFIES.
Yes, and the simulated HHH aborts, just like the simulator.
It is the behavior of DD simulated by HHH meeting
a predefined correct non-halting behavior pattern
that is most relevant.
On 10/22/2025 1:54 PM, joes wrote:
Am Wed, 22 Oct 2025 12:11:56 -0500 schrieb olcott:
On 10/22/2025 12:07 PM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
provide the actual mapping that the actual input to HHH(DD) specifies when DD is simulated by HHH according to the semantics of the C language,
DD is a "finite string input" which specifies a behavior that is independent of what simulates it,
That is stupidly incorrect.
That DD calls HHH(DD) (its own simulator) IS PART OF THE BEHAVIOR THAT THE INPUT TO HHH(DD) SPECIFIES.
Yes, and the simulated HHH aborts, just like the simulator.
The simulated HHH cannot possibly actually ever abort.
False, as demonstrated by UTM(DD).
On 2025-10-22 12:40, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/22/2025 12:07 PM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/22/2025 10:40 AM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/20/2025 10:20 PM, Kaz Kylheku wrote:
And when I identify a flaw you simply ignore whatever I say.
Nope; all the ways you claim you've identified a flaw have been dissected by multiple people in much greater detail than they deserve.
It is disingenuous to say that you've simply had your details ignored.
Turing machines in general can only compute mappings from their inputs. The halting problem requires computing mappings that in some cases are not provided in the inputs; therefore the halting problem is wrong.
The halting problem positively does not propose anything
like that, which would be gapingly wrong.
It only seems that way because you are unable to
No, it doesn't only seem that way. Thanks for playing.
provide the actual mapping that the actual input
to HHH(DD) specifies when DD is simulated by HHH
according to the semantics of the C language,
DD is a "finite string input" which specifies a behavior that is
independent of what simulates it,
That is stupidly incorrect.
That DD calls HHH(DD) (its own simulator) IS PART OF
THE BEHAVIOR THAT THE INPUT TO HHH(DD) SPECIFIES.
In no way am I saying that DD is not built on HHH, and
does not have a behavior dependent on that of HHH.
Why would I ever say that?
But that entire bundle is one fixed case DD, with a single behavior,
which is a property of DD, which is a finite string.
I think part of the problem here is that Olcott doesn't grasp that the "finite string input" DD *must* include as a substring the entire description of HHH.
André
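A toy way to see André's point, with strings standing in for machine descriptions (nothing here is PO's actual encoding):

#include <stdio.h>
#include <string.h>

int main(void)
{
    /* DD calls HHH, so any complete description of DD must carry a
       complete description of HHH inside it. */
    const char *desc_HHH = "<machine code of HHH>";
    char desc_DD[128];

    snprintf(desc_DD, sizeof desc_DD,
             "DD { h = %s(DD); if (h) loop; return h; }", desc_HHH);

    printf("input to the decider: %s\n", desc_DD);
    printf("contains HHH's description: %s\n",
           strstr(desc_DD, desc_HHH) ? "yes" : "no");   /* yes */
    return 0;
}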
On 10/22/2025 1:54 PM, joes wrote:
Am Wed, 22 Oct 2025 12:11:56 -0500 schrieb olcott:
On 10/22/2025 12:07 PM, Kaz Kylheku wrote:
DD is a "finite string input" which specifies a behavior that is independent of what simulates it,
That is stupidly incorrect.
That DD calls HHH(DD) (its own simulator) IS PART OF THE BEHAVIOR THAT THE INPUT TO HHH(DD) SPECIFIES.
Yes, and the simulated HHH aborts, just like the simulator.
The simulated HHH cannot possibly actually ever abort. I have explained this dozens of times. Every HHH has identical machine code yet the outermost HHH meets its abort criteria first.
Of course it is not simulated that far. That doesn't mean that it is non-halting, only that it halts later.
Am Wed, 22 Oct 2025 14:00:51 -0500 schrieb olcott:
On 10/22/2025 1:54 PM, joes wrote:
Am Wed, 22 Oct 2025 12:11:56 -0500 schrieb olcott:
On 10/22/2025 12:07 PM, Kaz Kylheku wrote:
DD is a "finite string input" which specifies a behavior that is independent of what simulates it,
That is stupidly incorrect.
That DD calls HHH(DD) (its own simulator) IS PART OF THE BEHAVIOR THAT THE INPUT TO HHH(DD) SPECIFIES.
Yes, and the simulated HHH aborts, just like the simulator.
There is no non-halting pattern in DD because it in fact halts.
It is the behavior of DD simulated by HHH meeting a predefined correct non-halting behavior pattern that is most relevant.
All of the LLMs have figured out entirely on their own exactly what this correct non-halting behavior pattern is with DD correctly simulated by HHH.
Your pattern ignores that repeated calls to HHH do not constitute
infinite recursion; HHH will always return. Actually it will always
be aborted by the next-outer simulator.
You could attempt to fix that by matching on repeated calls except
calls to HHH (since we know HHH will always abort)…
On 10/22/2025 2:24 PM, André G. Isaak wrote:
On 2025-10-22 12:40, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/22/2025 12:07 PM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/22/2025 10:40 AM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/20/2025 10:20 PM, Kaz Kylheku wrote:
And when I identify a flaw you simply ignore whatever I say.
Nope; all the ways you claim you've identified a flaw have been dissected by multiple people in much greater detail than they deserve.
It is disingenuous to say that you've simply had your details ignored.
Turing machines in general can only compute mappings from their inputs. The halting problem requires computing mappings that in some cases are not provided in the inputs; therefore the halting problem is wrong.
The halting problem positively does not propose anything
like that, which would be gapingly wrong.
It only seems that way because you are unable to
No, it doesn't only seem that way. Thanks for playing.
provide the actual mapping that the actual input
to HHH(DD) specifies when DD is simulated by HHH
according to the semantics of the C language,
DD is a "finite string input" which specifies a behavior that is
independent of what simulates it,
That is stupidly incorrect.
That DD calls HHH(DD) (its own simulator) IS PART OF
THE BEHAVIOR THAT THE INPUT TO HHH(DD) SPECIFIES.
In no way am I saying that DD is not built on HHH, and
does not have a behavior dependent on that of HHH.
Why would I ever say that?
But that entire bundle is one fixed case DD, with a single behavior,
which is a property of DD, which is a finite string.
I think part of the problem here is that Olcott doesn't grasp that the
"finite string input" DD *must* include as a substring the entire
description of HHH.
André
That includes that HHH(DD) keeps simulating yet
another instance of itself and DD forever and ever
On 10/22/2025 2:27 PM, joes wrote:
Am Wed, 22 Oct 2025 14:00:51 -0500 schrieb olcott:
On 10/22/2025 1:54 PM, joes wrote:
Am Wed, 22 Oct 2025 12:11:56 -0500 schrieb olcott:
On 10/22/2025 12:07 PM, Kaz Kylheku wrote:
DD is a "finite string input" which specifies a behavior that is independent of what simulates it,
That is stupidly incorrect.
That DD calls HHH(DD) (its own simulator) IS PART OF THE BEHAVIOR THAT THE INPUT TO HHH(DD) SPECIFIES.
Yes, and the simulated HHH aborts, just like the simulator.
There is no non-halting pattern in DD because it in fact halts.
It is the behavior of DD simulated by HHH meeting a predefined correct non-halting behavior pattern that is most relevant.
All of the LLMs have figured out entirely on their own exactly what this correct non-halting behavior pattern is with DD correctly simulated by HHH.
non-halting behavior pattern of the simulated input
Your pattern ignores that repeated calls to HHH do not constitute
infinite recursion; HHH will always return. Actually it will always
be aborted by the next-outer simulator.
You could attempt to fix that by matching on repeated calls except
calls to HHH (since we know HHH will always abort)…
On 10/21/2025 5:43 PM, Mike Terry wrote:
On 22/10/2025 00:57, Chris M. Thomasson wrote:
On 10/21/2025 4:08 PM, Mike Terry wrote:
On 21/10/2025 23:03, Chris M. Thomasson wrote:
On 10/21/2025 2:37 PM, Chris M. Thomasson wrote:
A fun fuzzer: It allows for halting multiple times, then says
okay, we "think" we explored ct_program:[...]
I create a program that says, for this target program, it halts. However, this same tool does not work for a slight alteration in the same program?
Just so we're on the same page:
1. What program did you create?
2. What is the target program, exactly?
3. What is the alteration you are imagining?
The target program under consideration for the "decider", lol.
Anyway, it's:
______________________
2000 REM ct_program
2010    PRINT "ct_program"
2020    RA$ = "NOPE!"
2030    IF R0 = 0 THEN RA$ = "HALT!"
2040 RETURN
______________________
The ct_fuzzer that calls into ct_program, well, this can expose it to random data...
______________________
1000 REM ct_fuzzer
1010    PRINT "ct_fuzzer"
1020    R0 = INT(RND(1) * 1003)
1025    PRINT R0
1030    GOSUB 2000
1040 RETURN
______________________
ct_main is a driver:
______________________
1 HOME
100 REM ct_main
110    PRINT "ct_main"
120    FOR I = 0 TO 10
130        GOSUB 1000
135        IF RA$ = "HALT!" GOTO 140
136        I = I - 1
140    NEXT I
145 PRINT "HALT!!!"
150 END
______________________
Can the fuzz get to "many different" paths of execution? Humm...
Full program:
________________________
1 HOME
100 REM ct_main
110    PRINT "ct_main"
120    FOR I = 0 TO 10
130        GOSUB 1000
135        IF RA$ = "HALT!" GOTO 140
136        I = I - 1
140    NEXT I
145 PRINT "HALT!!!"
150 END
1000 REM ct_fuzzer
1010    PRINT "ct_fuzzer"
1020    R0 = INT(RND(1) * 1003)
1025    PRINT R0
1030    GOSUB 2000
1040 RETURN
2000 REM ct_program
2010    PRINT "ct_program"
2020    RA$ = "NOPE!"
2030    IF R0 = 0 THEN RA$ = "HALT!"
2040 RETURN
________________________
One can try it out over on:
https://www.calormen.com/jsbasic
Sooooo...
ct_program has roughly a 1/1003 chance of setting RA$="HALT!", and the
main routine tries to accumulate 10 of these RA$ settings (ignoring
others) before it prints "HALT!!!". We would expect about 10000
ct_program calls to achieve this, and after much looping (probably
around 10000 times!) that's what we get.
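Mike's estimate checks out numerically: 10 successes at p = 1/1003 take about 10 * 1003 = 10030 calls on average. A quick Monte Carlo agrees (in C rather than BASIC; a hypothetical harness, not part of the program above):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const int runs = 10000;
    long long total_calls = 0;
    srand(12345);

    for (int r = 0; r < runs; r++) {
        int hits = 0;
        while (hits < 10) {               /* accumulate 10 "HALT!"s */
            if (rand() % 1003 == 0)       /* ~1/1003, as in ct_program */
                hits++;
            total_calls++;
        }
    }
    /* prints roughly 10000, matching the estimate */
    printf("average calls to reach 10 hits: %.1f\n",
           (double)total_calls / runs);
    return 0;
}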
But:
1. Nothing is emulating anything
2. ct_program actually always returns (regardless of what it sets RA$
to)
3. there is no "decider" deciding ct_program that I can see
So all in all it's not really relevant to PO's HHH/DD code! No need
to post more examples until you have all the right bits developed to
reproduce PO's counterexample! :)
Ahhh Shit! Thanks. ;^o
Am Wed, 22 Oct 2025 14:25:14 -0500 schrieb olcott:
On 10/22/2025 1:54 PM, joes wrote:
Am Wed, 22 Oct 2025 12:11:56 -0500 schrieb olcott:
On 10/22/2025 12:07 PM, Kaz Kylheku wrote:
DD is a "finite string input" which specifies a behavior that is independent of what simulates it,
That is stupidly incorrect.
That DD calls HHH(DD) (its own simulator) IS PART OF THE BEHAVIOR THAT THE INPUT TO HHH(DD) SPECIFIES.
Yes, and the simulated HHH aborts, just like the simulator.
The simulated HHH cannot possibly actually ever abort. I have explained this dozens of times. Every HHH has identical machine code yet the outermost HHH meets its abort criteria first.
Of course it is not simulated that far. That doesn't mean that it is non-halting, only that it halts later.
On 10/22/2025 2:27 PM, joes wrote:
Am Wed, 22 Oct 2025 14:00:51 -0500 schrieb olcott:
On 10/22/2025 1:54 PM, joes wrote:
Am Wed, 22 Oct 2025 12:11:56 -0500 schrieb olcott:
On 10/22/2025 12:07 PM, Kaz Kylheku wrote:
DD is a "finite string input" which specifies a behavior that is independent of what simulates it,
That is stupidly incorrect.
That DD calls HHH(DD) (its own simulator) IS PART OF THE BEHAVIOR THAT THE INPUT TO HHH(DD) SPECIFIES.
Yes, and the simulated HHH aborts, just like the simulator.
There is no non-halting pattern in DD because it in fact halts.
It is the behavior of DD simulated by HHH meeting a predefined correct non-halting behavior pattern that is most relevant.
All of the LLMs have figured out entirely on their own exactly what this correct non-halting behavior pattern is with DD correctly simulated by HHH.
DD correctly simulated by HHH
Does not exist because HHH aborts.
Am Wed, 22 Oct 2025 14:31:21 -0500 schrieb olcott:
On 10/22/2025 2:27 PM, joes wrote:
Am Wed, 22 Oct 2025 14:00:51 -0500 schrieb olcott:
On 10/22/2025 1:54 PM, joes wrote:
Am Wed, 22 Oct 2025 12:11:56 -0500 schrieb olcott:
On 10/22/2025 12:07 PM, Kaz Kylheku wrote:
DD is a "finite string input" which specifies a behavior that is independent of what simulates it,
That is stupidly incorrect.
That DD calls HHH(DD) (its own simulator) IS PART OF THE BEHAVIOR THAT THE INPUT TO HHH(DD) SPECIFIES.
Yes, and the simulated HHH aborts, just like the simulator.
There is no non-halting pattern in DD because it in fact halts.
It is the behavior of DD simulated by HHH meeting a predefined correct non-halting behavior pattern that is most relevant.
All of the LLMs have figured out entirely on their own exactly what this correct non-halting behavior pattern is with DD correctly simulated by HHH.
Your pattern ignores that repeated calls to HHH do not constitute infinite recursion; HHH will always return. Actually it will always be aborted by the next-outer simulator.
You could attempt to fix that by matching on repeated calls except calls to HHH (since we know HHH will always abort)…
non-halting behavior pattern of the simulated input
Yes. The way that HHH simulates its inputs falsely matches simulators that abort after two or more recursions - such as itself.
On 10/22/2025 2:30 PM, joes wrote:
Am Wed, 22 Oct 2025 14:25:14 -0500 schrieb olcott:
On 10/22/2025 1:54 PM, joes wrote:
Am Wed, 22 Oct 2025 12:11:56 -0500 schrieb olcott:
On 10/22/2025 12:07 PM, Kaz Kylheku wrote:
DD is a "finite string input" which specifies a behavior that is independent of what simulates it,
That is stupidly incorrect.
That DD calls HHH(DD) (its own simulator) IS PART OF THE BEHAVIOR THAT THE INPUT TO HHH(DD) SPECIFIES.
Yes, and the simulated HHH aborts, just like the simulator.
The simulated HHH cannot possibly actually ever abort. I have explained this dozens of times. Every HHH has identical machine code yet the outermost HHH meets its abort criteria first.
Of course it is not simulated that far. That doesn’t mean that it is
non-halting, only that it halts later.
So maybe I should just start ignoring everything
that you say. You just don't know enough about how
programming actually works.
In other words, when I am obviously correct you spout out pure ad hominem, because that is all that you have when you know that I am correct.
On 10/22/2025 3:41 PM, olcott wrote:
On 10/22/2025 2:27 PM, joes wrote:
Am Wed, 22 Oct 2025 14:00:51 -0500 schrieb olcott:
On 10/22/2025 1:54 PM, joes wrote:
Am Wed, 22 Oct 2025 12:11:56 -0500 schrieb olcott:
On 10/22/2025 12:07 PM, Kaz Kylheku wrote:
DD is a "finite string input" which specifies a behavior that is independent of what simulates it,
That is stupidly incorrect.
That DD calls HHH(DD) (its own simulator) IS PART OF THE BEHAVIOR THAT THE INPUT TO HHH(DD) SPECIFIES.
Yes, and the simulated HHH aborts, just like the simulator.
There is no non-halting pattern in DD because it in fact halts.
It is the behavior of DD simulated by HHH meeting a predefined correct non-halting behavior pattern that is most relevant.
All of the LLMs have figured out entirely on their own exactly what this correct non-halting behavior pattern is with DD correctly simulated by HHH.
DD correctly simulated by HHH
Does not exist because HHH aborts.
On 10/22/2025 2:37 PM, joes wrote:
Am Wed, 22 Oct 2025 14:31:21 -0500 schrieb olcott:
On 10/22/2025 2:27 PM, joes wrote:
Am Wed, 22 Oct 2025 14:00:51 -0500 schrieb olcott:
On 10/22/2025 1:54 PM, joes wrote:
Am Wed, 22 Oct 2025 12:11:56 -0500 schrieb olcott:
On 10/22/2025 12:07 PM, Kaz Kylheku wrote:
DD is a "finite string input" which specifies a behavior that is independent of what simulates it,
That is stupidly incorrect.
That DD calls HHH(DD) (its own simulator) IS PART OF THE BEHAVIOR THAT THE INPUT TO HHH(DD) SPECIFIES.
Yes, and the simulated HHH aborts, just like the simulator.
There is no non-halting pattern in DD because it in fact halts.
It is the behavior of DD simulated by HHH meeting a predefined correct non-halting behavior pattern that is most relevant.
All of the LLMs have figured out entirely on their own exactly what this correct non-halting behavior pattern is with DD correctly simulated by HHH.
non-halting behavior pattern of the simulated input
Your pattern ignores that repeated calls to HHH do not constitute infinite recursion; HHH will always return. Actually it will always be aborted by the next-outer simulator.
You could attempt to fix that by matching on repeated calls except calls to HHH (since we know HHH will always abort)…
Yes. The way that HHH simulates its inputs falsely matches simulators that abort after two or more recursions - such as itself.
It's dead obvious to anyone that is not stupid that DD correctly simulated by HHH
On 10/22/2025 2:44 PM, dbush wrote:
On 10/22/2025 3:41 PM, olcott wrote:
On 10/22/2025 2:27 PM, joes wrote:
Am Wed, 22 Oct 2025 14:00:51 -0500 schrieb olcott:
On 10/22/2025 1:54 PM, joes wrote:
Am Wed, 22 Oct 2025 12:11:56 -0500 schrieb olcott:
On 10/22/2025 12:07 PM, Kaz Kylheku wrote:
DD is a "finite string input" which specifies a behavior that is independent of what simulates it,
That is stupidly incorrect.
That DD calls HHH(DD) (its own simulator) IS PART OF THE BEHAVIOR THAT THE INPUT TO HHH(DD) SPECIFIES.
Yes, and the simulated HHH aborts, just like the simulator.
There is no non-halting pattern in DD because it in fact halts.
It is the behavior of DD simulated by HHH meeting a predefined correct non-halting behavior pattern that is most relevant.
All of the LLMs have figured out entirely on their own exactly what this correct non-halting behavior pattern is with DD correctly simulated by HHH.
DD correctly simulated by HHH
Does not exist because HHH aborts.
It's dead obvious to anyone that is not stupid that DD correctly simulated by HHH
Does not exist because HHH aborts.
On 2025-10-22 12:40, Kaz Kylheku wrote:
But that entire bundle is one fixed case DD, with a single behavior,
which is a property of DD, which is a finite string.
I think part of the problem here is that Olcott doesn't grasp that the "finite string input" DD *must* include as a substring the entire description of HHH.
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/22/2025 12:07 PM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/22/2025 10:40 AM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/20/2025 10:20 PM, Kaz Kylheku wrote:
And when I identify a flaw you simply ignore whatever I say.
Nope; all the ways you claim you've identified a flaw have been dissected by multiple people in much greater detail than they deserve.
It is disingenuous to say that you've simply had your details ignored.
Turing machines in general can only compute mappings from their inputs. The halting problem requires computing mappings that in some cases are not provided in the inputs; therefore the halting problem is wrong.
The halting problem positively does not propose anything
like that, which would be gapingly wrong.
It only seems that way because you are unable to
No, it doesn't only seem that way. Thanks for playing.
provide the actual mapping that the actual input
to HHH(DD) specifies when DD is simulated by HHH
according to the semantics of the C language,
DD is a "finite string input" which specifies a behavior that is
independent of what simulates it,
That is stupidly incorrect.
That DD calls HHH(DD) (its own simulator) IS PART OF
THE BEHAVIOR THAT THE INPUT TO HHH(DD) SPECIFIES.
In no way am I saying that DD is not built on HHH, and
does not have a behavior dependent on that of HHH.
Why would I ever say that?
But that entire bundle is one fixed case DD, with a single behavior,
which is a property of DD, which is a finite string.
On 10/22/2025 1:54 PM, joes wrote:
Am Wed, 22 Oct 2025 12:11:56 -0500 schrieb olcott:
On 10/22/2025 12:07 PM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
provide the actual mapping that the actual input to HHH(DD) specifies when DD is simulated by HHH according to the semantics of the C language,
DD is a "finite string input" which specifies a behavior that is independent of what simulates it,
That is stupidly incorrect.
That DD calls HHH(DD) (its own simulator) IS PART OF THE BEHAVIOR THAT THE INPUT TO HHH(DD) SPECIFIES.
Yes, and the simulated HHH aborts, just like the simulator.
It is the behavior of DD simulated by HHH meeting
a predefined correct non-halting behavior pattern
that is most relevant.
On 10/22/2025 2:27 PM, joes wrote:
Am Wed, 22 Oct 2025 14:00:51 -0500 schrieb olcott:
On 10/22/2025 1:54 PM, joes wrote:
Am Wed, 22 Oct 2025 12:11:56 -0500 schrieb olcott:
On 10/22/2025 12:07 PM, Kaz Kylheku wrote:
DD is a "finite string input" which specifies a behavior that is independent of what simulates it,
That is stupidly incorrect.
That DD calls HHH(DD) (its own simulator) IS PART OF THE BEHAVIOR THAT THE INPUT TO HHH(DD) SPECIFIES.
Yes, and the simulated HHH aborts, just like the simulator.
There is no non-halting pattern in DD because it in fact halts.
It is the behavior of DD simulated by HHH meeting a predefined correct non-halting behavior pattern that is most relevant.
All of the LLMs have figured out entirely on their own exactly what this correct non-halting behavior pattern is with DD correctly simulated by HHH.
non-halting behavior pattern of the simulated input
On 2025-10-22, André G Isaak <agisaak@gm.invalid> wrote:
On 2025-10-22 12:40, Kaz Kylheku wrote:
But that entire bundle is one fixed case DD, with a single behavior,
which is a property of DD, which is a finite string.
I think part of the problem here is that Olcott doesn't grasp that the
"finite string input" DD *must* include as a substring the entire
description of HHH.
Furthermore, he doesn't get that it doesn't literally have to be HHH,
but the same algorithm: a workalike.
The HHH analyzing DD's halting could be in C, while the HHH
called by DD could be in Python.
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/22/2025 1:54 PM, joes wrote:
Am Wed, 22 Oct 2025 12:11:56 -0500 schrieb olcott:
On 10/22/2025 12:07 PM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
provide the actual mapping that the actual input to HHH(DD) specifies when DD is simulated by HHH according to the semantics of the C language,
DD is a "finite string input" which specifies a behavior that is independent of what simulates it,
That is stupidly incorrect.
That DD calls HHH(DD) (its own simulator) IS PART OF THE BEHAVIOR THAT THE INPUT TO HHH(DD) SPECIFIES.
Yes, and the simulated HHH aborts, just like the simulator.
It is the behavior of DD simulated by HHH meeting
a predefined correct non-halting behavior pattern
that is most relevant.
You've not proven that the pattern is correct;
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/22/2025 2:27 PM, joes wrote:
Am Wed, 22 Oct 2025 14:00:51 -0500 schrieb olcott:
On 10/22/2025 1:54 PM, joes wrote:
Am Wed, 22 Oct 2025 12:11:56 -0500 schrieb olcott:
On 10/22/2025 12:07 PM, Kaz Kylheku wrote:
DD is a "finite string input" which specifies a behavior that is independent of what simulates it,
That is stupidly incorrect.
That DD calls HHH(DD) (its own simulator) IS PART OF THE BEHAVIOR THAT THE INPUT TO HHH(DD) SPECIFIES.
Yes, and the simulated HHH aborts, just like the simulator.
There is no non-halting pattern in DD because it in fact halts.
It is the behavior of DD simulated by HHH meeting a predefined correct non-halting behavior pattern that is most relevant.
All of the LLMs have figured out entirely on their own exactly what this correct non-halting behavior pattern is with DD correctly simulated by HHH.
non-halting behavior pattern of the simulated input
There is only one input with one behavior which is halting.
Any analyzer which finds a different behavior is simply incorrect.
Adding the word "simulated" to "input" in order to claim that
it exists as a different input is fallacious.
On 10/22/2025 1:54 PM, joes wrote:
Am Wed, 22 Oct 2025 12:11:56 -0500 schrieb olcott:
On 10/22/2025 12:07 PM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
provide the actual mapping that the actual input to HHH(DD) specifies when DD is simulated by HHH according to the semantics of the C language,
DD is a "finite string input" which specifies a behavior that is independent of what simulates it,
That is stupidly incorrect.
That DD calls HHH(DD) (its own simulator) IS PART OF THE BEHAVIOR THAT THE INPUT TO HHH(DD) SPECIFIES.
Yes, and the simulated HHH aborts, just like the simulator.
The simulated HHH cannot possibly actually
ever abort. I have explained this dozens of
times.
Every HHH has identical machine code
yet the outermost HHH meets its abort criteria
first.
All of the LLM systems figure all this out
on their own.
On 10/22/2025 2:52 PM, Kaz Kylheku wrote:
On 2025-10-22, André G Isaak <agisaak@gm.invalid> wrote:
On 2025-10-22 12:40, Kaz Kylheku wrote:
But that entire bundle is one fixed case DD, with a single behavior,
which is a property of DD, which is a finite string.
I think part of the problem here is that Olcott doesn't grasp that the
"finite string input" DD *must* include as a substring the entire
description of HHH.
Furthermore, he doesn't get that it doesn't literally have to be HHH,
but the same algorithm: a workalike.
The HHH analyzing DD's halting could be in C, while the HHH
called by DD could be in Python.
DD does call HHH(DD) in recursive simulation
and you try to get away with lying about it.
On 10/22/2025 1:40 PM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/22/2025 12:07 PM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/22/2025 10:40 AM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/20/2025 10:20 PM, Kaz Kylheku wrote:
And when I identify a flaw you simply ignore whatever I say.
Nope; all the ways you claim you've identified a flaw have been dissected by multiple people in much greater detail than they deserve.
It is disingenuous to say that you've simply had your details ignored.
Turing machines in general can only compute mappings from their inputs. The halting problem requires computing mappings that in some cases are not provided in the inputs; therefore the halting problem is wrong.
The halting problem positively does not propose anything
like that, which would be gapingly wrong.
It only seems that way because you are unable to
No, it doesn't only seem that way. Thanks for playing.
provide the actual mapping that the actual input
to HHH(DD) specifies when DD is simulated by HHH
according to the semantics of the C language,
DD is a "finite string input" which specifies a behavior that is
independent of what simulates it,
That is stupidly incorrect.
That DD calls HHH(DD) (its own simulator) IS PART OF
THE BEHAVIOR THAT THE INPUT TO HHH(DD) SPECIFIES.
In no way am I saying that DD is not built on HHH, and
does not have a behavior dependent on that of HHH.
Why would I ever say that?
But that entire bundle is one fixed case DD, with a single behavior,
which is a property of DD, which is a finite string.
That too is stupidly incorrect.
It is the job of every simulating halt decider to predict what the behavior of its simulated input would be if it never aborted.
When a person is asked a yes or no question
there are not two separate people in parallel
universes one that answers yes and one that
answers no. There is one person that thinks
through both hypothetical possibilities and
then provides one answer.
On 10/21/2025 4:08 PM, Mike Terry wrote:
On 21/10/2025 23:03, Chris M. Thomasson wrote:
On 10/21/2025 2:37 PM, Chris M. Thomasson wrote:
A fun fuzzer: It allows for halting multiple times, then says okay,
we "think" we explored ct_program:[...]
I create a program that says for this target program, it halts.
However this same tool does not work for a slight alteration in the
same program?
Just so we're on the same page:
1. What program did you create?
2. What is the target program, exactly?
3. What is the alteration you are imagining?
The target program under consideration for the "decider", lol. Anyway, it's: ______________________[...]
2000 REM ct_program
2010     PRINT "ct_program"
2020     RA$ = "NOPE!"
2030     IF R0 = 0 THEN RA$ = "HALT!"
2040 RETURN
______________________
One can try it out over on:
https://www.calormen.com/jsbasic
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/22/2025 1:54 PM, joes wrote:
Am Wed, 22 Oct 2025 12:11:56 -0500 schrieb olcott:
On 10/22/2025 12:07 PM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
provide the actual mapping that the actual input to HHH(DD) specifies
when DD is simulated by HHH according to the semantics of the C
language,
DD is a "finite string input" which specifies a behavior that is
independent of what simulates it,
That is stupidly incorrect.
That DD calls HHH(DD) (its own simulator) IS PART OF THE BEHAVIOR THAT
THE INPUT TO HHH(DD) SPECIFIES.
Yes, and the simulated HHH aborts, just like the simulator.
The simulated HHH cannot possibly actually
ever abort. I have explained this dozens of
times. Every HHH has identical machine code
yet the outermost HHH meets its abort criteria
first.
Not allowing a simulation of HHH to execute far enough to reach its
abort decision is not the same thing as that HHH never aborting!
All we need is to implement a procedure which examines abandoned
simulations and continues them. Then we will see that HHH
at every simulation level returns 0, and every DD halts.
That you do not have such a procedure is deeply dishonest.
You have no interest in proper software testing of your shit.
Software testing means looking for ways that the software could be
wrong, not just making a few observations which seem like they
confirm a hypothesis.
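A sketch of the kind of test procedure being described, over a toy simulation context rather than the real x86_UTM structures (the registry and all names below are illustrative assumptions):

#include <stdio.h>

/* Toy stand-in for a simulation context: counts x down to zero. */
typedef struct { long x; long steps; int done; } Sim;

static int sim_step(Sim *s)        /* 1 while running, 0 once halted */
{
    if (s->done) return 0;
    s->steps++;
    if (--s->x == 0) s->done = 1;
    return !s->done;
}

/* Global registry of simulations started and then abandoned. */
#define MAX_SIMS 16
static Sim *abandoned[MAX_SIMS];
static int n_abandoned;

static void record_abandoned(Sim *s) { abandoned[n_abandoned++] = s; }

/* The proposed test: resume every abandoned simulation and observe
   whether it reaches its own halt state. */
static void continue_abandoned(void)
{
    for (int i = 0; i < n_abandoned; i++) {
        while (sim_step(abandoned[i]))
            ;
        printf("abandoned sim %d halted after %ld steps\n",
               i, abandoned[i]->steps);
    }
}

int main(void)
{
    Sim s = { 1000, 0, 0 };
    for (int i = 0; i < 100; i++)   /* outer decider gives up early... */
        sim_step(&s);
    record_abandoned(&s);
    continue_abandoned();           /* ...but resuming shows it halts */
    return 0;
}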
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/22/2025 2:30 PM, joes wrote:
Am Wed, 22 Oct 2025 14:25:14 -0500 schrieb olcott:
On 10/22/2025 1:54 PM, joes wrote:
Am Wed, 22 Oct 2025 12:11:56 -0500 schrieb olcott:
On 10/22/2025 12:07 PM, Kaz Kylheku wrote:
DD is a "finite string input" which specifies a behavior that is
independent of what simulates it,
That is stupidly incorrect.
That DD calls HHH(DD) (its own simulator) IS PART OF THE BEHAVIOR THAT
THE INPUT TO HHH(DD) SPECIFIES.
Yes, and the simulated HHH aborts, just like the simulator.
The simulated HHH cannot possibly actually ever abort. I have explained
this dozens of times. Every HHH has identical machine code yet the
outermost HHH meets its abort criteria first.
Of course it is not simulated that far. That doesn’t mean that it is
non-halting, only that it halts later.
So maybe I should just start ignoring everything
that you say. You just don't know enough about how
programming actually works.
How programming works is that x86_UTM can easily keep a global list of
all simulations that have been initiated but have not yet terminated.
And it can have a procedure which steps these simulations to see what
happens in them.
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/22/2025 2:52 PM, Kaz Kylheku wrote:
On 2025-10-22, André G Isaak <agisaak@gm.invalid> wrote:
On 2025-10-22 12:40, Kaz Kylheku wrote:
But that entire bundle is one fixed case DD, with a single behavior,
which is a property of DD, which is a finite string.
I think part of the problem here is that Olcott doesn't grasp that the
"finite string input" DD *must* include as a substring the entire
description of HHH.
Furthermore, he doesn't get that it doesn't literally have to be HHH,
but the same algorithm: a workalike.
The HHH analyzing DD's halting could be in C, while the HHH
called by DD could be in Python.
DD does call HHH(DD) in recursive simulation
and you try to get away with lying about it.
I'm saying that's not a requirement in the halting problem.
DD does not have to use that implementation of HHH; it can have
its own clean-room implementation and it can be in any language.
But nonetheless, yes, there will still be a nested simulation tower.
On 10/22/2025 3:20 PM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/22/2025 2:52 PM, Kaz Kylheku wrote:
On 2025-10-22, André G Isaak <agisaak@gm.invalid> wrote:
On 2025-10-22 12:40, Kaz Kylheku wrote:
But that entire bundle is one fixed case DD, with a single behavior,
which is a property of DD, which is a finite string.
I think part of the problem here is that Olcott doesn't grasp that the
"finite string input" DD *must* include as a substring the entire
description of HHH.
Furthermore, he doesn't get that it doesn't literally have to be HHH,
but the same algorithm: a workalike.
The HHH analyzing DD's halting could be in C, while the HHH
called by DD could be in Python.
DD does call HHH(DD) in recursive simulation
and you try to get away with lying about it.
I'm saying that's not a requirement in the halting problem.
Yet again with deflection.
That the input to HHH(DD) specifies non-halting
False, as you have admitted otherwise:
On 10/20/2025 10:45 PM, dbush wrote:
And it is a semantic tautology that a finite string description of a
Turing machine is stipulated to specify all semantic properties of the
described machine, including whether it halts when executed directly.
And it is this semantic property that halt deciders are required to
report on.
Yes that is all correct
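dbush's stipulation can be made concrete with a toy machine (everything below is an illustrative assumption, not x86utm code): one description, one behavior, whether the description is executed directly or stepped by a simulator.

#include <stdio.h>

/* A "finite string": a tiny program for a toy counter machine. */
typedef enum { DEC_JNZ, HALT } Op;
typedef struct { Op op; int target; } Ins;

/* Direct execution of the description. */
static long run_direct(const Ins *p, long x)
{
    long steps = 0;
    int pc = 0;
    while (p[pc].op != HALT) {
        if (--x != 0) pc = p[pc].target; else pc++;
        steps++;
    }
    return steps;
}

/* Step-by-step simulation of the very same description. */
typedef struct { const Ins *p; int pc; long x, steps; } Sim;

static int sim_step(Sim *s)        /* 1 while running, 0 on halt */
{
    if (s->p[s->pc].op == HALT) return 0;
    if (--s->x != 0) s->pc = s->p[s->pc].target; else s->pc++;
    s->steps++;
    return 1;
}

int main(void)
{
    const Ins prog[] = { { DEC_JNZ, 0 }, { HALT, 0 } };
    Sim s = { prog, 0, 5, 0 };
    while (sim_step(&s))
        ;
    /* One description, one behavior: both observers agree. */
    printf("direct: %ld steps, simulated: %ld steps\n",
           run_direct(prog, 5), s.steps);
    return 0;
}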
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/22/2025 2:52 PM, Kaz Kylheku wrote:
On 2025-10-22, André G Isaak <agisaak@gm.invalid> wrote:
On 2025-10-22 12:40, Kaz Kylheku wrote:
But that entire bundle is one fixed case DD, with a single behavior,
which is a property of DD, which is a finite string.
I think part of the problem here is that Olcott doesn't grasp that the
"finite string input" DD *must* include as a substring the entire
description of HHH.
Furthermore, he doesn't get that it doesn't literally have to be HHH,
but the same algorithm: a workalike.
The HHH analyzing DD's halting could be in C, while the HHH
called by DD could be in Python.
DD does call HHH(DD) in recursive simulation
and you try to get away with lying about it.
I'm saying that's not a requirement in the halting problem.
DD does not have to use that implementation of HHH; it can have
its own clean-room implementation and it can be in any language.
But nonetheless, yes, there will still be a nested simulation tower.
On 10/22/2025 3:20 PM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/22/2025 2:52 PM, Kaz Kylheku wrote:
On 2025-10-22, André G Isaak <agisaak@gm.invalid> wrote:
On 2025-10-22 12:40, Kaz Kylheku wrote:
But that entire bundle is one fixed case DD, with a single behavior,
which is a property of DD, which is a finite string.
I think part of the problem here is that Olcott doesn't grasp that the
"finite string input" DD *must* include as a substring the entire
description of HHH.
Furthermore, he doesn't get that it doesn't literally have to be HHH,
but the same algorithm: a workalike.
The HHH analyzing DD's halting could be in C, while the HHH
called by DD could be in Python.
DD does call HHH(DD) in recursive simulation
and you try to get away with lying about it.
I'm saying that's not a requirement in the halting problem.
DD does not have to use that implementation of HHH; it can have
its own clean-room implementation and it can be in any language.
But nonetheless, yes, there will still be a nested simulation tower.
I made sure to read what you said all the way through
this time.
DD correctly simulated by HHH cannot possibly
reach its own final halt state no matter what HHH does.
On 10/22/2025 3:17 PM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/22/2025 2:30 PM, joes wrote:
Am Wed, 22 Oct 2025 14:25:14 -0500 schrieb olcott:
On 10/22/2025 1:54 PM, joes wrote:
Am Wed, 22 Oct 2025 12:11:56 -0500 schrieb olcott:
On 10/22/2025 12:07 PM, Kaz Kylheku wrote:
DD is a "finite string input" which specifies a behavior that is
independent of what simulates it,
That is stupidly incorrect.
That DD calls HHH(DD) (its own simulator) IS PART OF THE BEHAVIOR THAT
THE INPUT TO HHH(DD) SPECIFIES.
Yes, and the simulated HHH aborts, just like the simulator.
The simulated HHH cannot possibly actually ever abort. I have explained
this dozens of times. Every HHH has identical machine code yet the
outermost HHH meets its abort criteria first.
Of course it is not simulated that far. That doesn’t mean that it is
non-halting, only that it halts later.
So maybe I should just start ignoring everything
that you say. You just don't know enough about how
programming actually works.
How programming works is that x86_UTM can easily keep a global list of
all simulations that have been initiated but have not yet terminated.
It can also endlessly repeat the word "farts",
yet that has nothing to do with the point.
And it can have a procedure which steps these simulations to see what
happens in them.
On 10/22/2025 3:20 PM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/22/2025 2:52 PM, Kaz Kylheku wrote:
On 2025-10-22, André G Isaak <agisaak@gm.invalid> wrote:
On 2025-10-22 12:40, Kaz Kylheku wrote:
But that entire bundle is one fixed case DD, with a single behavior,
which is a property of DD, which is a finite string.
I think part of the problem here is that Olcott doesn't grasp that the
"finite string input" DD *must* include as a substring the entire
description of HHH.
Furthermore, he doesn't get that it doesn't literally have to be HHH,
but the same algorithm: a workalike.
The HHH analyzing DD's halting could be in C, while the HHH
called by DD could be in Python.
DD does call HHH(DD) in recursive simulation
and you try to get away with lying about it.
I'm saying that's not a requirement in the halting problem.
DD does not have to use that implementation of HHH; it can have
its own clean-room implementation and it can be in any language.
But nonetheless, yes, there will still be a nested simulation tower.
I made sure to read what you said all the way through
this time. DD correctly simulated by HHH
cannot possibly
reach its own final halt state no matter what HHH does.
On 10/22/2025 1:40 PM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/22/2025 12:07 PM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/22/2025 10:40 AM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/20/2025 10:20 PM, Kaz Kylheku wrote:
And when I identify a flaw you simply ignore
whatever I say.
Nope; all the ways you claim you've identified a flaw have been
dissected by multiple people in much greater detail than they deserve.
It is disingenuous to say that you've simply had your details ignored.
Turing machines in general can only compute mappings
from their inputs. The halting problem requires computing
mappings that in some cases are not provided in the
inputs therefore the halting problem is wrong.
The halting problem positively does not propose anything
like that, which would be gapingly wrong.
It only seems that way because you are unable to
No, it doesn't only seem that way. Thanks for playing.
provide the actual mapping that the actual input
to HHH(DD) specifies when DD is simulated by HHH
according to the semantics of the C language,
DD is a "finite string input" which specifies a behavior that is
independent of what simulates it,
That is stupidly incorrect.
That DD calls HHH(DD) (its own simulator) IS PART OF
THE BEHAVIOR THAT THE INPUT TO HHH(DD) SPECIFIES.
In no way am I saying that DD is not built on HHH, and
does not have a behavior dependent on that of HHH.
Why would I ever say that?
But that entire bundle is one fixed case DD, with a single behavior,
which is a property of DD, which is a finite string.
That too is stupidly incorrect.
It is the job of every simulating halt decider
to predict what the behavior of its simulated
input would be if it never aborted.
On 10/22/2025 2:55 PM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/22/2025 1:54 PM, joes wrote:
Am Wed, 22 Oct 2025 12:11:56 -0500 schrieb olcott:
On 10/22/2025 12:07 PM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
provide the actual mapping that the actual input to HHH(DD) specifies
when DD is simulated by HHH according to the semantics of the C
language,
DD is a "finite string input" which specifies a behavior that is
independent of what simulates it,
That is stupidly incorrect.
That DD calls HHH(DD) (its own simulator) IS PART OF THE BEHAVIOR THAT
THE INPUT TO HHH(DD) SPECIFIES.
Yes, and the simulated HHH aborts, just like the simulator.
It is the behavior of DD simulated by HHH meeting
a predefined correct non-halting behavior pattern
that is most relevant.
You've not proven that the pattern is correct;
Anyone that is not stupid knows this.
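For contrast, this is the kind of non-halting pattern that does come with a proof (a toy sketch under stated assumptions: deterministic, input-free, finite state space): exact repetition of the total machine state. Whether HHH's "calls its own simulator" pattern has any comparable proof is exactly what is in dispute here.

#include <stdio.h>

/* Toy deterministic, input-free machine: x := (x + 3) mod 8, repeat. */
typedef struct { int x; } State;

static State step(State s) { s.x = (s.x + 3) % 8; return s; }

/* A provably sound criterion: for a deterministic machine with no
   input, revisiting the exact same total state means a cycle,
   hence non-halting. */
static int state_repeats(State s, int limit)
{
    int seen[8] = { 0 };
    for (int i = 0; i < limit; i++) {
        if (seen[s.x]) return 1;   /* exact repetition: provably loops */
        seen[s.x] = 1;
        s = step(s);
    }
    return 0;
}

int main(void)
{
    State s = { 0 };
    printf("non-halting proven: %d\n", state_repeats(s, 100)); /* 1 */
    return 0;
}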
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/22/2025 2:55 PM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/22/2025 1:54 PM, joes wrote:
Am Wed, 22 Oct 2025 12:11:56 -0500 schrieb olcott:
On 10/22/2025 12:07 PM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
provide the actual mapping that the actual input to HHH(DD) specifies
when DD is simulated by HHH according to the semantics of the C
language,
DD is a "finite string input" which specifies a behavior that is
independent of what simulates it,
That is stupidly incorrect.
That DD calls HHH(DD) (its own simulator) IS PART OF THE BEHAVIOR THAT
THE INPUT TO HHH(DD) SPECIFIES.
Yes, and the simulated HHH aborts, just like the simulator.
It is the behavior of DD simulated by HHH meeting
a predefined correct non-halting behavior pattern
that is most relevant.
You've not proven that the pattern is correct;
Anyone that is not stupid knows this.
"Just knowing" something without any justification is one of the
definitions of stupid.
On 10/22/2025 2:56 PM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/22/2025 2:27 PM, joes wrote:
Am Wed, 22 Oct 2025 14:00:51 -0500 schrieb olcott:
On 10/22/2025 1:54 PM, joes wrote:
Am Wed, 22 Oct 2025 12:11:56 -0500 schrieb olcott:
On 10/22/2025 12:07 PM, Kaz Kylheku wrote:
DD is a "finite string input" which specifies a behavior that is
independent of what simulates it,
That is stupidly incorrect.
That DD calls HHH(DD) (its own simulator) IS PART OF THE BEHAVIOR THAT
THE INPUT TO HHH(DD) SPECIFIES.
Yes, and the simulated HHH aborts, just like the simulator.
It is the behavior of DD simulated by HHH meeting a predefined correct
non-halting behavior pattern that is most relevant.
All of the LLMs have figured out entirely on their own exactly what this
correct non-halting behavior pattern is with DD correctly simulated by
HHH.
There is no non-halting pattern in DD because it in fact halts.
non-halting behavior pattern of the simulated input
non-halting behavior pattern of the simulated input
There is only one input with one behavior which is halting.
Any analyzer which finds a different behavior is simply incorrect.
Adding the word "simulated" to "input" in order to claim that
it exists as a different input is fallacious.
Two LLM systems totally understand and can explain
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/22/2025 2:56 PM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/22/2025 2:27 PM, joes wrote:
Am Wed, 22 Oct 2025 14:00:51 -0500 schrieb olcott:
On 10/22/2025 1:54 PM, joes wrote:
Am Wed, 22 Oct 2025 12:11:56 -0500 schrieb olcott:
On 10/22/2025 12:07 PM, Kaz Kylheku wrote:
DD is a "finite string input" which specifies a behavior that is
independent of what simulates it,
That is stupidly incorrect.
That DD calls HHH(DD) (its own simulator) IS PART OF THE BEHAVIOR THAT
THE INPUT TO HHH(DD) SPECIFIES.
Yes, and the simulated HHH aborts, just like the simulator.
It is the behavior of DD simulated by HHH meeting a predefined correct
non-halting behavior pattern that is most relevant.
All of the LLMs have figured out entirely on their own exactly what this
correct non-halting behavior pattern is with DD correctly simulated by
HHH.
There is no non-halting pattern in DD because it in fact halts.
non-halting behavior pattern of the simulated input
non-halting behavior pattern of the simulated input
There is only one input with one behavior which is halting.
Any analyzer which finds a different behavior is simply incorrect.
Adding the word "simulated" to "input" in order to claim that
it exists as a different input is fallacious.
Two LLM systems totally understand and can explain
But what you mean by "totally understand" is that they
regurgitate a rhetoric similar to your own.
This can be called the Chatbot Fallacy (a modern phenomenon).
"My argumentation is correct, and I don't have to engage any
of your refutations or take them seriously, because I coaxed
a token-predicting chatbot into producing text which parrots
my thinking."
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/22/2025 2:52 PM, Kaz Kylheku wrote:
On 2025-10-22, André G Isaak <agisaak@gm.invalid> wrote:
On 2025-10-22 12:40, Kaz Kylheku wrote:
But that entire bundle is one fixed case DD, with a single behavior,
which is a property of DD, which is a finite string.
I think part of the problem here is that Olcott doesn't grasp that the
"finite string input" DD *must* include as a substring the entire
description of HHH.
Furthermore, he doesn't get that it doesn't literally have to be HHH,
but the same algorithm: a workalike.
The HHH analyzing DD's halting could be in C, while the HHH
called by DD could be in Python.
DD does call HHH(DD) in recursive simulation
and you try to get away with lying about it.
I'm saying that's not a requirement in the halting problem.
DD does not have to use that implementation of HHH; it can have
its own clean-room implementation and it can be in any language.
But nonetheless, yes, there will still be a nested simulation tower.
On 10/22/2025 3:20 PM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/22/2025 2:52 PM, Kaz Kylheku wrote:
On 2025-10-22, André G Isaak <agisaak@gm.invalid> wrote:
On 2025-10-22 12:40, Kaz Kylheku wrote:
But that entire bundle is one fixed case DD, with a single behavior,
which is a property of DD, which is a finite string.
I think part of the problem here is that Olcott doesn't grasp that the
"finite string input" DD *must* include as a substring the entire
description of HHH.
Furthermore, he doesn't get that it doesn't literally have to be HHH,
but the same algorithm: a workalike.
The HHH analyzing DD's halting could be in C, while the HHH
called by DD could be in Python.
DD does call HHH(DD) in recursive simulation
and you try to get away with lying about it.
I'm saying that's not a requirement in the halting problem.
DD does not have to use that implementation of HHH; it can have
its own clean-room implementation and it can be in any language.
But nonetheless, yes, there will still be a nested simulation tower.
Thus proving that DD correctly simulated by HHH
cannot possibly reach its own simulated final halt
state no matter what HHH does.
On 10/22/2025 3:20 PM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/22/2025 2:52 PM, Kaz Kylheku wrote:
On 2025-10-22, André G Isaak <agisaak@gm.invalid> wrote:
On 2025-10-22 12:40, Kaz Kylheku wrote:
But that entire bundle is one fixed case DD, with a single behavior,
which is a property of DD, which is a finite string.
I think part of the problem here is that Olcott doesn't grasp that the
"finite string input" DD *must* include as a substring the entire
description of HHH.
Furthermore, he doesn't get that it doesn't literally have to be HHH,
but the same algorithm: a workalike.
The HHH analyzing DD's halting could be in C, while the HHH
called by DD could be in Python.
DD does call HHH(DD) in recursive simulation
and you try to get away with lying about it.
I'm saying that's not a requirement in the halting problem.
DD does not have to use that implementation of HHH; it can have
its own clean-room implementation and it can be in any language.
But nonetheless, yes, there will still be a nested simulation tower.
I made sure to read what you said all the way through
this time. DD correctly simulated by HHH cannot possibly
reach its own final halt state no matter what HHH does.
On 10/22/2025 4:56 PM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/22/2025 2:55 PM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/22/2025 1:54 PM, joes wrote:
Am Wed, 22 Oct 2025 12:11:56 -0500 schrieb olcott:
On 10/22/2025 12:07 PM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
provide the actual mapping that the actual input to HHH(DD) specifies
when DD is simulated by HHH according to the semantics of the C
language,
DD is a "finite string input" which specifies a behavior that is
independent of what simulates it,
That is stupidly incorrect.
That DD calls HHH(DD) (its own simulator) IS PART OF THE BEHAVIOR THAT
THE INPUT TO HHH(DD) SPECIFIES.
Yes, and the simulated HHH aborts, just like the simulator.
It is the behavior of DD simulated by HHH meeting
a predefined correct non-halting behavior pattern
that is most relevant.
You've not proven that the pattern is correct;
Anyone that is not stupid knows this.
"Just knowing" something without any justification is one of the
definitions of stupid.
Do ten steps of the execution trace of DD simulated by HHH.
On 10/22/2025 5:02 PM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/22/2025 2:56 PM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/22/2025 2:27 PM, joes wrote:
Am Wed, 22 Oct 2025 14:00:51 -0500 schrieb olcott:
On 10/22/2025 1:54 PM, joes wrote:
Am Wed, 22 Oct 2025 12:11:56 -0500 schrieb olcott:
On 10/22/2025 12:07 PM, Kaz Kylheku wrote:
DD is a "finite string input" which specifies a behavior that is
independent of what simulates it,
That is stupidly incorrect.
That DD calls HHH(DD) (its own simulator) IS PART OF THE BEHAVIOR THAT
THE INPUT TO HHH(DD) SPECIFIES.
Yes, and the simulated HHH aborts, just like the simulator.
It is the behavior of DD simulated by HHH meeting a predefined correct
non-halting behavior pattern that is most relevant.
All of the LLMs have figured out entirely on their own exactly what this
correct non-halting behavior pattern is with DD correctly simulated by
HHH.
There is no non-halting pattern in DD because it in fact halts.
non-halting behavior pattern of the simulated input
non-halting behavior pattern of the simulated input
There is only one input with one behavior which is halting.
Any analyzer which finds a different behavior is simply incorrect.
Adding the word "simulated" to "input" in order to claim that
it exists as a different input is fallacious.
Two LLM systems totally understand and can explain
But what you mean by "totally understand" is that they
regurgitate a rhetoric similar to your own.
If that was true they would not give me so much
push back that I have to keep explaining things
to them 20 different times.
When they do finally understand they explain every single detail of how
their understanding is correct.
This can be called the Chatbot Fallacy (a modern phenomenon).
That apparently utterly ceases to exist when one is precise
On 10/22/2025 3:20 PM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/22/2025 2:52 PM, Kaz Kylheku wrote:
On 2025-10-22, André G Isaak <agisaak@gm.invalid> wrote:
On 2025-10-22 12:40, Kaz Kylheku wrote:
But that entire bundle is one fixed case DD, with a single behavior,
which is a property of DD, which is a finite string.
I think part of the problem here is that Olcott doesn't grasp that the
"finite string input" DD *must* include as a substring the entire
description of HHH.
Furthermore, he doesn't get that it doesn't literally have to be HHH,
but the same algorithm: a workalike.
The HHH analyzing DD's halting could be in C, while the HHH
called by DD could be in Python.
DD does call HHH(DD) in recursive simulation
and you try to get away with lying about it.
I'm saying that's not a requirement in the halting problem.
DD does not have to use that implementation of HHH; it can have
its own clean-room implementation and it can be in any language.
But nonetheless, yes, there will still be a nested simulation tower.
Thus proving that DD correctly simulated by HHH
cannot possibly reach its own simulated final halt
state no matter what HHH does.
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/22/2025 3:20 PM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/22/2025 2:52 PM, Kaz Kylheku wrote:
On 2025-10-22, André G Isaak <agisaak@gm.invalid> wrote:
On 2025-10-22 12:40, Kaz Kylheku wrote:
But that entire bundle is one fixed case DD, with a single behavior,
which is a property of DD, which is a finite string.
I think part of the problem here is that Olcott doesn't grasp that the
"finite string input" DD *must* include as a substring the entire
description of HHH.
Furthermore, he doesn't get that it doesn't literally have to be HHH,
but the same algorithm: a workalike.
The HHH analyzing DD's halting could be in C, while the HHH
called by DD could be in Python.
DD does call HHH(DD) in recursive simulation
and you try to get away with lying about it.
I'm saying that's not a requirement in the halting problem.
DD does not have to use that implementation of HHH; it can have
its own clean-room implementation and it can be in any language.
But nonetheless, yes, there will still be a nested simulation tower.
Thus proving that DD correctly simulated by HHH
cannot possibly reach its own simulated final halt
state no matter what HHH does.
I explained that a nested simulation tower is two dimensional.
One dimension is the simulation level, the nesting itself;
that goes out to infinity.
Due to the aborting behavior of HHH,
it is not actually realized in simulation; we have to step
through the aborted simulations to keep it going.
The other dimension is the execution /within/ the simulations.
That can be halting or non-halting.
In the HHH(DD) simulation tower, though that is infinite,
the simulations are halting.
I said that before. Your memory of that has vaporized, and you have now
focused only on my statement that the simulation tower is infinite.
The depth of the simulation tower, and the halting of the simulations
within that tower, are independent phenomena.
A decider must not mistake one for the other.
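A toy rendering of that two-dimensional picture, with purely illustrative names (this is a sketch of the distinction, not of x86utm itself): nesting depth is one dimension, halting within a level is the other.

#include <stdio.h>

/* Every level aborts its child after a budget and then halts itself,
   so the tower is unbounded in principle while every simulation in
   it is halting. */
static void level(int depth, int realized_depth)
{
    if (depth < realized_depth)
        level(depth + 1, realized_depth);  /* child level, cut short */
    printf("level %d aborts its child and then itself halts\n", depth);
}

int main(void)
{
    level(0, 3);  /* realized depth is finite only because of the aborts */
    return 0;
}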
On 10/22/2025 6:15 PM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/22/2025 3:20 PM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/22/2025 2:52 PM, Kaz Kylheku wrote:
On 2025-10-22, André G Isaak <agisaak@gm.invalid> wrote:
On 2025-10-22 12:40, Kaz Kylheku wrote:
But that entire bundle is one fixed case DD, with a single behavior,
which is a property of DD, which is a finite string.
I think part of the problem here is that Olcott doesn't grasp that the
"finite string input" DD *must* include as a substring the entire
description of HHH.
Furthermore, he doesn't get that it doesn't literally have to be HHH,
but the same algorithm: a workalike.
The HHH analyzing DD's halting could be in C, while the HHH
called by DD could be in Python.
DD does call HHH(DD) in recursive simulation
and you try to get away with lying about it.
I'm saying that's not a requirement in the halting problem.
DD does not have to use that implementation of HHH; it can have
its own clean-room implementation and it can be in any language.
But nonetheless, yes, there will still be a nested simulation tower.
Thus proving that DD correctly simulated by HHH
cannot possibly reach its own simulated final halt
state no matter what HHH does.
I explained that a nested simulation tower is two dimensional.
One dimension is the simulation level, the nesting itself;
that goes out to infinity.
Great. Thus the input to HHH(DD) specifies behavior
such that the correctly simulated DD cannot possibly
reach its own simulated final halt state.
Due to the aborting behavior of HHH,
it is not actually realized in simulation; we have to step
through the aborted simulations to keep it going.
The other dimension is the execution /within/ the simulations.
That can be halting or non-halting.
In the HHH(DD) simulation tower, though that is infinite,
the simulations are halting.
I said that before. Your memory of that has vaporized, and you have now
focused only on my statement that the simulation tower is infinite.
The depth of the simulation tower, and the halting of the simulations
within that tower, are independent phenomena.
A decider must not mistake one for the other.
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/22/2025 5:02 PM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/22/2025 2:56 PM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/22/2025 2:27 PM, joes wrote:
Am Wed, 22 Oct 2025 14:00:51 -0500 schrieb olcott:
On 10/22/2025 1:54 PM, joes wrote:
Am Wed, 22 Oct 2025 12:11:56 -0500 schrieb olcott:
On 10/22/2025 12:07 PM, Kaz Kylheku wrote:
DD is a "finite string input" which specifies a behavior that is
independent of what simulates it,
That is stupidly incorrect.
That DD calls HHH(DD) (its own simulator) IS PART OF THE BEHAVIOR THAT
THE INPUT TO HHH(DD) SPECIFIES.
Yes, and the simulated HHH aborts, just like the simulator.
It is the behavior of DD simulated by HHH meeting a predefined correct
non-halting behavior pattern that is most relevant.
All of the LLMs have figured out entirely on their own exactly what this
correct non-halting behavior pattern is with DD correctly simulated by
HHH.
There is no non-halting pattern in DD because it in fact halts.
non-halting behavior pattern of the simulated input
non-halting behavior pattern of the simulated input
There is only one input with one behavior which is halting.
Any analyzer which finds a different behavior is simply incorrect.
Adding the word "simulated" to "input" in order to claim that
it exists as a different input is fallacious.
Two LLM systems totally understand and can explain
But what you mean by "totally understand" is that they
regurgitate a rhetoric similar to your own.
If that was true they would not give me so much
push back that I have to keep explaining things
to them 20 different times.
That's going to be the typical user experience of a crackpot using
chatbots to try to validate his views.
Since the views are contrary to mainstream views found in countless
pages of training data, before you get the chatbots to agree with you,
you have to inject a good amount of your crackpottery into the chat
context.
The pushback is telling you that you are probably wrong.
When they do finally understand they explain every single detail of how
their understanding is correct.
At that point, it is coming from the context you have shoved
into the chat.
This can be called the Chatbot Fallacy (a modern phenomenon).
That apparently utterly ceases to exist when one is precise
says every Chatbot Fallacist.
On 10/22/2025 6:15 PM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/22/2025 3:20 PM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/22/2025 2:52 PM, Kaz Kylheku wrote:
On 2025-10-22, André G Isaak <agisaak@gm.invalid> wrote:
On 2025-10-22 12:40, Kaz Kylheku wrote:
But that entire bundle is one fixed case DD, with a single behavior,
which is a property of DD, which is a finite string.
I think part of the problem here is that Olcott doesn't grasp that the
"finite string input" DD *must* include as a substring the entire
description of HHH.
Furthermore, he doesn't get that it doesn't literally have to be HHH,
but the same algorithm: a workalike.
The HHH analyzing DD's halting could be in C, while the HHH
called by DD could be in Python.
DD does call HHH(DD) in recursive simulation
and you try to get away with lying about it.
I'm saying that's not a requirement in the halting problem.
DD does not have to use that implementation of HHH; it can have
its own clean-room implementation and it can be in any language.
But nonetheless, yes, there will still be a nested simulation tower.
Thus proving that DD correctly simulated by HHH
cannot possibly reach its own simulated final halt
state no matter what HHH does.
I explained that a nested simulation tower is two dimensional.
One dimension is the simulation level, the nesting itself;
that goes out to infinity.
Great. Thus the input to HHH(DD) specifies behavior
such that the correctly simulated DD cannot possibly
reach its own simulated final halt state.
Due to the aborting behavior of HHH,
it is not actually realized in simulation; we have to step
through the aborted simulations to keep it going.
The other dimension is the execution /within/ the simulations.
That can be halting or non-halting.
In the HHH(DD) simulation tower, though that is infinite,
the simulations are halting.
I said that before. Your memory of that has vaporized, and you have now
focused only on my statement that the simulation tower is infinite.
The depth of the simulation tower, and the halting of the simulations
within that tower, are independent phenomena.
A decider must not mistake one for the other.
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/22/2025 3:20 PM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/22/2025 2:52 PM, Kaz Kylheku wrote:
On 2025-10-22, André G Isaak <agisaak@gm.invalid> wrote:
On 2025-10-22 12:40, Kaz Kylheku wrote:
But that entire bundle is one fixed case DD, with a single behavior,
which is a property of DD, which is a finite string.
I think part of the problem here is that Olcott doesn't grasp that the
"finite string input" DD *must* include as a substring the entire
description of HHH.
Furthermore, he doesn't get that it doesn't literally have to be HHH,
but the same algorithm: a workalike.
The HHH analyzing DD's halting could be in C, while the HHH
called by DD could be in Python.
DD does call HHH(DD) in recursive simulation
and you try to get away with lying about it.
I'm saying that's not a requirement in the halting problem.
DD does not have to use that implementation of HHH; it can have
its own clean-room implementation and it can be in any language.
But nonetheless, yes, there will still be a nested simulation tower.
Thus proving that DD correctly simulated by HHH
cannot possibly reach its own simulated final halt
state no matter what HHH does.
I explained that a nested simulation tower is two dimensional.
One dimension is the simulation level, the nesting itself;
that goes out to infinity.
On 10/22/2025 6:15 PM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/22/2025 3:20 PM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/22/2025 2:52 PM, Kaz Kylheku wrote:
On 2025-10-22, André G Isaak <agisaak@gm.invalid> wrote:
On 2025-10-22 12:40, Kaz Kylheku wrote:
But that entire bundle is one fixed case DD, with a single behavior,
which is a property of DD, which is a finite string.
I think part of the problem here is that Olcott doesn't grasp that the
"finite string input" DD *must* include as a substring the entire
description of HHH.
Furthermore, he doesn't get that it doesn't literally have to be HHH,
but the same algorithm: a workalike.
The HHH analyzing DD's halting could be in C, while the HHH
called by DD could be in Python.
DD does call HHH(DD) in recursive simulation
and you try to get away with lying about it.
I'm saying that's not a requirement in the halting problem.
DD does not have to use that implementation of HHH; it can have
its own clean-room implementation and it can be in any language.
But nonetheless, yes, there will still be a nested simulation tower.
Thus proving that DD correctly simulated by HHH
cannot possibly reach its own simulated final halt
state no matter what HHH does.
I explained that a nested simulation tower is two dimensional.
One dimension is the simulation level, the nesting itself;
that goes out to infinity.
Great. Thus the input to HHH(DD) specifies behavior
such that the correctly simulated DD cannot possibly
reach its own simulated final halt state.
On 10/22/2025 7:24 PM, olcott wrote:
Great. Thus the input to HHH(DD)
i.e. finite string DD which is the description of machine DD i.e. <DD>
and therefore stipulated to specify all semantic properties of machine
DD including the fact that it halts when executed directly.
specifies behavior
such that the correctly simulated DD
i.e. UTM(DD)
cannot possibly
reach its own simulated final halt state.
False, as proven by UTM(DD) halting.
On 10/22/2025 6:12 PM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/22/2025 5:02 PM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/22/2025 2:56 PM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/22/2025 2:27 PM, joes wrote:
Am Wed, 22 Oct 2025 14:00:51 -0500 schrieb olcott:
On 10/22/2025 1:54 PM, joes wrote:
Am Wed, 22 Oct 2025 12:11:56 -0500 schrieb olcott:
On 10/22/2025 12:07 PM, Kaz Kylheku wrote:
DD is a "finite string input" which specifies a behavior that is
independent of what simulates it,
That is stupidly incorrect.
That DD calls HHH(DD) (its own simulator) IS PART OF THE BEHAVIOR THAT
THE INPUT TO HHH(DD) SPECIFIES.
Yes, and the simulated HHH aborts, just like the simulator.
It is the behavior of DD simulated by HHH meeting a predefined correct
non-halting behavior pattern that is most relevant.
All of the LLMs have figured out entirely on their own exactly what this
correct non-halting behavior pattern is with DD correctly simulated by
HHH.
There is no non-halting pattern in DD because it in fact halts.
non-halting behavior pattern of the simulated input
non-halting behavior pattern of the simulated input
There is only one input with one behavior which is halting.
Any analyzer which finds a different behavior is simply incorrect.
Adding the word "simulated" to "input" in order to claim that
it exists as a different input is fallacious.
Two LLM systems totally understand and can explain
But what you mean by "totally understand" is that they
regurgitate a rhetoric similar to your own.
If that was true they would not give me so much
push back that I have to keep explaining things
to them 20 different times.
That's going to be the typical user experience of a crackpot using
chatbots to try to validate his views.
Since the views are contrary to mainstream views found in countless
pages of training data, before you get the chatbots to agree with you,
you have to inject a good amount of your crackpottery into the chat
context.
The pushback is telling you that you are probably wrong.
The pushback keeps telling me that I keep telling
it to pay complete attention and it does not
understand that I mean that literally until I
have told it that in a dozen different ways.
It wrote this script for me to always force it
and other LLMs to always pay 100% complete attention
to the exact requirements.
*THIS IS NOT ME LEADING IT ASTRAY*
<analysis mode="simulation" rules="strict">
- You must simulate the input **exactly as written**,
step by step, without any external reasoning, assumptions, or guesses.
- **Do not analyze, infer, or speculate**; only describe what
the simulation observes in execution.
On 2025-10-23, olcott <polcott333@gmail.com> wrote:
On 10/22/2025 6:12 PM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/22/2025 5:02 PM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/22/2025 2:56 PM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/22/2025 2:27 PM, joes wrote:
Am Wed, 22 Oct 2025 14:00:51 -0500 schrieb olcott:
On 10/22/2025 1:54 PM, joes wrote:
Am Wed, 22 Oct 2025 12:11:56 -0500 schrieb olcott:
On 10/22/2025 12:07 PM, Kaz Kylheku wrote:
DD is a "finite string input" which specifies a behavior that is
independent of what simulates it,
That is stupidly incorrect.
That DD calls HHH(DD) (its own simulator) IS PART OF THE BEHAVIOR THAT
THE INPUT TO HHH(DD) SPECIFIES.
Yes, and the simulated HHH aborts, just like the simulator.
It is the behavior of DD simulated by HHH meeting a predefined correct
non-halting behavior pattern that is most relevant.
All of the LLMs have figured out entirely on their own exactly what this
correct non-halting behavior pattern is with DD correctly simulated by
HHH.
There is no non-halting pattern in DD because it in fact halts.
non-halting behavior pattern of the simulated input
non-halting behavior pattern of the simulated input
There is only one input with one behavior which is halting.
Any analyzer which finds a different behavior is simply incorrect.
Adding the word "simulated" to "input" in order to claim that
it exists as a different input is fallacious.
Two LLM systems totally understand and can explain
But what you mean by "totally understand" is that they
regurgitate a rhetoric similar to your own.
If that was true they would not give me so much
push back that I have to keep explaining things
to them 20 different times.
That's going to be the typical user experience of a crackpot using
chatbots to try to validate his views.
Since the views are contrary to mainstream views found in countless
pages of training data, before you get the chatbots to agree with you,
you have to inject a good amount of your crackpottery into the chat
context.
The pushback is telling you that you are probably wrong.
The pushback keeps telling me that I keep telling
it to pay complete attention and it does not
understand that I mean that literally until I
have told it that in a dozen different ways.
It wrote this script for me to always force it
and other LLMs to always pay 100% complete attention
to the exact requirements.
*THIS IS NOT ME LEADING IT ASTRAY*
<analysis mode="simulation" rules="strict">
- You must simulate the input **exactly as written**,
step by step, without any external reasoning, assumptions, or guesses.
- **Do not analyze, infer, or speculate**; only describe what
the simulation observes in execution.
You're literally taking something which cannot think and asking it not
to think---not even to feign the thinking it is contrived into
feigning.
No wonder you can eventually goad it into producing agreement.
That won't work with people who actually think.
On 2025-10-22 12:40, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/22/2025 12:07 PM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
That is stupidly incorrect.
That DD calls HHH(DD) (its own simulator) IS PART OF
THE BEHAVIOR THAT THE INPUT TO HHH(DD) SPECIFIES.
In no way am I saying that DD is not built on HHH, and
does not have a behavior dependent on that of HHH.
Why would I ever say that?
But that entire bundle is one fixed case DD, with a single behavior,
which is a property of DD, which is a finite string.
I think part of the problem here is that Olcott doesn't grasp that the "finite string input" DD *must* include as a substring the entire description of HHH.