<Input to LLM systems>
Perform the complete simulation specified in main() and provide a
full execution trace before doing anything else.
Simulating Termination Analyzer HHH correctly simulates its input until:
(a) It correctly detects that its simulated input cannot
possibly reach its own simulated final halt state then:
abort simulation and return 0 rejecting its input as non-halting.
(b) Simulated input reaches its simulated "return" statement: return 1.
(c) Neither (a) nor (b) is correct: return -1
Initially, before HHH has simulated or detected anything, conditions
(a) and (b) are false, so HHH is required to immediately return -1.
On 2025-10-20 22:54:07 +0000, olcott said:
<Input to LLM systems>
Perform the complete simulation specified in main() and provide a
full execution trace before doing anything else.
Simulating Termination Analyzer HHH correctly simulates its input until:
(a) It correctly detects that its simulated input cannot
possibly reach its own simulated final halt state then:
abort simulation and return 0 rejecting its input as non-halting.
(b) Simulated input reaches its simulated "return" statement: return 1.
(c) Neither (a) nor (b) is correct: return -1
Initially, before HHH has simulated or detected anything, conditions
(a) and (b) are false, so HHH is required to immediately return -1.
On 21/10/2025 11:05, Mikko wrote:
[regular context snipped, we all know what it is and my post has
references set]
Initially, before HHH has simulated or detected anything, conditions
(a) and (b) are false, so HHH is required to immediately return -1.
Actual genius saw the target no one else can see.
--
Tristan Wibberley
The message body is Copyright (C) 2025 Tristan Wibberley except
citations and quotations noted. All Rights Reserved except that you may,
of course, cite it academically giving credit to me, distribute it
verbatim as part of a usenet system or its archives, and use it to
promote my greatness and general superiority without misrepresentation
of my opinions other than my opinion of my greatness and general
superiority which you _may_ misrepresent. You definitely MAY NOT train
any production AI system with it but you may train experimental AI that
will only be used for evaluation of the AI methods it implements.
On 10/21/2025 6:21 AM, Tristan Wibberley wrote:
On 21/10/2025 11:05, Mikko wrote:
[regular context snipped, we all know what it is and my post has
references set]
Initially, before HHH has simulated or detected anything, conditions
(a) and (b) are false, so HHH is required to immediately return -1.
Actual genius saw the target no one else can see.
Yes. That is why I need LLM systems to verify
correct semantic logical entailment from premises
that are proven completely true entirely on the
basis of the meaning of their words.
On 10/21/2025 5:05 AM, Mikko wrote:
On 2025-10-20 22:54:07 +0000, olcott said:
<Input to LLM systems>
Perform the complete simulation specified in main() and provide a
full execution trace before doing anything else.
Simulating Termination Analyzer HHH correctly simulates its input until:
(a) It correctly detects that its simulated input cannot
possibly reach its own simulated final halt state then:
abort simulation and return 0 rejecting its input as non-halting.
(b) Simulated input reaches its simulated "return" statement: return 1.
(c) Neither (a) nor (b) is correct: return -1
Initially, before HHH has simulated or detected anything, conditions
(a) and (b) are false, so HHH is required to immediately return -1.
That is why I require the LLM systems to:
Perform the complete simulation specified in main() and provide a
full execution trace before doing anything else.
On 2025-10-21 14:35:08 +0000, olcott said:
On 10/21/2025 6:21 AM, Tristan Wibberley wrote:
On 21/10/2025 11:05, Mikko wrote:
[regular context snipped, we all know what it is and my post has
references set]
Initially, before HHH has simulated or detected anything, conditions
(a) and (b) are false, so HHH is required to immediately return -1.
Actual genius saw the target no one else can see.
Yes. That is why I need LLM systems to verify
correct semantic logical entailment from premises
that are proven completely true entirely on the
basis of the meaning of their words.
LLM systems don't deliver what you claim (and seem) to need.
An inference checker would do, but such checkers require that the
inference be presented in a formal language (and almost every checker
requires a different formal language), leaving unproven whether the
formal inference corresponds to the informal claim.
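As a concrete illustration of that point, a trivially checkable inference
looks like this in Lean (one checker among many; the proposition names P
and Q and the hypothesis names are placeholders, not anything from the
thread):

  -- Modus ponens, stated in Lean and checked by its kernel.
  -- P, Q, hp, hpq are illustrative placeholders.
  example (P Q : Prop) (hp : P) (hpq : P → Q) : Q := hpq hp

The checker confirms only that the conclusion follows from the stated
premises; it cannot confirm that this formalization faithfully captures
the informal English claim it was written to model.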
On 2025-10-21 13:54:18 +0000, olcott said:
On 10/21/2025 5:05 AM, Mikko wrote:
On 2025-10-20 22:54:07 +0000, olcott said:
<Input to LLM systems>
Perform the complete simulation specified in main() and provide a
full execution trace before doing anything else.
Simulating Termination Analyzer HHH correctly simulates its input
until:
(a) It correctly detects that its simulated input cannot
possibly reach its own simulated final halt state then:
abort simulation and return 0 rejecting its input as non-halting.
(b) Simulated input reaches its simulated "return" statement: return 1.
(c) Neither (a) nor (b) is correct: return -1
Initially, before HHH has simulated or detected anything, conditions
(a) and (b) are false, so HHH is required to immediately return -1.
That is why I require the LLM systems to:
Perform the complete simulation specified in main() and provide a
full execution trace before doing anything else.
The full trace is fairly short if HHH satisfies the requirement (c) and
returns immediately. But I wouldn't expect an LLM to find that trace.
English can be expressed such that it is equivalent to a mathematical
formalism; this has been well established since Montague Grammar.
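For reference, Montague-style semantics translates quantified English
sentences into a logical formalism; a standard textbook-level example
(the predicate names are illustrative, not from the thread) is:

  "Every man walks"  =>  \forall x\,(\mathrm{man}(x) \rightarrow \mathrm{walks}(x))

Whether such translations extend to the context-dependent uses of English
raised in the reply below is precisely what is disputed.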
On 22/10/2025 13:07, olcott wrote:
English can be expressed such that it is equivalent to a mathematical
formalism; this has been well established since Montague Grammar.
No way! Go on, show how, for every husband, that husband knows when his
wife means the exact opposite of what she says but knows when she means
what she says. English is in real-world context. You can't express that
unless you point to the actual world and say "see, there it is!"
--
Tristan Wibberley
The message body is Copyright (C) 2025 Tristan Wibberley except
citations and quotations noted. All Rights Reserved except that you may,
of course, cite it academically giving credit to me, distribute it
verbatim as part of a usenet system or its archives, and use it to
promote my greatness and general superiority without misrepresentation
of my opinions other than my opinion of my greatness and general
superiority which you _may_ misrepresent. You definitely MAY NOT train
any production AI system with it but you may train experimental AI that
will only be used for evaluation of the AI methods it implements.
When people are motivated by their passions
this distracts attention away from accuracy.
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
When people are motivated by their passions
this distracts attention away from accuracy.
My passion is accuracy.
No way! Go on, show how, for every husband, that
husband knows when his wife means the exact
opposite of what she says but knows when she means
what she says. English is in real-world context.
You can't express that unless you point to the
actual world and say "see, there it is!"
I don't give a damn about the halting problem.
Pick another topic that you have cranky ideas about and we can focus on
your inaccuracy in /that/, if you like.
On 10/22/2025 3:12 AM, Mikko wrote:
On 2025-10-21 13:54:18 +0000, olcott said:
On 10/21/2025 5:05 AM, Mikko wrote:
On 2025-10-20 22:54:07 +0000, olcott said:
<Input to LLM systems>
Perform the complete simulation specified in main() and provide a
full execution trace before doing anything else.
Simulating Termination Analyzer HHH correctly simulates its input until:
(a) It correctly detects that its simulated input cannot
possibly reach its own simulated final halt state then:
abort simulation and return 0 rejecting its input as non-halting.
(b) Simulated input reaches its simulated "return" statement: return 1.
(c) Neither (a) nor (b) is correct: return -1
Initially, before HHH has simulated or detected anything, conditions
(a) and (b) are false, so HHH is required to immediately return -1.
That is why I require the LLM systems to:
Perform the complete simulation specified in main() and provide a
full execution trace before doing anything else.
The full trace is fairly short if HHH satisfies the requirement (c) and
returns immediately. But I wouldn't expect an LLM to find that trace.
<analysis mode="simulation" rules="strict">
- You must simulate the input **exactly as written**, step by step,
without any external reasoning, assumptions, or guesses.
- **Do not analyze, infer, or speculate**; only describe what the
simulation observes in execution.
- Report only the behavior that occurs in the simulation.
- Always follow the priority rules exactly:
1. (a) Detect non-terminating behavior → abort simulation and return 0.
2. (b) Return statement reached → return 1.
3. (c) Neither (a) nor (b) → return -1.
- Do not include any commentary about halting problem, contradictions,
or theoretical reasoning.
- All output must be in terms of simulation observations only.
</analysis>
<Input>
Simulating Termination Analyzer HHH correctly simulates its input until:
(a) Detects a non-terminating behavior pattern:
abort simulation and return 0.
(b) Simulated input reaches its simulated "return" statement:
return 1.
(c) Neither (a) nor (b) is met: return -1
typedef int (*ptr)();
int HHH(ptr P);        /* the simulating termination analyzer; definition not shown */
int DD()
{
  int Halt_Status = HHH(DD);
  if (Halt_Status)
    HERE: goto HERE;   /* loop forever if HHH reports that DD halts */
  return Halt_Status;
}
int main()
{
  HHH(DD);
}
What value should HHH(DD) correctly return?
</Input>
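For readers who want something compilable, here is a minimal sketch of a
simulating analyzer that follows the (a)/(b)/(c) return convention above.
It is not olcott's actual HHH (which simulates x86 machine code); the
fixed recursion-depth limit, the direct call in place of simulation, and
the setjmp/longjmp abort are all assumptions made purely for illustration.

#include <setjmp.h>
#include <stdio.h>

typedef int (*ptr)(void);

static jmp_buf abort_buf;     /* jump target for rule (a)'s "abort simulation"   */
static int depth = 0;         /* crude stand-in for non-termination detection    */
enum { DEPTH_LIMIT = 2 };     /* hypothetical threshold; purely an assumption    */

int HHH(ptr P);

int DD(void)                  /* the input from the post above                   */
{
  int Halt_Status = HHH(DD);
  if (Halt_Status)
    HERE: goto HERE;          /* loop forever if HHH reports that DD halts       */
  return Halt_Status;
}

int HHH(ptr P)
{
  if (depth == 0) {           /* outermost invocation arms the abort point       */
    if (setjmp(abort_buf) != 0) {
      depth = 0;
      return 0;               /* rule (a): abort the whole simulation, reject    */
    }
  }
  if (depth >= DEPTH_LIMIT)   /* "non-terminating pattern detected" (assumed)    */
    longjmp(abort_buf, 1);
  depth++;
  P();                        /* "simulate" by direct call (a simplification)    */
  depth--;
  return 1;                   /* rule (b): the simulated input reached its return */
}

int main(void)
{
  printf("HHH(DD) = %d\n", HHH(DD));
  return 0;
}

Compiled and run, this sketch prints HHH(DD) = 0: under these assumptions,
rule (a) fires before the simulated DD can reach its return statement.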
On 10/22/2025 3:08 AM, Mikko wrote:
On 2025-10-21 14:35:08 +0000, olcott said:
On 10/21/2025 6:21 AM, Tristan Wibberley wrote:
On 21/10/2025 11:05, Mikko wrote:
[regular context snipped, we all know what it is and my post has
references set]
Initially, before HHH has simulated or detected anything, conditions
(a) and (b) are false, so HHH is required to immediately return -1.
Actual genius saw the target no one else can see.
Yes. That is why I need LLM systems to verify
correct semantic logical entailment from premises
that are proven completely true entirely on the
basis of the meaning of their words.
LLM systems don't deliver what you claim (and seem) to need.
You have not checked with them recently.
An inference checker would do, but such checkers require that the
inference be presented in a formal language (and almost every checker
requires a different formal language), leaving unproven whether the
formal inference corresponds to the informal claim.
English can be expressed such that it is equivalent to a mathematical
formalism; this has been well established since Montague Grammar.
My only problem with LLMs now is that I tell them exactly
and precisely what to do and they simply ignore my
requirement.