On 10/12/25 12:04 PM, olcott wrote:
Also very important is that there is no chance of
AI hallucination when these systems are only reasoning
within a set of premises. Some systems must be told:
Please think this all the way through without making any guesses
I don't mean to be rude, but that is a completely insane assertion to
me. There is always a non-zero chance for an LLM to roll a bad token
during inference and spit out garbage.
Sure, the top-p decoding strategy
can help minimize such mistakes by pruning the token pool of the worst
of the bad apples, but such models will never *ever* be foolproof. The
price you pay for convincingly generating natural language is giving up
bulletproof reasoning.
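To make the point concrete, here is a minimal sketch of top-p (nucleus) sampling in C, using a toy five-token vocabulary; the threshold and names are illustrative, not taken from any real inference stack. Note that even after the pool is pruned, the final pick is still a random draw, which is exactly why the output can never be guaranteed.

/* Minimal sketch of top-p (nucleus) sampling over a toy vocabulary.
 * Probabilities are assumed already normalized; all names here are
 * illustrative, not from any particular LLM implementation. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N 5

typedef struct { int id; double p; } Tok;

/* Sort tokens by descending probability. */
static int cmp_desc(const void *a, const void *b)
{
    double d = ((const Tok *)b)->p - ((const Tok *)a)->p;
    return (d > 0) - (d < 0);
}

/* Keep only the smallest prefix of tokens whose cumulative probability
 * reaches top_p, then sample from that pruned pool.  A low-probability
 * token inside the nucleus can still be chosen. */
static int sample_top_p(Tok *toks, int n, double top_p)
{
    qsort(toks, n, sizeof *toks, cmp_desc);

    double cum = 0.0;
    int keep = 0;
    while (keep < n && cum < top_p)
        cum += toks[keep++].p;

    double r = ((double)rand() / RAND_MAX) * cum;
    double acc = 0.0;
    for (int i = 0; i < keep; i++) {
        acc += toks[i].p;
        if (r <= acc)
            return toks[i].id;
    }
    return toks[keep - 1].id;
}

int main(void)
{
    Tok toks[N] = { {0, 0.50}, {1, 0.25}, {2, 0.15}, {3, 0.07}, {4, 0.03} };
    srand((unsigned)time(NULL));
    printf("sampled token id: %d\n", sample_top_p(toks, N, 0.9));
}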
If you're interested in formalizing your ideas using cutting-edge tech,
I encourage you to look at Lean 4. Once you provide a machine-checked
proof in Lean 4 with no `sorry`/`axiom`/other cheats, come back. People might adopt a very different tone.
Best of luck, you will need it.
On 12/10/2025 16:53, Bonita Montero wrote:
Sorry, that's silly. You spend half your life discussing the
same problem over and over again and never get to the end.
This gives PO a narrative he can hold on to, one that gives his life meaning: he is the heroic, world-saving, unrecognised genius, constantly struggling against "the system" right up to his final breath! If he
were to suddenly realise he was just a deluded dumbo who had wasted most
of his life arguing over a succession of mistakes and misunderstandings
on his part, and had never contributed a single idea of any academic
value, would his life be better? I think not.
Thankfully he has recently discovered chatbots that can give him the uncritical approval he craves,
so there is next to no chance of that
happening now. [Assuming they don't suddenly get better, to the point where they can genuinely analyse and criticise his claims in the way we do... Given how they currently work, I don't see that happening any
time soon.]
Would the lives of other posters here be better? That's a trickier question.
Mike.
On 12.10.2025 at 15:50, olcott wrote:
Please think this all the way through without making any guesses
<Input to LLM systems>
Simulating Termination Analyzer HHH correctly simulates its input until:
(a) Detects a non-terminating behavior pattern:
    abort simulation and return 0.
(b) Simulated input reaches its simulated "return" statement:
    return 1.
(c) If HHH must abort its simulation to prevent its own non-termination
    then HHH is correct to abort this simulation and return 0.
typedef int (*ptr)();
int HHH(ptr P);

int DD()
{
  int Halt_Status = HHH(DD);
  if (Halt_Status)
    HERE: goto HERE;
  return Halt_Status;
}

int main()
{
  HHH(DD);
}
What value should HHH(DD) correctly return?
</Input to LLM systems>