On 2025-10-01, olcott <polcott333@gmail.com> wrote:
Not at all.
When LLM systems have otherwise good reasoning
They don't have reasoning. They have statistical token prediction,
plus a few dog-and-pony tricks.
and are required to cite all of their sources
Such an application exists; for instance Google's NotebookLM.
It's an application in which you create separate workspaces into which
you upload documents of your choice.
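To make the "required to cite all of their sources" idea concrete, here is a toy sketch in C of the contract such a workspace enforces: answer only from the documents the user uploaded, and say which one. The keyword matcher, the document names, and their contents are all invented for illustration and have no connection to how NotebookLM actually works.

/* Toy sketch (assumptions throughout; nothing to do with NotebookLM's real
 * internals) of "answer only from documents you uploaded, and cite them":
 * score each uploaded document by how many query words it contains and
 * report the best match as the citation, refusing to answer from thin air. */
#include <stdio.h>
#include <string.h>

struct doc {
    const char *name;
    const char *text;
};

/* the "workspace": documents the user chose to upload (made up) */
static const struct doc workspace[] = {
    { "notes_on_turing_1936.txt",
      "turing proved no general procedure decides halting" },
    { "llm_reading_list.txt",
      "language models predict the next token from statistics" },
};

static int score(const char *text, const char *query)
{
    char buf[256];
    char *tok;
    int hits = 0;

    strncpy(buf, query, sizeof buf - 1);
    buf[sizeof buf - 1] = '\0';
    for (tok = strtok(buf, " "); tok != NULL; tok = strtok(NULL, " "))
        if (strstr(text, tok) != NULL)
            hits++;
    return hits;
}

int main(void)
{
    const char *query = "who decides halting";
    int i, best = -1, best_score = 0;

    for (i = 0; i < (int)(sizeof workspace / sizeof workspace[0]); i++) {
        int s = score(workspace[i].text, query);
        if (s > best_score) {
            best_score = s;
            best = i;
        }
    }
    if (best < 0)
        puts("No uploaded document supports an answer; refusing to guess.");
    else
        printf("Answer grounded in: %s [cited: %s]\n",
               workspace[best].text, workspace[best].name);
    return 0;
}

The only point is the shape of the contract: no supporting uploaded document, no answer.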
Large language models can do jaw-dropping things. But nobody knows
exactly why.
https://www.technologyreview.com/2024/03/04/1089403/large-language-models-amazing-but-nobody-knows-why/
On 01/10/2025 22:12, Kaz Kylheku wrote:
On 2025-10-01, olcott <polcott333@gmail.com> wrote:
Not at all.
When LLM systems have otherwise good reasoning
They don't have reasoning. They have statistical token prediction,
plus a few dog-and-pony tricks.
Did an AI tell you that?
On 2025-10-19, Tristan Wibberley <tristan.wibberley+netnews2@alumni.manchester.ac.uk> wrote:
On 01/10/2025 22:12, Kaz Kylheku wrote:
On 2025-10-01, olcott <polcott333@gmail.com> wrote:
Not at all.
When LLM systems have otherwise good reasoning
They don't have reasoning. They have statistical token prediction,
plus a few dog-and-pony tricks.
Did an AI tell you that?
No.
On 19/10/2025 19:25, Kaz Kylheku wrote:
On 2025-10-19, Tristan Wibberley <tristan.wibberley+netnews2@alumni.manchester.ac.uk> wrote:
On 01/10/2025 22:12, Kaz Kylheku wrote:
On 2025-10-01, olcott <polcott333@gmail.com> wrote:
Not at all.
When LLM systems have otherwise good reasoning
They don't have reasoning. They have statistical token prediction,
plus a few dog-and-pony tricks.
Did an AI tell you that?
No.
Trouble is, Kaz, they're getting pretty good at faking it, and
"pretty good" is good enough for a lot of people. Sometimes you
have to scratch quite hard to get at the inanity underneath, and
not everybody wants to scratch; narcissism and gullibility, for
example, are both contraindications for truth-digging.
On 2025-10-19, Richard Heathfield <rjh@cpax.org.uk> wrote:
On 19/10/2025 19:25, Kaz Kylheku wrote:
On 2025-10-19, Tristan Wibberley <tristan.wibberley+netnews2@alumni.manchester.ac.uk> wrote:
On 01/10/2025 22:12, Kaz Kylheku wrote:
On 2025-10-01, olcott <polcott333@gmail.com> wrote:
Not at all.
When LLM systems have otherwise good reasoning
They don't have reasoning. They have statistical token prediction,
plus a few dog-and-pony tricks.
Did an AI tell you that?
No.
Trouble is, Kaz, they're getting pretty good at faking it, and
"pretty good" is good enough for a lot of people. Sometimes you
have to scratch quite hard to get at the inanity underneath, and
not everybody wants to scratch; narcissism and gullibility, for
example, are both contraindications for truth-digging.
Like I remarked in my other posting, when language models are trained,
the one thing they are learning more than anything from the majority
of their inputs is *grammar*.
The inputs are diverse texts about every imaginable thing, but what they
all have in common is grammar, and so that's what the LLM learns
best.
Imagine you're a dummy who got out of high school with C's and D's, and
nobody in your family can write so much as a postcard without grammar
and spelling errors, right down to people's names. The chatbots must
seem like geniuses!
If you're smarter than that, but not versed in a topic, wow,
the LLM output about that topic sure looks smart.
When you're versed in a topic, wow, shitshow. Not every single time,
but often enough that it's obvious the LLM is just faking it, talking
the talk by pulling sequences of tokens out of its training materials.
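To see how little machinery "statistical token prediction" needs before it starts to sound fluent, here is a deliberately tiny sketch in C -- a bigram counter, nothing remotely like a real transformer, and not anyone's production code. All it learns from its training text is which word tends to follow which, i.e. surface word order, and that is all it can reproduce.

/* Toy sketch of statistical token prediction: count which word follows
 * which in a tiny corpus, then generate text by repeatedly emitting the
 * most frequent follower of the previous word.  The output tends to be
 * locally grammatical because word-order statistics are all the counts
 * capture -- there is no model of meaning anywhere. */
#include <stdio.h>
#include <string.h>

#define MAXW 64    /* distinct words we bother to track */
#define WLEN 16    /* longest word we store */

static char vocab[MAXW][WLEN];
static int  nvocab;
static int  follow[MAXW][MAXW];  /* follow[a][b]: times word b followed word a */

static int word_id(const char *w)
{
    int i;
    for (i = 0; i < nvocab; i++)
        if (strcmp(vocab[i], w) == 0)
            return i;
    if (nvocab == MAXW)
        return -1;
    strncpy(vocab[nvocab], w, WLEN - 1);
    return nvocab++;
}

int main(void)
{
    /* "training data": diverse statements, common grammar */
    const char *corpus =
        "the cat sat on the mat "
        "the dog sat on the rug "
        "the cat chased the dog "
        "the dog chased the cat ";
    char buf[512];
    char *tok;
    int prev = -1, cur, i, steps;

    strcpy(buf, corpus);
    for (tok = strtok(buf, " "); tok != NULL; tok = strtok(NULL, " ")) {
        cur = word_id(tok);
        if (prev >= 0 && cur >= 0)
            follow[prev][cur]++;
        prev = cur;
    }

    /* "generate": greedily emit the most frequent follower each step */
    cur = word_id("the");
    for (steps = 0; steps < 8; steps++) {
        int best = -1, bestcount = 0;
        printf("%s ", vocab[cur]);
        for (i = 0; i < nvocab; i++)
            if (follow[cur][i] > bestcount) {
                bestcount = follow[cur][i];
                best = i;
            }
        if (best < 0)
            break;
        cur = best;
    }
    putchar('\n');
    return 0;
}

It prints "the cat sat on the cat sat on": locally fluent, globally saying nothing, which is the toy version of the effect being described.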
In Billy Joel's "Piano Man", the verse goes, "the waitress is
practicing politics, as the businessmen slowly get stoned".
LLMs are exactly like the waitress practicing politics (or any other
topic she overhears from the regulars at the bar).
She can interject with something smart sounding in the conversation and,
man, never mind the piano man, what is /she/ doing here?
The character of Penny in the sitcom "The Big Bang Theory"
exploits this trope also. (E.g. Episode 13, Penny helps Sheldon
solve his equation.)
The inputs are diverse texts about every imaginable thing, but what they
all have in common is grammar, and so that's what the LLM learns
best.
...someone in a usenet group tells you they have disproved
a fundamental maths theorem
On 19/10/2025 20:51, Kaz Kylheku wrote:
The inputs are diverse texts about every imaginable thing, but what they
all have in common is grammar, and so that's what the LLM learns
best.
Since language interpretation is conventionally just translation into
other languages, I expect semantics to be modelled as language, with its
own grammar (consequences of the laws of physics, consequences of the
conventions and technologies of the built environment, etc.).
What do you think: are semantics special?
... so if you have some equations whose solution indicates the paths of
three bodies in space, those paths are not what the equations are
actually about.
Kaz, I note we're getting quite off-topic.
On 2025-10-21, Tristan Wibberley <tristan.wibberley+netnews2@alumni.manchester.ac.uk> wrote:
Kaz, I note we're getting quite off-topic.
Right, sorry. Where were we, again?
HH(DD) wrongly returns 0, because DD halts.
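For anyone joining the thread late, the construction that sentence refers to has roughly the following shape. This is a compilable sketch, not olcott's actual listing: HH is stubbed here to return the disputed 0, since its real internals are the whole argument.

/* Sketch of the diagonal construction under discussion.  HH is supposed to
 * return 1 if the function it is given halts and 0 if it does not; the stub
 * below just returns 0 ("does not halt"), the answer being disputed, so we
 * can watch DD halt anyway. */
#include <stdio.h>

typedef int (*func)(void);

static int HH(func P)      /* stand-in for the claimed halt decider */
{
    (void)P;
    return 0;              /* "DD does not halt" -- the disputed verdict */
}

static int DD(void)        /* DD does the opposite of whatever HH predicts */
{
    int halt_status = HH(DD);
    if (halt_status)       /* HH says "halts"             -> loop forever    */
        for (;;)
            ;
    return halt_status;    /* HH says "does not halt" (0) -> return, i.e. halt */
}

int main(void)
{
    printf("DD() returned %d and halted, so HH(DD) == 0 was wrong.\n", DD());
    return 0;
}

Whichever verdict HH gives, DD is built to do the opposite of it, which is why a 0 from HH(DD) is contradicted by DD halting.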