Very simple challenge conceptually: develop the idea
of Centipawn towards TicTacToe and implement the
game based on learning / training a transformer, and
then executing it. All written in Prolog itself! Optional
bonus exercise: make the execution ИИUƎ style, i.e.
incremental evaluation of the transformer.
Centipawn - Chess Wiki
https://chess.fandom.com/wiki/Centipawn
NNUE - Chess Programming Wiki
https://www.chessprogramming.org/NNUE
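To make the Centipawn-for-TicTacToe idea concrete before any learning
is involved, here is a minimal hand-written static evaluation in Prolog
(SWI-Prolog; the +100/-100 "centi-win" scale and the open-lines heuristic
are just a toy choice of mine). A trained transformer would eventually
stand in for eval/2.

```prolog
% Board as a list of 9 cells: x, o, or e (empty).
line([1,2,3]). line([4,5,6]). line([7,8,9]).
line([1,4,7]). line([2,5,8]). line([3,6,9]).
line([1,5,9]). line([3,5,7]).

cells(Board, Idx, Cells) :-
    findall(C, (member(I, Idx), nth1(I, Board, C)), Cells).

% Centipawn-style score from the perspective of x:
% +100 = x has won, -100 = o has won, otherwise a small
% heuristic counting the lines still open for each side.
eval(Board, 100)  :- line(L), cells(Board, L, [x,x,x]), !.
eval(Board, -100) :- line(L), cells(Board, L, [o,o,o]), !.
eval(Board, Score) :-
    aggregate_all(count, (line(L), cells(Board, L, Cs), \+ member(o, Cs)), OpenX),
    aggregate_all(count, (line(L), cells(Board, L, Cs), \+ member(x, Cs)), OpenO),
    Score is 10 * (OpenX - OpenO).

% ?- eval([x,x,x, o,o,e, e,e,e], S).   % S = 100
% ?- eval([e,e,e, e,e,e, e,e,e], S).   % S = 0
```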
ILP might fail by design because it is too strict
Prologers are still on the path of Don Quixote:
an extremely restrictive setting, and the only reason
it's worked so well over the years is that people
have persisted at flogging it like the deadest
of dead horses.
For some it's a dead horse; for others, by means of the
two Nobel Prizes, one for Geoffrey Hinton in Physics and
one for Demis Hassabis in Chemistry, both in 2024,
it's rather a wake-up call.
The current state of affairs in Prolog is: autoencoders
and transformers are not available via ILP. ILP lacks the
conceptual setting, because it is based on a model of
belief congruence, trying to avoid cognitive dissonance.
Basically ILP adopts abduction as already conceived by
Charles Sanders Peirce, who is also the originator of
Conceptual Graphs. The problem is posed for some
background knowledge B and some observation E, and the
idea is to find a hypothesis H such that:
Consistency:  B, H |/- f   /* no absurdity */
Completeness: B, H |- E
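To make the two conditions concrete, here is a tiny Prolog toy
(the bird/penguin clauses are just an illustrative stand-in for
B, H and E):

```prolog
% Background knowledge B
bird(tweety).
bird(polly).
penguin(polly).

% Observation E to be explained:  E = flies(tweety)

% Candidate hypothesis H
flies(X) :- bird(X), \+ penguin(X).

% Completeness: B, H |- E
% ?- flies(tweety).   % succeeds
%
% Consistency: B, H |/- f, e.g. no flying penguin is derivable
% ?- flies(polly).    % fails
```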
There is also a refinement with positive and negative
observations E+ and E-, where H together with B must entail
E+ but not E-. The challenge I am positing is to get some
hands-on experience and see what the merits of autoencoders
and transformers are, and maybe to see whether there is a
possible marriage of autoencoders and transformers with ILP.
The difficulty here is that autoencoders and transformers
have no concept of absurdity. The main features of
extrapolation in autoencoders and transformers are
(a toy sketch follows the list):
- Inferencing:
  The autoencoder might tolerate deviations in the input
  that are not in the training data, giving it some
  inferential capability.
- Generation:
  It can then also choose an output that is not in the
  training data, giving it some generative capability.
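A toy Prolog stand-in (not an actual autoencoder, just
nearest-prototype reconstruction) to make the two points tangible:

```prolog
% "Training data": two stored prototype bit patterns.
prototype([1,1,1,0,0,0]).
prototype([0,0,0,1,1,1]).

% Hamming distance between two equal-length bit lists.
hamming([], [], 0).
hamming([X|Xs], [Y|Ys], D) :-
    hamming(Xs, Ys, D0),
    ( X =:= Y -> D = D0 ; D is D0 + 1 ).

% Inferencing: an input that never occurs in the training data
% is still mapped to the nearest prototype, tolerating the deviation.
reconstruct(Input, Proto) :-
    findall(D-P, (prototype(P), hamming(Input, P, D)), Pairs),
    keysort(Pairs, [_-Proto|_]).

% Generation: emit an output that is again not in the training data,
% here by flipping a single bit of a prototype.
generate(Out) :-
    prototype(P),
    append(Prefix, [Bit|Suffix], P),
    Flip is 1 - Bit,
    append(Prefix, [Flip|Suffix], Out).

% ?- reconstruct([1,0,1,0,0,0], R).   % R = [1,1,1,0,0,0], input was never seen
% ?- generate(O).                     % e.g. O = [0,1,1,0,0,0]
```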
There is no measurement against absurdity in the
inferencing and no measurement against absurdity in the
generation. This is also seen in practice: when you
interact with ChatGPT, it can hallucinate unicorns, and it
can even make mistakes within the hallucination, like
believing there are white chestnut unicorns.
So the following is possible:
There are unicorns
There are white chestnut unicorns
I see it as an opportunity that absurdity is possible in
autoencoders and transformers, for many reasons,
especially from my interest in paraconsistent logics.
You cannot assume anyway that training data is
consistent. That there is no ex falso explosion in this
type of autoencoder and transformer machine learning
is a benefit rather than a curse, and somehow gives a
neat solution to many problems where ILP might
fail by design because it is too strict.
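In that paraconsistent spirit, a minimal Prolog sketch of a store of
signed literals where a contradiction stays local instead of exploding
(the unicorn facts are of course made up):

```prolog
:- dynamic pos/1.
:- dynamic neg/1.

% Inconsistent "training data".
observe :-
    assertz(pos(unicorn(white_chestnut))),
    assertz(neg(unicorn(white_chestnut))),
    assertz(pos(horse(black_beauty))).

% holds/1 only reports signed literals that were actually recorded,
% so a local contradiction does not make every formula derivable.
holds(pos(A)) :- pos(A).
holds(neg(A)) :- neg(A).

contradiction(A) :- pos(A), neg(A).

% ?- observe.
% ?- contradiction(unicorn(white_chestnut)).   % succeeds: absurdity is visible
% ?- holds(pos(horse(black_beauty))).          % succeeds: unrelated fact intact
% ?- holds(pos(anything_else)).                % fails: no ex falso explosion
```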
See also:
https://de.wikipedia.org/wiki/Geoffrey_Hinton
https://de.wikipedia.org/wiki/Demis_Hassabis
https://en.wikipedia.org/wiki/Abductive_reasoning#Abduction
Are you thinking that autoencoders
could play a bigger role in tasks like
language modeling?
**Attention Is All You Need**
Vaswani et al., 2017
https://arxiv.org/abs/1706.03762
In this work, we presented the Transformer,
the first sequence transduction model based
entirely on attention, replacing the recurrent
layers most commonly used in encoder-decoder
architectures with multi-headed self-attention.
Ok, my bad. You can of course also try a decoder-only.
Just like here in this Python code example:
**Simple PyTorch Implementation of “Grokking”**
We trained a standard decoder-only transformer (Vaswani et al., 2017) https://github.com/teddykoker/grokking
The transformer need not necessarily have an encoder and
a latent space. It can also be decoder-only.