Concerning this boring nonsense:
https://book.simply-logical.space/src/text/2_part_ii/5.3.html#

Funny idea that anybody would be interested, just now in the year 2025, in things like teaching breadth-first search versus depth-first search, or even be "mystified" by such stuff. It's extremely trivial stuff: insert your favorite tree traversal pictures here.
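Since the post invites it: here is roughly what the cited chapter boils down to, sketched in SWI-Prolog over a made-up seven-node tree (edge/2 is hypothetical data). The only difference between the two strategies is which end of the agenda the children go on.

```prolog
% Hypothetical example tree:
%        a
%       / \
%      b   c
%     / \ / \
%    d  e f  g
edge(a, b). edge(a, c).
edge(b, d). edge(b, e).
edge(c, f). edge(c, g).

% Depth-first: children go on the FRONT of the agenda (a stack).
dfs([], []).
dfs([N|Rest], [N|Visit]) :-
    findall(C, edge(N, C), Cs),
    append(Cs, Rest, Agenda),
    dfs(Agenda, Visit).

% Breadth-first: children go on the BACK of the agenda (a queue).
bfs([], []).
bfs([N|Rest], [N|Visit]) :-
    findall(C, edge(N, C), Cs),
    append(Rest, Cs, Agenda),
    bfs(Agenda, Visit).

% ?- dfs([a], V).  % V = [a, b, d, e, c, f, g]
% ?- bfs([a], V).  % V = [a, b, c, d, e, f, g]
```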
It's not even artificial intelligence, nor does it have anything to do with mathematical logic; rather, it belongs to computer science and discrete mathematics, which you have in first-year university courses, making it moot to call it "simply logical". It reminds me of the idea of teaching how wax candles work to dumb down students, just when light bulbs have been invented. If this is the outcome of the Prolog Education Group 2.0, then good night.
My suspicion: teaching WalkSAT as an alternative to DPLL, and showing its limitations, would maybe give more bang. We are currently entering an era that already started at the end of the 1990s, when some new probabilistic complexity classes were defined. Many machine learning techniques also have such an aspect, and it will only get worse with Quantum Computing. Having a grip on these things also helps in distinguishing when an AI acts by chance, or whether it deviates from chance and shows some excelling adaptation to the problem domain at hand. Like here, anybody have an idea what they mean by "above chance"?

Intuitive physics understanding emerges from self-supervised pretraining on natural videos
https://arxiv.org/abs/2502.11831
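To make the suggestion concrete, here is a minimal WalkSAT sketch in SWI-Prolog. The clause encoding and the example formula are my own assumptions, not from any cited source: a formula is a list of clauses, a clause a list of Var-Bool literals (x-true for x, x-false for not x).

```prolog
% sat_clause(+Assign, +Clause): some literal agrees with the assignment.
sat_clause(Assign, Clause) :-
    member(V-Pol, Clause), member(V-Pol, Assign), !.

unsat_clauses(Assign, Formula, Unsat) :-
    exclude(sat_clause(Assign), Formula, Unsat).

neg(true, false).
neg(false, true).

% flip(+Var, +Assign0, -Assign): flip one variable's truth value.
flip(Var, Assign0, [Var-B|Rest]) :-
    selectchk(Var-B0, Assign0, Rest),
    neg(B0, B).

% walksat(+Formula, +Vars, +MaxFlips, -Assign)
walksat(Formula, Vars, MaxFlips, Assign) :-
    maplist(random_bool, Vars, Assign0),
    walk(MaxFlips, Formula, Assign0, Assign).

random_bool(V, V-B) :- random_member(B, [true, false]).

walk(_, Formula, Assign, Assign) :-
    unsat_clauses(Assign, Formula, []), !.   % all clauses satisfied
walk(N, Formula, Assign0, Assign) :-
    N > 0,
    unsat_clauses(Assign0, Formula, Unsat),
    random_member(C, Unsat),                 % pick a random unsat clause
    (   random(P), P < 0.5                   % noise: random flip ...
    ->  random_member(V-_, C)
    ;   best_flip(C, Formula, Assign0, V)    % ... or greedy flip
    ),
    flip(V, Assign0, Assign1),
    N1 is N - 1,
    walk(N1, Formula, Assign1, Assign).

% best_flip: the variable in C whose flip leaves fewest unsat clauses.
best_flip(Clause, Formula, Assign, Best) :-
    findall(Cost-V,
            ( member(V-_, Clause),
              flip(V, Assign, A1),
              unsat_clauses(A1, Formula, U),
              length(U, Cost) ),
            Pairs),
    msort(Pairs, [_-Best|_]).
```

The limitation shows up immediately: on an unsatisfiable formula such as (x v y), (not x v y), (not y), `walksat/4` just burns through MaxFlips and fails, whereas DPLL can actually report unsatisfiability.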
Mild Shock wrote:
Concerning this boring nonsense:
https://book.simply-logical.space/src/text/2_part_ii/5.3.html#
The H is the bottleneck on purpose:
relation(X, Y) :- encoder(X, H), decoder(H, Y).
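Read concretely: the pattern forces all information from X through H. A made-up toy instance (encoder/2 and decoder/2 here are hypothetical, compressing a word down to its length):

```prolog
% Hypothetical instance of the bottleneck pattern: H carries
% strictly less information than X (just the word's length).
encoder(word(W), Len) :- atom_length(W, Len).
decoder(Len, bar(Cs)) :- length(Cs, Len), maplist(=(*), Cs).

relation(X, Y) :- encoder(X, H), decoder(H, Y).

% ?- relation(word(hello), B).
% B = bar([*,*,*,*,*]).
```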
The first deep learning breakthrough was
AlexNet by Alex Krizhevsky, Ilya Sutskever
and Geoffrey Hinton:
In 2011, Geoffrey Hinton started reaching out
to colleagues about “What do I have to do to
convince you that neural networks are the future?” https://en.wikipedia.org/wiki/AlexNet
Meanwhile ILP is still dreaming of higher-order logic:
We pull it out of thin air. And the job that does
is, indeed, that it breaks up relations into
sub-relations or sub-routines, if you prefer.
You mean this here:
Background knowledge (Second Order)
-----------------------------------
(Chain) ∃P,Q,R ∀x,y,z: P(x,y) ← Q(x,z), R(z,y)
https://github.com/stassa/vanilla/tree/master/lib/poker
That's too general; it doesn't address analogical reasoning.
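For readers who have not met metarules: the Chain template's second-order variables P, Q, R get bound to concrete predicate symbols during learning. A textbook first-order instance, with P = grandparent and Q = R = parent (the family facts below are made up):

```prolog
% Chain metarule P(x,y) <- Q(x,z), R(z,y),
% instantiated with P = grandparent, Q = R = parent.
parent(abe, homer).
parent(homer, bart).

grandparent(X, Y) :- parent(X, Z), parent(Z, Y).

% ?- grandparent(abe, G).
% G = bart.
```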