• Prolog Education Group clueless about the AI Boom?

    From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Mon Mar 3 14:18:40 2025

    Concerning this boring nonsense:

    https://book.simply-logical.space/src/text/2_part_ii/5.3.html#

    Funny idea that anybody would be interested just now,
    in the year 2025, in things like teaching breadth-first
    search versus depth-first search, or would even be “mystified”
    by such stuff. It's extremely trivial stuff:

    Insert your favorite tree traversal pictures here.
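
    Or, in place of pictures, the whole distinction in a few
    lines of Prolog: the same agenda-based loop, differing only
    in whether children go on the front (a stack) or the back
    (a queue) of the agenda. A minimal sketch over hypothetical
    edge/2 facts, fine for trees (a visited set would be needed
    for cyclic graphs):

    % Depth-first: children are pushed on the front of the agenda.
    dfs([Goal|_], Goal).
    dfs([Node|Rest], Goal) :-
        findall(Child, edge(Node, Child), Children),
        append(Children, Rest, Agenda),
        dfs(Agenda, Goal).

    % Breadth-first: children are queued at the back of the agenda.
    bfs([Goal|_], Goal).
    bfs([Node|Rest], Goal) :-
        findall(Child, edge(Node, Child), Children),
        append(Rest, Children, Agenda),
        bfs(Agenda, Goal).

    % Example tree:
    edge(a, b). edge(a, c). edge(b, d).

    % ?- dfs([a], d).   % true
    % ?- bfs([a], d).   % true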

    It's not even artificial intelligence, nor does it have
    anything to do with mathematical logic; it rather belongs
    to computer science and discrete mathematics, which you
    get in 1st-year university courses, making it moot to
    call it “simply logical”.

    It reminds me of the idea of teaching how wax candles
    work, to dumb down students, just when light bulbs have
    been invented. If this is the outcome of the Prolog
    Education Group 2.0, then good night.

  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Mon Mar 3 14:20:14 2025

    My suspicion: teaching WalkSAT as an alternative
    to DPLL, and showing its limitations, would give
    more bang. We are currently entering an era that
    already started at the end of the 1990s, when new
    probabilistic complexity classes were defined. Many
    machine learning techniques have such an aspect as
    well, and it will only get worse with quantum computing.
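
    For concreteness, a minimal WalkSAT sketch in Prolog
    (a sketch only, assuming SWI-Prolog built-ins such as
    random/1, random_member/2 and select/3; all predicate
    names are mine, and the greedy move scores a flip by the
    total number of unsatisfied clauses rather than WalkSAT's
    exact break count):

    % Literals are pos(V) or neg(V); a clause is a list of
    % literals, a formula a list of clauses, and an assignment
    % a list of Var-Bool pairs.
    walksat(Vars, Clauses, MaxFlips, Noise, Assign) :-
        random_assign(Vars, A0),
        walk(MaxFlips, Clauses, Noise, A0, Assign).

    random_assign([], []).
    random_assign([V|Vs], [V-B|Bs]) :-
        random_member(B, [true, false]),
        random_assign(Vs, Bs).

    % Stop when the formula is satisfied; otherwise pick a random
    % unsatisfied clause and flip one of its variables. Fails
    % after MaxFlips, so a caller would restart with a fresh
    % random assignment.
    walk(_, Clauses, _, A, A) :-
        unsat_clauses(Clauses, A, []), !.
    walk(N, Clauses, Noise, A0, A) :-
        N > 0,
        unsat_clauses(Clauses, A0, Unsat),
        random_member(C, Unsat),
        pick_var(C, Clauses, A0, Noise, V),
        flip(V, A0, A1),
        N1 is N - 1,
        walk(N1, Clauses, Noise, A1, A).

    % With probability Noise make a random walk move, otherwise
    % flip the clause variable that leaves the fewest clauses
    % unsatisfied.
    pick_var(C, Clauses, A, Noise, V) :-
        (   random(R), R < Noise
        ->  random_member(L, C),
            lit_var(L, V)
        ;   findall(Cost-Var,
                ( member(L, C),
                  lit_var(L, Var),
                  flip(Var, A, A1),
                  unsat_clauses(Clauses, A1, U),
                  length(U, Cost) ),
                Costs),
            keysort(Costs, [_-V|_])
        ).

    unsat_clauses([], _, []).
    unsat_clauses([C|Cs], A, Unsat) :-
        (   member(L, C), lit_true(L, A)
        ->  unsat_clauses(Cs, A, Unsat)
        ;   Unsat = [C|Rest],
            unsat_clauses(Cs, A, Rest)
        ).

    lit_true(pos(V), A) :- memberchk(V-true, A).
    lit_true(neg(V), A) :- memberchk(V-false, A).

    lit_var(pos(V), V).
    lit_var(neg(V), V).

    flip(V, A0, [V-B1|Rest]) :-
        select(V-B0, A0, Rest),
        ( B0 = true -> B1 = false ; B1 = true ).

    % ?- walksat([x,y,z],
    %        [[pos(x),neg(y)], [pos(y),pos(z)], [neg(x),neg(z)]],
    %        1000, 0.5, A).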

    Having a grip on these things also helps in distinguishing
    when an AI acts by chance from when it departs from
    chance and shows some excelling adaptation to the
    problem domain at hand. Like here: does anybody have
    an idea what they mean by “above chance”?

    Intuitive physics understanding emerges from
    self-supervised pretraining on natural videos
    https://arxiv.org/abs/2502.11831
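
    For what it's worth, “above chance” usually means accuracy
    significantly higher than the random-guessing baseline, e.g.
    via a one-sided binomial test. A sketch (my own naming,
    assuming SWI-Prolog's numlist/3; for large N one would use
    a normal approximation or a statistics library):

    % Probability of K or more successes in N trials when each
    % trial succeeds with chance level P.
    binom_tail(N, K, P, Tail) :-
        numlist(K, N, Is),
        sum_terms(Is, N, P, 0.0, Tail).

    sum_terms([], _, _, Acc, Acc).
    sum_terms([I|Is], N, P, Acc0, Acc) :-
        choose(N, I, C),
        T is C * P**I * (1 - P)**(N - I),
        Acc1 is Acc0 + T,
        sum_terms(Is, N, P, Acc1, Acc).

    % Binomial coefficient via C(N,K) = C(N-1,K-1) * N / K.
    choose(_, 0, 1.0) :- !.
    choose(N, K, C) :-
        K > 0,
        N1 is N - 1,
        K1 is K - 1,
        choose(N1, K1, C1),
        C is C1 * N / K.

    % ?- binom_tail(100, 60, 0.5, Pv).
    % Pv = 0.0284...   % 60/100 on a binary task is above chance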

  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Mon Mar 3 18:03:11 2025

    Hi,

    Salary templates if you "Grok" ML / AI [PhDs negotiate salaries]:

    The tendency goes towards 500'000.- USD per year:
    https://x.com/chiefaioffice/status/1734329284821672270

    But I guess you need the talent to train a ChatGPT.
    That is still very rare; I have had interesting discussions
    on the internet.

    LoL

    Bye

    Company              Final Annual    Initial         Negotiated
                         Compensation    Compensation    Delta
    OpenAI               $865K           $665K           30%
    Anthropic            $855K           $855K            0%
    Inflection*          $825K           NA              NA
    Tesla                $780K           $702K           11%
    Amazon               $719K           $520K           38%
    Google Brain         $695K           $590K           17%
    TikTok               $605K           $430K           40%
    FAIR                 $556K           $480K           15%
    Google Research      $549K           $310K           77%
    Waymo                $530K           $385K           37%
    DeepMind             $515K           $452K           13%
    Bloomberg AI         $460K           $318K           44%
    Apple                $450K           $337K           33%
    Microsoft Research   $449K           $270K           66%
    Salesforce Research  $441K           $355K           24%
    Toyota Research      $410K           $370K           10%
    Twitter              $409K           $359K           13%
    NVIDIA               $390K           $340K           14%
    IBM Research         $377K           $262K           43%
    Allen Institute      $350K           $310K           12%
    Samsung Research     $285K           $240K           18%
    Hugging Face         $238K           $185K           27%

  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Fri Mar 7 23:58:10 2025

    The first deep learning breakthrough was
    AlexNet by Alex Krizhevsky, Ilya Sutskever
    and Geoffrey Hinton:

    In 2011, Geoffrey Hinton started reaching out
    to colleagues about “What do I have to do to
    convince you that neural networks are the future?”
    https://en.wikipedia.org/wiki/AlexNet

    Meanwhile ILP is still dreaming of higher-order logic:

    We pull it out of thin air. And the job that it does
    is, indeed, that it breaks up relations into
    sub-relations, or sub-routines if you prefer.

    You mean this here:

    Background knowledge (Second Order)
    -----------------------------------
    (Chain) ∃.P,Q,R ∀.x,y,z: P(x,y) ← Q(x,z), R(z,y)

    https://github.com/stassa/vanilla/tree/master/lib/poker
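
    For readers who haven't seen metarules: instantiating the
    existentially quantified P, Q, R of Chain with concrete
    predicate symbols yields an ordinary first-order clause.
    A minimal illustration (the grandparent example is mine,
    not taken from the poker library):

    % Chain with P = grandparent, Q = R = parent:
    grandparent(X, Y) :- parent(X, Z), parent(Z, Y).

    parent(abe, homer).
    parent(homer, bart).

    % ?- grandparent(abe, Who).
    % Who = bart.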

    That's too general; it doesn't address
    analogical reasoning.

  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sat Mar 8 00:00:34 2025

    You are probably aiming at the decomposition
    of an autoencoder or transformer into an encoder
    and a decoder, making the split automatically
    from within a more general ILP framework.

    The H is the bottleneck on purpose:

    relation(X, Y) :- encoder(X, H), decoder(H, Y).

    OK, you missed the point. Let's assume for the
    moment that the H is not something that happens
    accidentally through a more general learning
    algorithm, but that it is, on purpose, a design
    feature of how we want to learn. Can we incorporate
    analogical reasoning, the parallelogram?
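
    The parallelogram here presumably refers to the classic
    analogy construction over embeddings: A is to B as C is
    to D, i.e. D ≈ B - A + C (as in king - man + woman ≈
    queen). A toy sketch over plain list vectors (all names
    are mine):

    % D is B - A + C, the fourth corner of the parallelogram.
    parallelogram(A, B, C, D) :-
        vec_sub(B, A, Diff),
        vec_add(C, Diff, D).

    vec_sub([], [], []).
    vec_sub([X|Xs], [Y|Ys], [Z|Zs]) :-
        Z is X - Y,
        vec_sub(Xs, Ys, Zs).

    vec_add([], [], []).
    vec_add([X|Xs], [Y|Ys], [Z|Zs]) :-
        Z is X + Y,
        vec_add(Xs, Ys, Zs).

    % ?- parallelogram([1,0], [1,1], [3,0], D).
    % D = [3, 1].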

    Yeah, relatively simple: just add more input
    and output layers. The new parameter K indicates
    how the representation was chosen:

    relation(X, Y) :-
        similar(X, A, K),
        encoder(A, H),
        decoder(H, B),
        similar(Y, B, K).

    It's again an autoencoder, respectively a transformer,
    with a bigger latent space. Prominent additional input
    layers that work here are convolutional neural networks,
    with things like max pooling or self-attention pooling:

    relation(X, Y) :-
        encoder2(X, J),
        decoder2(J, Y).

    encoder2(X, [K|H]) :-
        similar(X, A, K),
        encoder(A, H).

    decoder2([K|H], Y) :-
        decoder(H, B),
        similar(Y, B, K).

    You can learn the decoder2/2 as a whole in your
    autoencoder and transformer learning framework,
    provided it can deal with many layers, i.e.
    if it has deep learning techniques.
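
    As a quick sanity check that the two formulations agree,
    unfolding encoder2/2 and decoder2/2 inside relation/2
    recovers exactly the earlier four-goal clause. With some
    hypothetical stub facts (all names below are mine, purely
    to exercise the clauses):

    % Toy stubs: x1 and y1 map to prototypes a1 and b1 under
    % the same key k1; a1 encodes to latent h1, which decodes
    % back to b1.
    similar(x1, a1, k1).
    similar(y1, b1, k1).
    encoder(a1, h1).
    decoder(h1, b1).

    % ?- relation(x1, Y).
    % Y = y1.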
