• WebPL is already outdated

    From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sun Aug 17 18:37:07 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    WebPL is already outdated, I guess. It doesn't
    show the versions of the other Prolog systems
    it is using. Whereas I got these results for
    the primes example in the WebPL playground:

    /* Trealla Prolog WASM */
    (23568.9ms)

    When I run the example here:

    https://php.energy/trealla.html

    I get better results:

    /* trealla-js 0.27.1 */

    ?- time(test).
    % Time elapsed 9.907s, 11263917 Inferences, 1.137 MLips

    Bye
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Mon Aug 18 14:52:50 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    Heap/Stack Prolog systems could solve some Prolog
    string problems, especially in connection with an FFI,
    but I am not showing that here. It is more a general
    design limitation of the common take of the WAM resp.
    the ZIP. The new WebPL Prolog describes itself as a
    merged Heap/Stack architecture Prolog system, and its
    accompanying paper has a reference to an academic
    work by Xining Li (1999):

    A new term representation method for Prolog
    Xining Li, 1999
    https://www.sciencedirect.com/science/article/pii/S0743106697000629

    Besides the fact that Program Sharing (PS), as it is
    called in the paper, is nothing new, WebPL also shows
    a more modern take, in that it already uses compound
    data types from Rust. Can we replicate some of the
    performance advantages of a PS system versus the more
    traditional WAM resp. ZIP based systems? Here is a
    simple test in the WebPL Playground, for WebPL without GC:

    /* WebPL NoGC */
    ?- test2(10).
    (1795.6ms)

    ?- test2(30).
    (1785.5ms)

    ?- test2(90).
    (1765.6ms)

    Then SWI-Prolog WASM as found in SWI-Tinker:

    /* SWI-Prolog WASM */
    ?- test2(10).
    (1239.3ms)

    ?- test2(30).
    (2276.1ms)

    ?- test2(90).
    (5372.3ms)

    https://webpl.whenderson.dev/

    Bye

    The test case:

    data(10, [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]).

    data(30, [30, 29, 28, 27, 26, 25, 24, 23,
       22, 21, 20, 19, 18, 17, 16, 15, 14, 13,
       12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1]).

    data(90, [90, 89, 88, 87, 86, 85, 84, 83,
       82, 81, 80, 79, 78, 77, 76, 75, 74, 73,
       72, 71, 70, 69, 68, 67, 66, 65, 64, 63,
       62, 61, 60, 59, 58, 57, 56, 55, 54, 53,
       52, 51, 50, 49, 48, 47, 46, 45, 44, 43,
       42, 41, 40, 39, 38, 37, 36, 35, 34, 33,
       32, 31, 30, 29, 28, 27, 26, 25, 24, 23,
       22, 21, 20, 19, 18, 17, 16, 15, 14, 13,
       12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1]).

    test(N) :- between(1,1000,_), data(N,_), fail.
    test(_).

    test2(N) :- between(1,1000,_), test(N), fail.
    test2(_).

    between(Lo, Lo, R) :- !, Lo = R.
    between(Lo, _, Lo).
    between(Lo, Hi, X) :- Lo2 is Lo+1, between(Lo2, Hi, X).



    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Mon Aug 18 15:06:39 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    Ok, let's run the test case on the desktop,
    and not on the web. What do we get? It's almost
    constant for Trealla Prolog as well; in WebPL
    it was perfectly constant, but here it's only
    almost constant:

    /* Trealla Prolog 2.82.14 */

    ?- time(test2(10)).
    % Time elapsed 0.188s, 3004002 Inferences, 16.014 MLips
    true.

    ?- time(test2(30)).
    % Time elapsed 0.210s, 3004002 Inferences, 14.321 MLips
    true.

    ?- time(test2(90)).
    % Time elapsed 0.228s, 3004002 Inferences, 13.147 MLips
    true.

    Scryer Prolog fails the test horribly. Which
    is amazing, since it is a Rust Prolog system
    just like WebPL. But it is too traditional in
    following the stupid WAM design:

    /* Scryer Prolog 0.9.4-599 */

    ?- time(test2(10)).
    % CPU time: 0.714s, 7_049_076 inferences
    true.

    ?- time(test2(30)).
    % CPU time: 1.284s, 7_049_099 inferences
    true.

    ?- time(test2(90)).
    % CPU time: 2.984s, 7_049_099 inferences
    true.

    Bye

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Mon Aug 18 15:42:38 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    Smarter partial strings would use Program
    Sharing. Take the invention of Scryer Prolog
    and think about it from a Program Sharing
    perspective:

    p --> "abc", q.

    With partial strings this translates to:

    p(C, B) :- C = "abc"||A, q(A, B).

    Unfortunately, straightforward program sharing
    of the partial string doesn't work anymore,
    since it is not ground:

    p(C, B) :- C = [a,b,c|A], q(A, B).

    But we could translate the DCG also to:

    p(C, B) :- '$append'([a,b,c],A,C), q(A, B).

    Where '$append'/3 is a mode (+,-,-) specialization
    of append/3, which could be natively implemented.
    The mode (+,-,-) will be more clever than the
    failed program sharing. The program sharing can
    share the string "abc", since with '$append'/3
    the DCG is basically:

    p(C, B) :- '$append'("abc",A,C), q(A, B).

    Now '$append'/3 would copy the string if A is
    unbound; this is usually the "DCG used for text
    generation" mode. But if A is bound, '$append'/3
    would not do any copying; it would actually match
    the prefix. So it gives a much better DCG for
    parsing, since this is the "DCG used for text
    parsing" mode.
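
    A bootstrap definition of the hypothetical
    '$append'/3 would just be plain append/3; a native
    implementation would specialize it for mode (+,-,-):

    '$append'([], Xs, Xs).
    '$append'([X|Xs], Ys, [X|Zs]) :-
        '$append'(Xs, Ys, Zs).

    /* parsing: the third argument is bound, the prefix is matched */
    ?- '$append'([a,b,c], A, [a,b,c,d,e]).
    A = [d,e].

    /* generation: the third argument is unbound, the prefix is copied */
    ?- '$append'([a,b,c], A, C).
    C = [a,b,c|A].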

    Bye

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Mon Aug 18 15:49:54 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    In Dogelog Player I don't need to introduce
    '$append'/3, since in this code there is
    anyway an attempt to do static shunting:

    p(C, B) :- C = [a,b,c|A], q(A, B).

    It is handled as if it were:

    p([a,b,c|A], B) :- q(A, B).

    This means [a,b,c|A] is program shared (PS) anyway,
    and a matching happens, so that we can ultimately
    omit the creation of a real Prolog variable for A.
    It will get a special placeholder that is not
    trailed. Maybe I will find a test case to illustrate
    this form of program sharing, which I have
    temporarily termed static shunting, whereas the
    shunting of the WebPL paper I would rather call
    dynamic shunting. Unfortunately WebPL does not
    support DCG parsing; the (-->)/2 clauses don't work.
    So it will take me more time to test whether there
    is something in WebPL concerning this type of
    program sharing as well, or whether it was botched.

    Bye

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sun Aug 31 23:56:56 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    Whoa! I didn't know that lousy Microsoft
    Copilot certified laptops are that fast:

    %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
    % Dogelog Player 2.1.1 for Java

    % AMD Ryzen 5 4500U
    % ?- time(test).
    % % Zeit 756 ms, GC 1 ms, Lips 9950390, Uhr 23.08.2025 02:45
    % true.

    % AMD Ryzen AI 7 350
    % ?- time(test).
    % % Zeit 378 ms, GC 1 ms, Lips 19900780, Uhr 28.08.2025 17:44
    % true.

    What happened to the Death of Moore's Law?
    But somehow memory speed, CPU - RAM and
    GPU - RAM, tripled. Possibly due to some
    Artificial Intelligence demand. And the
    bloody thing also has an NPU (Neural
    Processing Unit), nicely visible.

    Bye

    About the RAM speed: the L1, L2 and L3
    caches are bigger, so it's harder to poison
    the CPU. Also the CPU shows a revival of
    Hyper-Threading Technology (HTT), to which
    AMD gives a different name: they call it
    Simultaneous Multithreading (SMT).

    https://www.cpubenchmark.net/compare/3702vs6397/AMD-Ryzen-5-4500U-vs-AMD-Ryzen-AI-7-350

    BTW: Still ticking along with the primes.pl example:

    test :-
       len(L, 1000),
       primes(L, _).

    primes([], 1).
    primes([J|L], J) :-
       primes(L, I),
       K is I+1,
       search(L, K, J).

    search(L, I, J) :-
       mem(X, L),
       I mod X =:= 0, !,
       K is I+1,
       search(L, K, J).
    search(_, I, I).

    mem(X, [X|_]).
    mem(X, [_|Y]) :-
       mem(X, Y).

    len([], 0) :- !.
    len([_|L], N) :-
       N > 0,
       M is N-1,
       len(L, M).

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Mon Sep 1 00:45:00 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    2025 will be the last year we hear of Python.
    This is just a tears-in-your-eyes eulogy:

    Python: The Documentary | An origin story
    https://www.youtube.com/watch?v=GfH4QL4VqJ0

    The Zen of Python is very different
    from the Zen of Copilot+ . The bloody
    Copilot+ Laptop doesn't use Python

    in its Artificial Intelligence:

    AI Content Extraction
    - Python Involved? ❌ None at runtime;
      model runs in ONNX + DirectML on the NPU

    AI Image Search
    - Python Involved? ❌ None at runtime;
      on-device image feature, fully compiled

    AI Phi Silica
    - Python Involved? ❌ None at runtime;
      lightweight Phi model packaged as ONNX

    AI Semantic Analysis
    - Python Involved? ❌ None at runtime;
      text understanding done via compiled
      ONNX operators

    Bye

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Fri Sep 5 00:36:17 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    Swiss AI Apertus
    Model ID: apertus-70b-instruct
    Parameters: 70 billion
    License: Apache 2.0
    Training: 15T tokens across 1,000+ languages
    Availability: Free during Swiss AI Weeks (September 2025)

    https://platform.publicai.co/docs

    Bye

    P.S.: A chat interface is here:

    Try Apertus
    https://publicai.co/

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Fri Sep 5 01:03:55 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    Don't try this: don't ask Apertus how
    many holes an Emmentaler cheese has.

    And absolutely don't try this: ask it
    next to please answer in Schwitzerdütsch.

    Bye

    P.S.: ChatGPT can do it.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Fri Sep 19 10:01:22 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    For the LP (Linear Programming) part, it
    might be interesting to recall that SWI-Prolog
    has a corresponding library:

    A.55 library(simplex): Solve linear programming problems
    https://eu.swi-prolog.org/pldoc/man?section=simplex

    To model the constraint store, it doesn't need
    any native Prolog system support, since it uses
    DCGs for state threading. Linear programming was
    for a long time the pinnacle of mathematical
    problem solving. But some Artificial Intelligence
    methods typically go beyond the linear case and
    might also tackle non-linear problems etc.,
    making heavy use of an NPU (Neural Processing
    Unit). In May 2025 the first AI laptops arrived
    with >40 TOPS NPUs, spearheaded by Microsoft
    branding them Copilot+.
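
    As a minimal sketch of how such a simplex state is
    threaded (a toy objective and toy constraints, not
    taken from this thread; gen_state/1, constraint/3,
    maximize/3 and variable_value/3 are the library's
    documented API):

    /* toy LP: maximize 2*x + 3*y subject to x + y =< 4, x =< 2 */
    :- use_module(library(simplex)).

    toy_lp(X, Y) :-
        gen_state(S0),
        constraint([1*x, 1*y] =< 4, S0, S1),
        constraint([1*x] =< 2, S1, S2),
        maximize([2*x, 3*y], S2, S),
        variable_value(S, x, X),
        variable_value(S, y, Y).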

    Bye
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Fri Sep 19 10:10:51 2025
    From Newsgroup: comp.lang.prolog

    Hi,

It seems the LP (Linear programming)
library by SWI-Prolog has also been
ported to Scryer Prolog, using the same DCG
design as demonstrated in SWI-Prolog:

    Module simplex
    https://www.scryer.pl/simplex

What it requires from the Prolog system,
and is not covered by the ISO core standard,
are rational numbers, i.e. rdiv/2 etc., and if
you feed it with floating point numbers,

judging from the source code, it might bark
that it has no CLP(R) available to solve it. CLP(R)
could maybe be a good candidate for Copilot+
machines, but I am currently not aware

of a Copilot+ Prolog system so to speak:

About Microsoft Copilot+ PCs
https://www.wired.com/story/what-is-copilot-plus-pc/

The DCG design could make it easy for a
solver to hand a problem off to an NPU,
transparently for the end-user.
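The DCG state-threading idea itself is language-independent: each goal becomes a function from an input state to an output state, and a rule body is just their composition. A toy Python rendering of that pattern (purely illustrative, not library(simplex) itself):

```python
# Toy rendering of DCG-style state threading (illustrative only):
# each step maps a state S0 to a new state S, and a "clause body"
# is the composition of its steps, so no global store is needed.
from functools import reduce

def constraint(c):
    # Returns a step that adds one constraint to the threaded store.
    return lambda store: store + [c]

def body(*steps):
    # Compose steps left to right, like a DCG rule body.
    return lambda s0: reduce(lambda s, f: f(s), steps, s0)

problem = body(
    constraint(('x', '>=', 0)),
    constraint(('y', '>=', 0)),
    constraint(('x+y', '=<', 10)),
)
print(problem([]))  # the store after threading through all steps
```

Because the store is only ever passed along, nothing native is needed from the host system, which is exactly what makes the library portable.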

    Bye


  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Fri Sep 19 14:38:28 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    Thank god it was only coffee and not orange juice:

    Ozzy Pours The Perfect O.J.
    https://m.youtube.com/watch?v=ojQUYq21G-o


    Bye



  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Fri Sep 19 16:08:29 2025
    From Newsgroup: comp.lang.prolog

    Hi,

Since some idiots blocked me on Scryer Prolog issues,
I raise the issue here. Basically unify_with_occurs_check/2
probably uses a different implementation of unification

than the one found for (=)/2, because it doesn't scale. I find:

    /* Scryer Prolog */
    ?- test3(25).
    % CPU time: 0.001s, 57 inferences
    true.

    ?- test4(25).
    % CPU time: 2.133s, 57 inferences
    true.

The expectation would be that unify_with_occurs_check/2
just scales like it does in SWI-Prolog. In
SWI-Prolog I find:

    /* SWI-Prolog 9.3.30 */
    ?- test3(25).
    % -1 inferences, 0.000 CPU in 0.000 seconds (0% CPU, Infinite Lips)
    true.

    ?- test4(25).
    % -1 inferences, 0.000 CPU in 0.000 seconds (0% CPU, Infinite Lips)
    true.

    The test case was simply a hydra variant. Actually the
    last hydra modification posted by @kuniaki, which I
    am currently ticking along now:

    hydra(0, _) :- !.
    hydra(N, h(X, X)):- N>0, N0 is N-1, hydra(N0, X).

    hydra(0, A, A) :- !.
    hydra(N, h(X, X), A):- N>0, N0 is N-1, hydra(N0, X, A).

test3(N) :- hydra(N, X), hydra(N, Y, Y),
    time(X = Y).

test4(N) :- hydra(N, X), hydra(N, Y, Y),
    time(unify_with_occurs_check(X, Y)).

    But of course there is a cut (!) in the first rules.
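Why the hydra makes a good stress test: built with h(X, X), the term is a DAG with only N+1 distinct nodes, but it unfolds to a tree with 2^N leaves, so any traversal that ignores sharing goes exponential while a sharing-aware one stays linear. A rough Python sketch of the difference (illustrative only, not how Scryer or SWI-Prolog actually implement it):

```python
# hydra(N) builds h(...h(X, X)...) with full sharing: N+1 distinct
# nodes as a DAG, 2^N leaves when unfolded as a tree.
def hydra(n):
    x = 'leaf'  # stand-in leaf; the Prolog version ends in a variable
    for _ in range(n):
        x = ('h', x, x)  # both arguments are the *same* node
    return x

def eq_naive(s, t):
    # Tree-walking equality: revisits shared subterms, O(2^N) on hydra.
    if isinstance(s, tuple) and isinstance(t, tuple):
        return len(s) == len(t) and all(eq_naive(a, b) for a, b in zip(s, t))
    return s == t

def eq_shared(s, t, seen=None):
    # Sharing-aware equality: remember visited (id, id) pairs, O(N) on hydra.
    if seen is None:
        seen = set()
    if isinstance(s, tuple) and isinstance(t, tuple):
        key = (id(s), id(t))
        if key in seen:
            return True
        if len(s) != len(t):
            return False
        seen.add(key)
        return all(eq_shared(a, b, seen) for a, b in zip(s, t))
    return s == t

print(eq_shared(hydra(25), hydra(25)))  # True, visits ~25 node pairs
# eq_naive(hydra(25), hydra(25)) walks ~2^25 pairs instead
```

An occurs check or equality test that memoises visited pairs, like eq_shared, degrades gracefully on such terms; one that re-walks the unfolded tree, like eq_naive, shows exactly the kind of blow-up measured in test4 above.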


  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Fri Sep 19 16:18:25 2025
    From Newsgroup: comp.lang.prolog

    Hi,

Not sure whether it's a language issue or an
algorithmic issue. But I was working hard to
bring unlimited stacks to Dogelog Player,

removing the use of native stacks and introducing
some agenda data structures for certain primitive
built-ins. Now, amazingly, I get in Rust:

    /* Scryer Prolog 0.9.4-656 */
    ?- between(7,10,K), N is 4^K, test2(N), fail; true.
    % CPU time: 0.001s, 56 inferences
    % CPU time: 0.004s, 56 inferences
    % CPU time: 0.019s, 56 inferences
    % CPU time: 0.132s, 56 inferences
    true.

On the other hand, JavaScript shows me:

    /* Dogelog Player 2.1.1 / Node.js v24.6.0 */
    ?- between(7,10,K), N is 4^K, test2(N), fail; true.
    % Zeit 1 ms, GC 0 ms, Lips 15000, Uhr 19.09.2025 09:17
    % Zeit 4 ms, GC 0 ms, Lips 3750, Uhr 19.09.2025 09:17
    % Zeit 21 ms, GC 0 ms, Lips 714, Uhr 19.09.2025 09:17
    % Zeit 57 ms, GC 0 ms, Lips 263, Uhr 19.09.2025 09:17
    true.

Stunning! The test case is the same hydra as
before, now benchmarking the predicate (==)/2:

    test2(N) :- hydra(N, X), hydra(N, Y, Y), time(X == Y).

But I have to redo the tests with more iterations
to flatten the erratic behaviour of time measurement,
garbage collection and who knows what. That could give a

better picture. But I have observed since yesterday that
JavaScript easily beats Rust, when using the Bart
Demoen folklore trick inside JavaScript. One of

the big brakes was not the stack; there is practically
no difference between using a native stack or an
artificial stack based on Array(). It's more that

the slowdown was Map(), and it could be removed
by using the Bart Demoen folklore trick, as referenced
by SWI-Prolog in the source code of unify().
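For what it's worth, my reading of that folklore trick (treat this as an assumption, I have not checked SWI-Prolog's unify() in detail): instead of keeping visited term pairs in an external Map, stamp a temporary mark directly into one of the two terms, record it on a trail, and undo everything on exit. A hypothetical Python sketch with invented names:

```python
class Term:
    """Mutable term node; `mark` is scratch space for the trick."""
    __slots__ = ('functor', 'args', 'mark')
    def __init__(self, functor, *args):
        self.functor, self.args, self.mark = functor, args, None

def eq_marked(s, t, trail):
    if s is t or s.mark is t:     # pair already linked: assume equal
        return True
    if s.functor != t.functor or len(s.args) != len(t.args):
        return False
    s.mark = t                    # intrusive O(1) stamp, no hash table
    trail.append(s)               # remember it for undo
    return all(eq_marked(a, b, trail) for a, b in zip(s.args, t.args))

def eq(s, t):
    trail = []
    try:
        return eq_marked(s, t, trail)
    finally:
        for node in trail:        # un-taint the terms on the way out
            node.mark = None

def hydra(n):
    x = Term('leaf')
    for _ in range(n):
        x = Term('h', x, x)
    return x

print(eq(hydra(30), hydra(30)))   # True, linear in the DAG size
```

The price of this speed is exactly the "tainting" discussed later in the thread: the terms are temporarily mutated, so they cannot be shared with another thread during the traversal.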

    Bye



  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Fri Sep 19 18:22:23 2025
    From Newsgroup: comp.lang.prolog

    Hi,

I like the expert system description by bauhaus911:

    I used Claude code to help me create a Prolog
    program of a little expert system to manage a
    kitchen that needed to produce different dishes
    with different appliances and to be able to
    maximize revenue. -- bauhaus911

    Instead of maximizing revenue you could also
    maximize energy boost. So instead of having
    a couple of morons on SWI-Prolog discourse,

    like those that have parked their brain in the
    nowhere and are going full throttle Donald
    Trump / Kesh Patel Nazi, the system could

    indeed recommend Orange Juice instead of
    coffee. For the following brain benefits:

    - Vitamin C powerhouse: ~50–60 mg per 100 ml,
    giving a solid immune boost.

    - Quick energy: natural sugars (glucose + fructose)
    give your brain and body fast fuel.

    - Hydration: mostly water, which helps maintain
    energy and focus.

    Have Fun! LoL

    Bye





  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Fri Sep 19 18:38:59 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    You deleted like 10 posts of mine in the last
    48 hours, which tried to explain why patching
    is against "discourse".

    Even Torbjörn Lager agreed. I don't think
    you can continue your forum in this style.
And then, after you deleted a dozen posts

    of mine, I am not allowed to delete my posts?

    You are simply completely crazy!!!

    Bye

    I got the following nonsense from you:

    Jan, we’ve asked you to be less combative with
    people here, but you continue to be extremely
    aggressive towards other users of the site.
    You have very helpful things to add, but when
    you then go back and delete everything you post,
    it obviates that helpfulness.





  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Fri Sep 19 18:42:23 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    I will consult a Lawyer of mine.
    Maybe I can ask for a complete
    tear down of all my content.

    Bye






  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Thu Sep 25 01:50:01 2025
    From Newsgroup: comp.lang.prolog

    Hi,

Scryer Prolog's unify_with_occurs_check/2 might have
been fixed. I can now test the following:

    /* Scryer Prolog 0.9.4-660 */

    % ?- bench, bench, bench.
    % [...]
    % % CPU time: 0.148s, 57 inferences
    % % CPU time: 0.126s, 57 inferences
    % % CPU time: 0.214s, 58 inferences
    % % CPU time: 0.213s, 58 inferences
    % true.

    % ?- bench2, bench2, bench2.
    % [...]
    % % CPU time: 0.036s, 58 inferences
    % % CPU time: 0.042s, 58 inferences
    % % CPU time: 0.018s, 59 inferences
    % % CPU time: 0.096s, 56 inferences
    % true.

This was the test case; it includes
unify_with_occurs_check/2:

    hydra(0, _) :- !.
    hydra(N, h(X, X)) :- N > 0, N0 is N-1, hydra(N0, X).

    hydra(0, A, A) :- !.
    hydra(N, h(X, X), A) :- N > 0, N0 is N-1, hydra(N0, X, A).

bench :-
    hydra(1048576, X), hydra(1048576, Y, Y),
    time(X = Y),
    time(unify_with_occurs_check(X, Y)),
    time(X == Y),
    time(compare(_, X, Y)), fail; true.

bench2 :-
    hydra(1048576, X), hydra(1048576, Y, Y),
    time(copy_term(X-Y,_)),
    time(term_variables(X-Y,_)),
    time(\+ ground(X-Y)),
    time(acyclic_term(X-Y)),
    fail; true.

    Bye



  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Thu Sep 25 01:59:16 2025
    From Newsgroup: comp.lang.prolog

    Hi,

The fascinating result was that Jaffar unification
beats Scryer Prolog even on the JavaScript target.
Not to speak of the Java target, which also beat it.

But I rejected Jaffar unification, because it
temporarily modifies my frozen terms, which might
impede some future program sharing across

preemptive threads. So I rolled back pointer-based
Jaffar unification and went back to map-based
union-find. Overall the Map and a slightly bigger

stack incur a factor 3x slowdown. So for Java I now get:

    /* Dogelog Player 2.1.1 for Java */

    % ?- bench, bench, bench.
    % [...]
    % % Zeit 469 ms, GC 0 ms, Lips 42, Uhr 24.09.2025 20:00
    % % Zeit 318 ms, GC 0 ms, Lips 62, Uhr 24.09.2025 20:00
    % % Zeit 329 ms, GC 0 ms, Lips 60, Uhr 24.09.2025 20:00
    % % Zeit 378 ms, GC 0 ms, Lips 52, Uhr 24.09.2025 20:00
    % true.

    % ?- bench2, bench2, bench2.
    % [...]
    % % Zeit 847 ms, GC 0 ms, Lips 23, Uhr 25.09.2025 01:04
    % % Zeit 506 ms, GC 0 ms, Lips 39, Uhr 25.09.2025 01:04
    % % Zeit 186 ms, GC 0 ms, Lips 118, Uhr 25.09.2025 01:04
    % % Zeit 418 ms, GC 0 ms, Lips 35, Uhr 25.09.2025 01:04
    % true.

In the binary predicates (bench) the factor 3x is pretty
much seen. But in the unary predicates (bench2) the
factor is much higher, something like 10x - 20x. And JavaScript

doesn't help. But this might be the price to pay for
a "non-intrusive" algorithm. Another name I have for my
current take is "non-tainting" algorithms.

I should keep a closer eye on what can be done "non-intrusively",
or maybe devise an algorithm that is a mixture of "non-
intrusive" and "intrusive".

    Bye




  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Thu Sep 25 02:06:44 2025
    From Newsgroup: comp.lang.prolog

    Hi,

I also tried to measure Trealla Prolog. But the
measurements are strange, always 0.001 secs or
something. My suspicion is that Trealla Prolog

might apply "frozenness" to cyclic terms, a form
of hash consing, which gives Trealla Prolog
enough information to turn certain operations

practically into no-ops. I don't know yet how
to prove my suspicion, and don't know how to
deduce it from the source code.

That there are kind of two types of "frozen" terms,
acyclic and cyclic, emerged a few days ago in
formerly Jekejeke Prolog. I can represent it

inside the terms as null versus Variable[], in
the variable spine. But I was not yet able to
bring this feature to Dogelog Player, because

copy_term/2 does not yet attempt a "frozenness"
analysis. Frozen Prolog terms are only produced
during transpilation, consult or assert,

but not yet during copy_term/2 in Dogelog Player.
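Hash consing for acyclic terms is easy to sketch: route every term construction through an interning table, so structurally equal terms become pointer-equal and a test like (==)/2 degenerates to a single identity check. A speculative Python illustration of why such timings could look like no-ops (my assumption, not Trealla's actual implementation):

```python
# Interning table: maps (functor, child identities) to the one
# canonical term with that structure. Children are assumed to be
# already hash-consed, so identity captures structural equality.
_table = {}

def mk(functor, *args):
    key = (functor, *map(id, args))
    if key not in _table:
        _table[key] = (functor, *args)
    return _table[key]

def hydra(n):
    x = mk('leaf')
    for _ in range(n):
        x = mk('h', x, x)
    return x

a, b = hydra(20), hydra(20)
print(a is b)   # True: equality is a single pointer comparison
```

With such a scheme even a million-node hydra compares in constant time, which would explain flat 0.001s readings regardless of N.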

    Bye

    Mild Shock schrieb:
    Hi,

    The facinating result was the Jaffar Unification
    beats Scryer Prolog even on the target JavaScript.
    Not to speak of the Java target, which also beat it.

    But I rejected Jaffar Unification, because it
    temporarily modifies my frozen terms, which might
    impede some future program sharing across

    premptive threads. So I rolled back Pointer based
    Jaffar Unification, and went back to Map Based
    Union Find. Overall the Map and a slightly bigger

    stack incures a factor 3x slowdown. So for Java I get now:

    /* Dogelog Player 2.1.1 for Java */

    % ?- bench, bench, bench.
    % [...]
    % % Zeit 469 ms, GC 0 ms, Lips 42, Uhr 24.09.2025 20:00
    % % Zeit 318 ms, GC 0 ms, Lips 62, Uhr 24.09.2025 20:00
    % % Zeit 329 ms, GC 0 ms, Lips 60, Uhr 24.09.2025 20:00
    % % Zeit 378 ms, GC 0 ms, Lips 52, Uhr 24.09.2025 20:00
    % true.

    % ?- bench2, bench2, bench2.
    % [...]
    % % Zeit 847 ms, GC 0 ms, Lips 23, Uhr 25.09.2025 01:04
    % % Zeit 506 ms, GC 0 ms, Lips 39, Uhr 25.09.2025 01:04
    % % Zeit 186 ms, GC 0 ms, Lips 118, Uhr 25.09.2025 01:04
    % % Zeit 418 ms, GC 0 ms, Lips 35, Uhr 25.09.2025 01:04
    % true.

    In the binary predicates (bench) the factor 3x is pretty
    much seen. But in the unary predicates (bench2) the
    factor is much higher , something 10x - 20x. And JavaScript

    doesn't help. But this might be the price to pay for
    a "non-intrusive" algorithm. Another name I have for my
    current take is "non-tainting" algorithms.

    Should put a closer eye what could be done "non-intrusive",
    or maybe device an algorithm that is a mixture of "non-
    intrusive" and "intrucive".

    Bye

    Mild Shock schrieb:
    Hi,

    Scryer Prologs unify_with_occurs_check/2 might have
    been fixed. I can now test the following:

    /* Scryer Prolog 0.9.4-660 */

    % ?- bench, bench, bench.
    % [...]
    %    % CPU time: 0.148s, 57 inferences
    %    % CPU time: 0.126s, 57 inferences
    %    % CPU time: 0.214s, 58 inferences
    %    % CPU time: 0.213s, 58 inferences
    %    true.

    % ?- bench2, bench2, bench2.
    % [...]
    %    % CPU time: 0.036s, 58 inferences
    %    % CPU time: 0.042s, 58 inferences
    %    % CPU time: 0.018s, 59 inferences
    %    % CPU time: 0.096s, 56 inferences
    %    true.

    This was the test case, it includes
    unify_with_occurs_check/2:

    hydra(0, _) :- !.
    hydra(N, h(X, X)) :- N > 0, N0 is N-1, hydra(N0, X).

    hydra(0, A, A) :- !.
    hydra(N, h(X, X), A) :- N > 0, N0 is N-1, hydra(N0, X, A).

    bench :-
        hydra(1048576, X), hydra(1048576, Y, Y),
        time(X = Y),
        time(unify_with_occurs_check(X, Y)),
        time(X == Y),
        time(compare(_, X, Y)), fail; true.

    bench2 :-
        hydra(1048576, X), hydra(1048576, Y, Y),
        time(copy_term(X-Y,_)),
        time(term_variables(X-Y,_)),
        time(\+ ground(X-Y)),
        time(acyclic_term(X-Y)),
        fail; true.
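
    Why these hydra calls are hard: hydra(N, X) builds a term that unfolds
    to about 2^N tree nodes but has only N+1 distinct nodes as a DAG. A
    small sketch of mine in Java (not any Prolog system's actual code)
    shows how remembering already-compared node pairs keeps a structural
    comparison linear where a naive tree walk would be exponential:

    ```java
    import java.util.IdentityHashMap;

    public class HydraCompare {
        static final class Node {
            final Node left, right;          // both point to the same child
            Node(Node l, Node r) { left = l; right = r; }
        }

        // build h(X, X) nested n deep; null plays the role of the variable
        static Node hydra(int n) {
            Node x = null;
            for (int i = 0; i < n; i++) x = new Node(x, x);
            return x;
        }

        // structural equality, memoizing node pairs already being compared;
        // without the memo, eq on two separately built hydras takes 2^n steps
        static boolean eq(Node a, Node b, IdentityHashMap<Node, Node> seen) {
            if (a == b) return true;
            if (a == null || b == null) return false;
            if (seen.get(a) == b) return true;   // pair already checked
            seen.put(a, b);
            return eq(a.left, b.left, seen) && eq(a.right, b.right, seen);
        }

        public static void main(String[] args) {
            Node x = hydra(1000), y = hydra(1000);
            System.out.println(eq(x, y, new IdentityHashMap<>())); // prints true
        }
    }
    ```

    The same sharing argument explains why ground/1, term_variables/2 and
    friends can "do hydra" when they keep a visited set.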

    Bye

    Mild Shock schrieb:
    Hi,

    Since some idiots blocked me on Scryer Prolog issues,
    I raise the issue here. Basically unify_with_occurs_check/2
    probably uses a different implementation of unification

    than the one for (=)/2, because it doesn't scale. I find:

    /* Scryer Prolog */
    ?- test3(25).
        % CPU time: 0.001s, 57 inferences
        true.

    ?- test4(25).
        % CPU time: 2.133s, 57 inferences
        true.

    The expectation would be that unify_with_occurs_check/2
    scales just like it does in SWI-Prolog. In
    SWI-Prolog I find:

    /* SWI-Prolog 9.3.30 */
    ?- test3(25).
    % -1 inferences, 0.000 CPU in 0.000 seconds (0% CPU, Infinite Lips)
    true.

    ?- test4(25).
    % -1 inferences, 0.000 CPU in 0.000 seconds (0% CPU, Infinite Lips)
    true.

    The test case was simply a hydra variant. Actually the
    last hydra modification posted by @kuniaki, which I
    am currently ticking along now:

    hydra(0, _) :- !.
    hydra(N, h(X, X)):- N>0, N0 is N-1, hydra(N0, X).

    hydra(0, A, A) :- !.
    hydra(N, h(X, X), A):- N>0, N0 is N-1, hydra(N0, X, A).

    test3(N) :- hydra(N, X), hydra(N, Y, Y),
        time(X = Y).

    test4(N) :- hydra(N, X), hydra(N, Y, Y),
        time(unify_with_occurs_check(X, Y)).

    But of course there is a cut (!) in the first rules.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Thu Sep 25 02:21:16 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    If there were subcategories "acyclic"
    and "cyclic" inside the "frozen" category,
    one could indeed safely use a hybrid algorithm

    that is non-intrusive for frozen terms, and
    intrusive for non-frozen terms. Actually calling
    it hybrid is a little overkill: it would just

    stop at frozen terms. If it had the subcategories,
    the built-in acyclic_term/1 could also stop, and
    draw its result from the subcategory. This works

    already in formerly Jekejeke Prolog, but not
    yet in Dogelog Player. That the rollback also
    gave a 10x-20x slowdown for the unary

    predicates is a little annoying. Must find a compromise.
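
    The subcategory idea can be sketched as follows; a hypothetical
    illustration of mine, not the formerly Jekejeke Prolog representation:
    a frozen term carries a precomputed acyclic flag, so an acyclic check
    stops at frozen nodes and reads the answer instead of walking the
    whole, possibly shared, structure.

    ```java
    import java.util.Collections;
    import java.util.IdentityHashMap;
    import java.util.Set;

    public class FrozenTerm {
        final FrozenTerm[] args;
        boolean frozen;
        boolean frozenAcyclic;     // subcategory; valid only while frozen

        FrozenTerm(FrozenTerm... args) { this.args = args; }

        // acyclic check: O(1) at frozen nodes, path-based cycle detection
        // only in the non-frozen fringe
        static boolean acyclic(FrozenTerm t, Set<FrozenTerm> onPath) {
            if (t.frozen) return t.frozenAcyclic;   // stop at frozen terms
            if (!onPath.add(t)) return false;       // revisit on path: cycle
            for (FrozenTerm a : t.args)
                if (!acyclic(a, onPath)) return false;
            onPath.remove(t);
            return true;
        }

        public static void main(String[] args) {
            FrozenTerm leaf = new FrozenTerm();
            leaf.frozen = true;
            leaf.frozenAcyclic = true;              // decided at freeze time
            FrozenTerm top = new FrozenTerm(leaf, leaf); // fresh, not frozen
            Set<FrozenTerm> onPath =
                Collections.newSetFromMap(new IdentityHashMap<>());
            System.out.println(acyclic(top, onPath)); // prints true
        }
    }
    ```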

    Bye

    Mild Shock schrieb:
    Hi,

    I also tried to measure Trealla Prolog. But the
    measurements are strange, always 0.001 secs or
    something. My suspicion is that Trealla Prolog

    might apply "frozenness" to cyclic terms. A form
    of hash consing, which gives Trealla Prolog
    enough information to turn certain operations

    practically into no-ops. I don't know yet how
    to prove my suspicion, and don't know how to
    deduce it from the source code.

    That there are two kinds of "frozen" terms,
    acyclic and cyclic, emerged a few days ago in
    formerly Jekejeke Prolog. I can represent it

    inside the terms as null versus Variable[], in
    the variable spine. But I was not yet able to
    bring this feature to Dogelog Player. Because

    copy_term/2 does not yet attempt a "frozenness"
    analysis. Frozen Prolog terms are only produced
    during transpilation, consult or assert,

    but not yet during copy_term/2 in Dogelog Player.

    Bye

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Fri Sep 26 12:19:51 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    I was first thinking the unify_with_occurs_check/2
    bug was gone, when I tested this:

    % ?- bench, bench, bench.
    % [...]
    % % CPU time: 0.148s, 57 inferences
    % % CPU time: 0.126s, 57 inferences

    But I did the test wrongly: the preceding
    (=)/2 did bind a variable, so that unify_with_occurs_check/2
    didn't have to perform an occurs check.

    If I undo the binding of the variable by (=)/2 before
    going into the testing of unify_with_occurs_check/2,
    I get the "bug" again:

    % ?- bench, bench, bench.
    % % CPU time: 0.203s, 37 inferences
    % %%% hangs

    But since ground/1 etc. can do hydra, I suspect
    the Scryer Prolog team will sooner or later figure
    out how to do the occurs check so that it can

    also do hydra. This is the test case now:

    hydra(0, _) :- !.
    hydra(N, h(X, X)) :- N > 0, N0 is N-1, hydra(N0, X).

    hydra(0, A, A) :- !.
    hydra(N, h(X, X), A) :- N > 0, N0 is N-1, hydra(N0, X, A).

    bench :-
        hydra(1048576, X), hydra(1048576, Y, Y),
        time(\+ \+ X = Y),
        time(\+ \+ unify_with_occurs_check(X, Y)),
        time(\+ X == Y),
        time(compare(_, X, Y)), fail; true.

    Bye

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Mon Oct 13 09:49:15 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    Because GenX and later suffer from the following:

    Somehow the methods and tools to realize
    efficient DCGs in Prolog are missing. Most
    DCG attempts that one sees succumb

    to some declarative nonsense, creating
    exponentially many spurious choice points;
    you rarely find somebody mastering the art.

    Modern programmers fancy nothing more than
    throwing a set of foreign libraries at their
    Prolog system project. This is best seen in WebPL:

    LALRPOP MIT/Apache-2.0 Generate the parser https://github.com/w-henderson/WebPL/blob/main/dissertation.pdf

    So there is no aim at creating a self-hosting
    Prolog system. There is a deep distrust of
    DCGs. But why build a Prolog system that will

    possibly ultimately have DCGs, when you distrust
    DCGs? The second problem of GenX and later
    is probably that they don't know how to bootstrap

    a Prolog system B via another Prolog system A.

    Bye

    P.S.: The results of using a parser tool are
    often frustrating on the following levels:
    - No operator table
    - Directives are fixed
    - Introducing DCGs needs a rebuild

    Scryer Prolog has an operator table, but most likely
    used a parser tool some time in the project,
    or its programming templates borrow from parser tools.

    Probably the worst recent example of building a
    Prolog system, with the parsing delegated to
    Rust itself. So here we are in 2025,

    and there is not a single self-hosting Prolog
    yet, while all other programming languages such
    as Java, Go, etc. are self-hosting.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Mon Oct 13 15:09:44 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    Maybe it is with Prolog like with the dinosaurs,
    which went extinct after a meteor crash.
    All that survived were some small rodents,
    as the story goes. Their advantages:
    - Small Size
    - Burrowing Behavior
    - Omnivorous Diet
    - Reproductive Speed

    Now Scryer Prolog uses a Shift-Reduce parser under
    the hood, the small rodent. But might it possibly
    shove a heavy Tabled DCG into its end-users'
    face? So that this left recursion can be solved:

    expr --> expr + factor

    "Constraint programming" was already killed
    when ILOG was bought by IBM in 2008. ILOG's
    optimization solver, CPLEX, has its roots in
    the CHIP (Constraint Handling in Prolog)

    language, 1985 at the European Computer-Industry
    Research Centre (ECRC), initially using a Prolog
    language interface. So it is not even a French product.
    By the time ILOG became a commercial powerhouse,

    Prolog had largely disappeared from their product
    codebases. There was a transition to C++ for
    performance and industry adoption. I have the
    gut feeling that Tabled DCG is similarly dead,

    especially in the light of large language models (LLMs).
    But I cannot yet point the finger perfectly
    at the issues. Currently exploring the sad problem
    domain of this most likely dead horse.

    A problem could be the overkill of "Logic Grammars",
    which do not tolerate incorrect texts and cannot
    so easily be applied partially. Most likely one has
    to scrutinize the assumptions behind Tabled DCG, and

    review again the possible options beyond the beaten paths.

    Bye

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Wed Oct 15 02:38:34 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    I spent some time thinking about my primes.pl
    test. And came to the conclusion that it
    mainly tests the Prolog ALU. Things like

    integer successor or integer modulo. Then
    I found that Java has Math.floorMod(), which
    I wasn't using yet. And bang, the results are better:

    /* Dogelog Player 2.1.2 for Java, today */
    ?- time(test).
    % Zeit 286 ms, GC 1 ms, Lips 26302430, Uhr 15.10.2025 02:31
    true.

    Maybe the Java backend picks a CPU instruction
    for Math.floorMod() instead of executing the
    longer code sequence that is needed to correct

    rem/2 into mod/2. Who knows. I also reorganized
    the code a little bit, and eliminated an extra
    method call in all arithmetic functions, by

    inlining the arithmetic function body in the
    evaluable predicate definition code. Comparison
    to old measurements and some measurements of

    other Prolog systems:

    /* Dogelog Player 2.1.2 for Java, weeks ago */
    ?- time(test).
    % Zeit 378 ms, GC 1 ms, Lips 19900780, Uhr 28.08.2025 17:44
    true.

    /* SWI-Prolog 9.0.4 */
    ?- time(test).
    % 7,506,639 inferences, 0.363 CPU in 0.362 seconds
    (100% CPU, 20693560 Lips)
    true.

    /* Scryer Prolog 0.9.4-639 */
    ?- time(test).
    % CPU time: 0.365s, 7_517_613 inferences
    true.

    /* Trealla Prolog 2.82.23-3 */
    ?- time(test).
    % Time elapsed 0.868s, 11263917 Inferences, 12.983 MLips
    true.

    Bye

    P.S.: The code uses the hated mathematical mod/2,
    and not the cheaper rem/2 that CPUs usually have:

    test :-
        len(L, 1000),
        primes(L, _).

    primes([], 1).
    primes([J|L], J) :-
        primes(L, I),
        K is I+1,
        search(L, K, J).

    search(L, I, J) :-
        mem(X, L),
        I mod X =:= 0, !,
        K is I+1,
        search(L, K, J).
    search(_, I, I).

    mem(X, [X|_]).
    mem(X, [_|Y]) :-
        mem(X, Y).

    len([], 0) :- !.
    len([_|L], N) :-
        N > 0,
        M is N-1,
        len(L, M).
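
    The rem/2-to-mod/2 correction mentioned above, sketched in plain Java:
    the CPU-style remainder (%) takes the sign of the dividend, while the
    mathematical mod takes the sign of the divisor. The helper mod() below
    is my own illustration of the "longer code sequence"; Math.floorMod
    does the same correction in one call.

    ```java
    public class ModVsRem {
        // manual correction of rem into mod: add the divisor back
        // whenever the remainder and the divisor have opposite signs
        static int mod(int x, int y) {
            int r = x % y;
            return (r != 0 && (r ^ y) < 0) ? r + y : r;
        }

        public static void main(String[] args) {
            System.out.println(-7 % 3);                // prints -1 (rem)
            System.out.println(Math.floorMod(-7, 3));  // prints 2  (mod)
            System.out.println(mod(-7, 3));            // prints 2
        }
    }
    ```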

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Wed Oct 15 04:33:12 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    The change from 378 ms to 286 ms, around 25-30%,
    is insane. But I did both tests on a novel AI CPU,
    to be precise on an AMD Ryzen AI 7 350.

    But somehow I picked up rumors that AI CPUs now
    might do Neural Network Branch Prediction. The
    idea seems to exist in hardware at least since (2012):

    Machine learning and artificial intelligence are
    the current hype (again). In their new Ryzen
    processors, AMD advertises the Neural Net
    Prediction. It turns out this was already
    used in their older (2012) Piledriver architecture
    used for example in the AMD A10-4600M. It is also
    present in recent Samsung processors such as the
    one powering the Galaxy S7. What is it really? https://chasethedevil.github.io/post/the_neural_network_in_your_cpu/

    It can be done with Convolutional Neural Networks (CNN):

    BranchNet: A Convolutional Neural Network to
    Predict Hard-To-Predict Branches
    To this end, Tarsa et al. proposed using convolutional
    neural networks (CNNs) that are trained at
    compiletime to accurately predict branches that
    TAGE cannot. Given enough profiling coverage, CNNs
    learn input-independent branch correlations. https://microarch.org/micro53/papers/738300a118.pdf

    Interestingly, the above showcases PGO-based
    machine learning for branch predictors. No clue
    how they construct the CPU so that they can feed

    it with offline constructed neural networks for
    its own execution. Maybe an optimizer uses it?
    But I guess a more modern solution would not only

    use CNNs, but also an attention mechanism.

    Bye

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Wed Oct 15 16:04:08 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    It seems I am having problems keeping pace with
    all the new fancy toys. I wasn't able to really
    benchmark the NPU of a Desktop AI machine,

    having picked the wrong driver. Need to try again.
    What worked was benchmarking Mobile AI machines.
    I just grabbed Geekbench AI and some devices:

    USA Fab, M4:

                 sANN   hANN    qANN
    iPad CPU     4848   7947    6353
    iPad GPU     9752   11383   10051
    iPad NPU     4873   36544   *51634*

    China Fab, Snapdragon:

                 sANN   hANN    qANN
    Redmi CPU    1044   950     1723
    Redmi GPU    480    905     737
    Redmi NNAPI  205    205     469
    Redmi QNN    226    226     *10221*

    Speed-up via NPU is a factor of 10x. See the column
    qANN, which means quantized artificial neural
    networks, when NPU or QNN is picked.

    The mobile AI NPUs are optimized to use
    minimal amounts of energy and minimal amounts
    of space, squeezing (distilling) everything

    into INT8 and INT4.

    Bye

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Wed Oct 15 16:10:42 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    But not only Mobile AI and Desktop AI are making
    a broader imprint now. We might also experience
    Workstation AI, with a 3'000.- USD price tag:

    You Can't Buy This... Yet! The NVIDIA GB10 from Dell
    The New Superchip that Terrifies the Cloud! https://www.youtube.com/watch?v=x1qViw4xyVo

    So what's going on? I was asking Phind, which is
    driven by a 70B model tailored towards developers:

    Q: Is there an AI inflection point right now,
    with NPUs in mobile, desktop and workstation?

    A: Evidence of the Inflection Point

    - Mobile Leadership
    NPUs originated in smartphones
    Now becoming ubiquitous across all device types
    Enabling sophisticated AI features at consumer price points

    - Desktop Revolution
    Major manufacturers implementing NPUs across product lines
    Apple's Neural Engine integrated into M-series chips
    Qualcomm, Intel, and AMD incorporating AI accelerators

    - Workstation Transformation
    Professional-grade NPUs in mobile workstations
    Demonstrated superior performance for AI-specific tasks
    Enabling local processing of previously cloud-dependent workloads

    https://www.phind.com/search/cmgs1s6jv00023h67g5z2aaa0

    Bye

    Mild Shock schrieb:
    Hi,

    It seems I am having trouble keeping pace with
    all the new fancy toys. I wasn't able to really
    benchmark the NPU of my Desktop AI machine,

    since I picked the wrong driver. Need to try again.
    What worked was benchmarking Mobile AI machines.
    I just grabbed Geekbench AI and some devices:

    USA Fab, M4:

                   sANN     hANN     qANN
    iPad CPU       4848     7947     6353
    iPad GPU       9752     11383    10051
    iPad NPU       4873     36544    *51634*

    China Fab, Snapdragon:

                   sANN     hANN     qANN
    Redmi CPU      1044     950      1723
    Redmi GPU      480      905      737
    Redmi NNAPI    205      205      469
    Redmi QNN      226      226      *10221*

    The speed-up via the NPU is a factor of roughly 10x.
    See the qANN column, which means quantized
    artificial neural networks, when the NPU or
    QNN backend is picked.

    The mobile AI NPUs are optimized to use minimal
    amounts of energy and minimal amounts of space,
    squeezing (distilling) everything into INT8 and INT4.

    Bye

    Mild Shock schrieb:
    Hi,

    The change from 378 ms to 286 ms, around 25-30%,
    is insane. But I did both tests on a novel AI CPU,
    to be precise an AMD Ryzen AI 7 350.

    Somehow I picked up rumors that AI CPUs now
    might do neural network branch prediction. The
    idea seems to exist in hardware at least since 2012:

    Machine learning and artificial intelligence are
    the current hype (again). In their new Ryzen
    processors, AMD advertises the Neural Net
    Prediction. It turns out this was already
    used in their older (2012) Piledriver architecture,
    used for example in the AMD A10-4600M. It is also
    present in recent Samsung processors such as the
    one powering the Galaxy S7. What is it really?
    https://chasethedevil.github.io/post/the_neural_network_in_your_cpu/

    It can be done with Convolutional Neural Networks (CNNs):

    BranchNet: A Convolutional Neural Network to
    Predict Hard-To-Predict Branches
    To this end, Tarsa et al. proposed using convolutional
    neural networks (CNNs) that are trained at
    compile time to accurately predict branches that
    TAGE cannot. Given enough profiling coverage, CNNs
    learn input-independent branch correlations.
    https://microarch.org/micro53/papers/738300a118.pdf

    Interestingly, the above showcases PGO-based
    machine learning for branch predictors. I have no clue
    how they construct the CPU so that it can be fed

    with offline-constructed neural networks for
    its own execution. Maybe an optimizer uses it?
    But I guess a more modern solution would not only

    use a CNN, but also an attention mechanism.

    Bye

    Mild Shock schrieb:
    Hi,

    I spent some time thinking about my primes.pl
    test, and came to the conclusion that it
    mainly tests the Prolog ALU: things like

    integer successor or integer modulo. Then
    I found that Java has Math.floorMod(), which
    I wasn't using yet. And bang, the results are better:

    /* Dogelog Player 2.1.2 for Java, today */
    ?- time(test).
    % Zeit 286 ms, GC 1 ms, Lips 26302430, Uhr 15.10.2025 02:31
    true.

    Maybe the Java backend picks a CPU instruction
    for Math.floorMod() instead of executing the
    longer code sequence that is needed to correct

    rem/2 into mod/2. Who knows. I also reorganized
    the code a little bit, and eliminated an extra
    method call in all arithmetic functions, by

    inlining the arithmetic function body in the
    evaluable predicate definition code. Comparison
    to old measurements and some measurements of

    other Prolog systems:

    /* Dogelog Player 2.1.2 for Java, weeks ago */
    ?- time(test).
    % Zeit 378 ms, GC 1 ms, Lips 19900780, Uhr 28.08.2025 17:44
    true.

    /* SWI-Prolog 9.0.4 */
    ?- time(test).
    % 7,506,639 inferences, 0.363 CPU in 0.362 seconds
    (100% CPU, 20693560 Lips)
    true.

    /* Scryer Prolog 0.9.4-639 */
    ?- time(test).
    % CPU time: 0.365s, 7_517_613 inferences
    true.

    /* Trealla Prolog 2.82.23-3 */
    ?- time(test).
    % Time elapsed 0.868s, 11263917 Inferences, 12.983 MLips
    true.
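    The MLips figures printed by time/1 are just logical inferences divided by elapsed seconds, in millions. A quick sanity check on the two Trealla runs quoted in this thread (a sketch, not part of any Prolog system):

```java
// Recompute the MLips rates from the time/1 reports quoted above.
public class Lips {
    static double mlips(double inferences, double seconds) {
        // MLips = million logical inferences per second
        return inferences / seconds / 1e6;
    }
    public static void main(String[] args) {
        // Trealla Prolog 2.82.23-3: 11263917 inferences in 0.868 s
        System.out.printf("%.3f MLips%n", mlips(11263917, 0.868)); // close to the reported 12.983
        // trealla-js 0.27.1: the same program in 9.907 s
        System.out.printf("%.3f MLips%n", mlips(11263917, 9.907)); // matches the reported 1.137
    }
}
```

    Both reports are internally consistent, so the 13x gap between native WASM builds really is elapsed time, not a counting difference.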

    Bye

    P.S.: The code uses the hated mathematical mod/2,
    and not the cheaper rem/2 that CPUs usually provide:

    test :-
        len(L, 1000),
        primes(L, _).

    primes([], 1).
    primes([J|L], J) :-
        primes(L, I),
        K is I+1,
        search(L, K, J).

    search(L, I, J) :-
        mem(X, L),
        I mod X =:= 0, !,
        K is I+1,
        search(L, K, J).
    search(_, I, I).

    mem(X, [X|_]).
    mem(X, [_|Y]) :-
        mem(X, Y).

    len([], 0) :- !.
    len([_|L], N) :-
        N > 0,
        M is N-1,
        len(L, M).
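    The rem-to-mod correction mentioned in the message above can be sketched in Java. This is a hypothetical illustration of the longer code sequence a runtime would otherwise emit, not the actual Dogelog Player code:

```java
// Java's % is a truncated-division remainder (like CPU rem);
// ISO Prolog's mod/2 is floored. The classic correction adds the
// divisor back whenever the remainder and divisor have opposite signs.
public class ModVsRem {
    // Hand-written correction sequence replacing Math.floorMod():
    static long modViaRem(long x, long y) {
        long r = x % y;                              // CPU-style rem
        return (r != 0 && (r ^ y) < 0) ? r + y : r;  // fix sign mismatch
    }
    public static void main(String[] args) {
        long[][] cases = { {7, 3}, {-7, 3}, {7, -3}, {-7, -3} };
        for (long[] c : cases) {
            long a = Math.floorMod(c[0], c[1]);  // single library call
            long b = modViaRem(c[0], c[1]);
            System.out.printf("%d mod %d = %d (rem %d)%n",
                              c[0], c[1], a, c[0] % c[1]);
            assert a == b;
        }
    }
}
```

    When the divisor is positive, as in the primes benchmark, the two agree with rem only for non-negative dividends, which is why the correction cannot simply be dropped.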

    Mild Shock schrieb:
    Hi,

    WebPL is already outdated, I guess. It doesn't
    show the versions of the other Prolog systems
    it uses. Meanwhile, I had these results for

    the primes example in the WebPL playground:

    /* Trealla Prolog WASM */
    (23568.9ms)

    When I run the example here:

    https://php.energy/trealla.html

    I get better results:

    /* trealla-js 0.27.1 */

    ?- time(test).
    % Time elapsed 9.907s, 11263917 Inferences, 1.137 MLips

    Bye




    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sat Oct 18 15:57:36 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    Things are definitely accelerating. I really would
    like to use an AI that knows about all the news of today.
    This bloody cutoff date is so annoying.

    A further indication that AI is accelerating:

    *months, not years: Rushing GPT-6*
    In August 2025, Sam Altman dropped a bombshell:
    GPT-6 is already in development and coming sooner
    than you think. Not in two years, but
    potentially in months.
    https://www.youtube.com/watch?v=44mJb5sKji0

    Karpathy, who coined vibe coding, released this in October 2025:

    *nanochat: The best ChatGPT that $100 can buy*
    This repo is a full-stack implementation of an
    LLM like ChatGPT in a single, clean, minimal,
    hackable, dependency-lite codebase. nanochat is
    designed to run on a single 8XH100 node via
    scripts like speedrun.sh, that run the
    entire pipeline start to end.
    https://github.com/karpathy/nanochat

    Bye

    Mild Shock schrieb:
    Hi,

    Not only Mobile AI and Desktop AI are making
    a broader imprint now. We might also see
    Workstation AI, with a 3,000 USD price tag:

    You Can't Buy This... Yet! The NVIDIA GB10 from Dell
    The New Superchip that Terrifies the Cloud!
    https://www.youtube.com/watch?v=x1qViw4xyVo

    So what's going on? I asked Phind, which is
    driven by a 70B model tailored towards developers:

    Q: Is there an AI inflection point right now,
       with NPUs in mobile, desktop and workstations?

    A: Evidence of the Inflection Point

    - Mobile Leadership
      NPUs originated in smartphones
      Now becoming ubiquitous across all device types
      Enabling sophisticated AI features at consumer price points

    - Desktop Revolution
      Major manufacturers implementing NPUs across product lines
      Apple's Neural Engine integrated into M-series chips
      Qualcomm, Intel, and AMD incorporating AI accelerators

    - Workstation Transformation
      Professional-grade NPUs in mobile workstations
      Demonstrated superior performance for AI-specific tasks
      Enabling local processing of previously cloud-dependent workloads

    https://www.phind.com/search/cmgs1s6jv00023h67g5z2aaa0

    Bye

    Mild Shock schrieb:
    Hi,

    It seems I am having trouble keeping pace with
    all the new fancy toys. I wasn't able to really
    benchmark the NPU of my Desktop AI machine,

    since I picked the wrong driver. Need to try again.
    What worked was benchmarking Mobile AI machines.
    I just grabbed Geekbench AI and some devices:

    USA Fab, M4:

                   sANN     hANN     qANN
    iPad CPU       4848     7947     6353
    iPad GPU       9752     11383    10051
    iPad NPU       4873     36544    *51634*

    China Fab, Snapdragon:

                   sANN     hANN     qANN
    Redmi CPU      1044     950      1723
    Redmi GPU      480      905      737
    Redmi NNAPI    205      205      469
    Redmi QNN      226      226      *10221*

    The speed-up via the NPU is a factor of roughly 10x.
    See the qANN column, which means quantized
    artificial neural networks, when the NPU or
    QNN backend is picked.

    The mobile AI NPUs are optimized to use minimal
    amounts of energy and minimal amounts of space,
    squeezing (distilling) everything into INT8 and INT4.

    Bye

    Mild Shock schrieb:
    Hi,

    The change from 378 ms to 286 ms, around 25-30%,
    is insane. But I did both tests on a novel AI CPU,
    to be precise an AMD Ryzen AI 7 350.

    Somehow I picked up rumors that AI CPUs now
    might do neural network branch prediction. The
    idea seems to exist in hardware at least since 2012:

    Machine learning and artificial intelligence are
    the current hype (again). In their new Ryzen
    processors, AMD advertises the Neural Net
    Prediction. It turns out this was already
    used in their older (2012) Piledriver architecture,
    used for example in the AMD A10-4600M. It is also
    present in recent Samsung processors such as the
    one powering the Galaxy S7. What is it really?
    https://chasethedevil.github.io/post/the_neural_network_in_your_cpu/

    It can be done with Convolutional Neural Networks (CNNs):

    BranchNet: A Convolutional Neural Network to
    Predict Hard-To-Predict Branches
    To this end, Tarsa et al. proposed using convolutional
    neural networks (CNNs) that are trained at
    compile time to accurately predict branches that
    TAGE cannot. Given enough profiling coverage, CNNs
    learn input-independent branch correlations.
    https://microarch.org/micro53/papers/738300a118.pdf

    Interestingly, the above showcases PGO-based
    machine learning for branch predictors. I have no clue
    how they construct the CPU so that it can be fed

    with offline-constructed neural networks for
    its own execution. Maybe an optimizer uses it?
    But I guess a more modern solution would not only

    use a CNN, but also an attention mechanism.

    Bye

    Mild Shock schrieb:
    Hi,

    I spent some time thinking about my primes.pl
    test, and came to the conclusion that it
    mainly tests the Prolog ALU: things like

    integer successor or integer modulo. Then
    I found that Java has Math.floorMod(), which
    I wasn't using yet. And bang, the results are better:

    /* Dogelog Player 2.1.2 for Java, today */
    ?- time(test).
    % Zeit 286 ms, GC 1 ms, Lips 26302430, Uhr 15.10.2025 02:31
    true.

    Maybe the Java backend picks a CPU instruction
    for Math.floorMod() instead of executing the
    longer code sequence that is needed to correct

    rem/2 into mod/2. Who knows. I also reorganized
    the code a little bit, and eliminated an extra
    method call in all arithmetic functions, by

    inlining the arithmetic function body in the
    evaluable predicate definition code. Comparison
    to old measurements and some measurements of

    other Prolog systems:

    /* Dogelog Player 2.1.2 for Java, weeks ago */
    ?- time(test).
    % Zeit 378 ms, GC 1 ms, Lips 19900780, Uhr 28.08.2025 17:44
    true.

    /* SWI-Prolog 9.0.4 */
    ?- time(test).
    % 7,506,639 inferences, 0.363 CPU in 0.362 seconds
    (100% CPU, 20693560 Lips)
    true.

    /* Scryer Prolog 0.9.4-639 */
    ?- time(test).
    % CPU time: 0.365s, 7_517_613 inferences
    true.

    /* Trealla Prolog 2.82.23-3 */
    ?- time(test).
    % Time elapsed 0.868s, 11263917 Inferences, 12.983 MLips
    true.

    Bye

    P.S.: The code uses the hated mathematical mod/2,
    and not the cheaper rem/2 that CPUs usually have:

    test :-
        len(L, 1000),
        primes(L, _).

    primes([], 1).
    primes([J|L], J) :-
        primes(L, I),
        K is I+1,
        search(L, K, J).

    search(L, I, J) :-
        mem(X, L),
        I mod X =:= 0, !,
        K is I+1,
        search(L, K, J).
    search(_, I, I).

    mem(X, [X|_]).
    mem(X, [_|Y]) :-
        mem(X, Y).

    len([], 0) :- !.
    len([_|L], N) :-
        N > 0,
        M is N-1,
        len(L, M).

    Mild Shock schrieb:
    Hi,

    WebPL is already outdated, I guess. It doesn't
    show the versions of the other Prolog systems
    it uses. Meanwhile, I had these results for

    the primes example in the WebPL playground:

    /* Trealla Prolog WASM */
    (23568.9ms)

    When I run the example here:

    https://php.energy/trealla.html

    I get better results:

    /* trealla-js 0.27.1 */

    ?- time(test).
    % Time elapsed 9.907s, 11263917 Inferences, 1.137 MLips

    Bye





    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sat Oct 18 16:19:49 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    Give Julio Di Egidio the bloody money. He is
    craving 300 USD so that he can buy the
    ISO Prolog core standard. Just imagine he wanted

    to build a MiniMind. Let's put some more
    perspective on the current costs:

    This open-source project aims to train a super-small
    language model MiniMind with only 3 RMB cost and
    2 hours, starting completely from scratch. The
    MiniMind series is extremely lightweight, with the
    smallest version being 1/7000 the size of GPT-3,
    making it possible to train quickly on even the
    most ordinary personal GPUs.
    https://github.com/jingyaogong/minimind/blob/master/README_en.md

    ChatGPT tells me that most of the numbers
    are correct when you rent a GPU by the hour.
    But what about 100% ownership of a GPU for

    a year? I find this might cost 12,000 USD.
    One has to separate the platforms for inference
    from the platforms for training:

    GEX44: for AI inference
    Nvidia RTX™ 4000, 184 EUR / month

    GEX130: for AI training
    NVIDIA RTX™ 6000, 813 EUR / month
    https://www.hetzner.com/dedicated-rootserver/matrix-gpu/

    Bye

    Mild Shock schrieb:
    Hi,

    Things are definitely accelerating. I really would
    like to use an AI that knows about all the news of today.
    This bloody cutoff date is so annoying.

    A further indication that AI is accelerating:

    *months, not years: Rushing GPT-6*
    In August 2025, Sam Altman dropped a bombshell:
    GPT-6 is already in development and coming sooner
    than you think. Not in two years, but
    potentially in months.
    https://www.youtube.com/watch?v=44mJb5sKji0

    Karpathy, who coined vibe coding, released this in October 2025:

    *nanochat: The best ChatGPT that $100 can buy*
    This repo is a full-stack implementation of an
    LLM like ChatGPT in a single, clean, minimal,
    hackable, dependency-lite codebase. nanochat is
    designed to run on a single 8XH100 node via
    scripts like speedrun.sh, that run the
    entire pipeline start to end.
    https://github.com/karpathy/nanochat

    Bye

    Mild Shock schrieb:
    Hi,

    Not only Mobile AI and Desktop AI are making
    a broader imprint now. We might also see
    Workstation AI, with a 3,000 USD price tag:

    You Can't Buy This... Yet! The NVIDIA GB10 from Dell
    The New Superchip that Terrifies the Cloud!
    https://www.youtube.com/watch?v=x1qViw4xyVo

    So what's going on? I asked Phind, which is
    driven by a 70B model tailored towards developers:

    Q: Is there an AI inflection point right now,
       with NPUs in mobile, desktop and workstations?

    A: Evidence of the Inflection Point

    - Mobile Leadership
       NPUs originated in smartphones
       Now becoming ubiquitous across all device types
       Enabling sophisticated AI features at consumer price points

    - Desktop Revolution
       Major manufacturers implementing NPUs across product lines
       Apple's Neural Engine integrated into M-series chips
       Qualcomm, Intel, and AMD incorporating AI accelerators

    - Workstation Transformation
       Professional-grade NPUs in mobile workstations
       Demonstrated superior performance for AI-specific tasks
       Enabling local processing of previously cloud-dependent workloads

    https://www.phind.com/search/cmgs1s6jv00023h67g5z2aaa0

    Bye

    Mild Shock schrieb:
    Hi,

    It seems I am having trouble keeping pace with
    all the new fancy toys. I wasn't able to really
    benchmark the NPU of my Desktop AI machine,

    since I picked the wrong driver. Need to try again.
    What worked was benchmarking Mobile AI machines.
    I just grabbed Geekbench AI and some devices:

    USA Fab, M4:

                   sANN     hANN     qANN
    iPad CPU       4848     7947     6353
    iPad GPU       9752     11383    10051
    iPad NPU       4873     36544    *51634*

    China Fab, Snapdragon:

                   sANN     hANN     qANN
    Redmi CPU      1044     950      1723
    Redmi GPU      480      905      737
    Redmi NNAPI    205      205      469
    Redmi QNN      226      226      *10221*

    The speed-up via the NPU is a factor of roughly 10x.
    See the qANN column, which means quantized
    artificial neural networks, when the NPU or
    QNN backend is picked.

    The mobile AI NPUs are optimized to use minimal
    amounts of energy and minimal amounts of space,
    squeezing (distilling) everything into INT8 and INT4.

    Bye

    Mild Shock schrieb:
    Hi,

    The change from 378 ms to 286 ms, around 25-30%,
    is insane. But I did both tests on a novel AI CPU,
    to be precise an AMD Ryzen AI 7 350.

    Somehow I picked up rumors that AI CPUs now
    might do neural network branch prediction. The
    idea seems to exist in hardware at least since 2012:

    Machine learning and artificial intelligence are
    the current hype (again). In their new Ryzen
    processors, AMD advertises the Neural Net
    Prediction. It turns out this was already
    used in their older (2012) Piledriver architecture,
    used for example in the AMD A10-4600M. It is also
    present in recent Samsung processors such as the
    one powering the Galaxy S7. What is it really?
    https://chasethedevil.github.io/post/the_neural_network_in_your_cpu/

    It can be done with Convolutional Neural Networks (CNNs):

    BranchNet: A Convolutional Neural Network to
    Predict Hard-To-Predict Branches
    To this end, Tarsa et al. proposed using convolutional
    neural networks (CNNs) that are trained at
    compile time to accurately predict branches that
    TAGE cannot. Given enough profiling coverage, CNNs
    learn input-independent branch correlations.
    https://microarch.org/micro53/papers/738300a118.pdf

    Interestingly, the above showcases PGO-based
    machine learning for branch predictors. I have no clue
    how they construct the CPU so that it can be fed

    with offline-constructed neural networks for
    its own execution. Maybe an optimizer uses it?
    But I guess a more modern solution would not only

    use a CNN, but also an attention mechanism.

    Bye

    Mild Shock schrieb:
    Hi,

    I spent some time thinking about my primes.pl
    test, and came to the conclusion that it
    mainly tests the Prolog ALU: things like

    integer successor or integer modulo. Then
    I found that Java has Math.floorMod(), which
    I wasn't using yet. And bang, the results are better:

    /* Dogelog Player 2.1.2 for Java, today */
    ?- time(test).
    % Zeit 286 ms, GC 1 ms, Lips 26302430, Uhr 15.10.2025 02:31
    true.

    Maybe the Java backend picks a CPU instruction
    for Math.floorMod() instead of executing the
    longer code sequence that is needed to correct

    rem/2 into mod/2. Who knows. I also reorganized
    the code a little bit, and eliminated an extra
    method call in all arithmetic functions, by

    inlining the arithmetic function body in the
    evaluable predicate definition code. Comparison
    to old measurements and some measurements of

    other Prolog systems:

    /* Dogelog Player 2.1.2 for Java, weeks ago */
    ?- time(test).
    % Zeit 378 ms, GC 1 ms, Lips 19900780, Uhr 28.08.2025 17:44
    true.

    /* SWI-Prolog 9.0.4 */
    ?- time(test).
    % 7,506,639 inferences, 0.363 CPU in 0.362 seconds
    (100% CPU, 20693560 Lips)
    true.

    /* Scryer Prolog 0.9.4-639 */
    ?- time(test).
    % CPU time: 0.365s, 7_517_613 inferences
    true.

    /* Trealla Prolog 2.82.23-3 */
    ?- time(test).
    % Time elapsed 0.868s, 11263917 Inferences, 12.983 MLips
    true.

    Bye

    P.S.: The code uses the hated mathematical mod/2,
    and not the cheaper rem/2 that CPUs usually have:

    test :-
        len(L, 1000),
        primes(L, _).

    primes([], 1).
    primes([J|L], J) :-
        primes(L, I),
        K is I+1,
        search(L, K, J).

    search(L, I, J) :-
        mem(X, L),
        I mod X =:= 0, !,
        K is I+1,
        search(L, K, J).
    search(_, I, I).

    mem(X, [X|_]).
    mem(X, [_|Y]) :-
        mem(X, Y).

    len([], 0) :- !.
    len([_|L], N) :-
        N > 0,
        M is N-1,
        len(L, M).

    Mild Shock schrieb:
    Hi,

    WebPL is already outdated, I guess. It doesn't
    show the versions of the other Prolog systems
    it uses. Meanwhile, I had these results for

    the primes example in the WebPL playground:

    /* Trealla Prolog WASM */
    (23568.9ms)

    When I run the example here:

    https://php.energy/trealla.html

    I get better results:

    /* trealla-js 0.27.1 */

    ?- time(test).
    % Time elapsed 9.907s, 11263917 Inferences, 1.137 MLips

    Bye






    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sat Oct 18 18:59:11 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    This is probably what my AI laptop can do. The
    demo shows an AMD Ryzen AI 7 340 micro. I have
    an AMD Ryzen AI 7 350 laptop.

    100% Powered by AMD Ryzen™ AI NPU
    https://www.youtube.com/watch?v=0t8ijUPg4A0

    Only I am too stupid / too lazy to dig up the right
    drivers and install FastFlowLM. But one sees in
    the video how the NPU gets 30% - 60% occupied

    while it does the transcription of a YouTube video
    into text (via Whisper-large-v3-turbo from OpenAI).
    The demo then switches to summarize mode

    (via GPT-OSS-20B from OpenAI). And boom, the
    NPU goes to 100%!

    Bye

    P.S.: I didn't know about the OpenAI and AMD partnership;
    buying an AMD AI laptop was also not motivated by
    this development. Not sure whether it's really a big thing:

    AMD and OpenAI announce partnership
    https://openai.com/index/openai-amd-strategic-partnership/

    It might be a good thing for end users like me, if
    Edge use cases like the above become more common.
    But they still depend on models trained not on the Edge,

    but in a Data Center. The AMD Instinct product line directly
    competes with Nvidia's Tesla and Intel's Xeon Phi and Data
    Center GPU lines of machine learning and GPGPU cards.

    Mild Shock schrieb:
    Hi,

    It seems I am having trouble keeping pace with
    all the new fancy toys. I wasn't able to really
    benchmark the NPU of my Desktop AI machine,

    since I picked the wrong driver. Need to try again.
    What worked was benchmarking Mobile AI machines.
    I just grabbed Geekbench AI and some devices:

    USA Fab, M4:

                   sANN     hANN     qANN
    iPad CPU       4848     7947     6353
    iPad GPU       9752     11383    10051
    iPad NPU       4873     36544    *51634*

    China Fab, Snapdragon:

                   sANN     hANN     qANN
    Redmi CPU      1044     950      1723
    Redmi GPU      480      905      737
    Redmi NNAPI    205      205      469
    Redmi QNN      226      226      *10221*

    The speed-up via the NPU is a factor of roughly 10x.
    See the qANN column, which means quantized
    artificial neural networks, when the NPU or
    QNN backend is picked.

    The mobile AI NPUs are optimized to use minimal
    amounts of energy and minimal amounts of space,
    squeezing (distilling) everything into INT8 and INT4.

    Bye


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Tue Oct 21 00:32:55 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    Vertex AI Training is more expensive:

    "Vertex AI provides a managed training
    service that enables you to operationalize
    large scale model training. You can use
    Vertex AI to run training applications
    based on any machine learning (ML) framework
    on Google Cloud infrastructure. Vertex AI
    also has integrated support that simplifies
    the preparation process for model training
    and serving for the PyTorch, TensorFlow,
    scikit-learn, and XGBoost frameworks."
    https://cloud.google.com/products/calculator

    For 10 jobs of 10 hours each, it wants 1,000 USD
    from me. The accelerator is a TPU v3. Nevertheless,
    the cheaper option might be to use the RTX™ 6000

    for 720 hours. ChatGPT gives me this calculation:

    Scenario I: single TPU chip, the RTX 6000 wins:
    ~4.43×10^19 FLOPs versus ~2.36×10^20 FLOPs

    Scenario II: 8-chip TPU, the RTX 6000 loses:
    ~3.54×10^20 FLOPs versus ~2.36×10^20 FLOPs
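    The two scenarios can be reproduced as a back-of-the-envelope peak-FLOPs budget. The throughput numbers are my assumptions, not from the post: roughly 123 TFLOPS per TPU v3 chip (bf16) and roughly 91 TFLOPS for an RTX 6000 class card:

```java
// Rough peak-FLOPs budget over a rental period, reproducing the
// two scenarios above. Assumed peak throughputs (not from the post):
// ~123 TFLOPS per TPU v3 chip, ~91 TFLOPS for an RTX 6000 class GPU.
public class FlopsBudget {
    static double flops(double tflops, double hours) {
        return tflops * 1e12 * hours * 3600.0;  // peak FLOPs in the period
    }
    public static void main(String[] args) {
        double tpu1 = flops(123, 100);  // 10 jobs x 10 hours, single chip
        double tpu8 = 8 * tpu1;         // 8-chip TPU v3 board
        double rtx  = flops(91, 720);   // RTX 6000 rented for a month
        System.out.printf("TPU v3 x1: %.2e FLOPs%n", tpu1);  // ~4.43e19
        System.out.printf("TPU v3 x8: %.2e FLOPs%n", tpu8);  // ~3.54e20
        System.out.printf("RTX 6000:  %.2e FLOPs%n", rtx);   // ~2.36e20
    }
}
```

    Under those assumed throughputs the arithmetic matches ChatGPT's figures, so the comparison hinges entirely on whether the TPU price covers one chip or the whole board.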

    Bye

    Mild Shock schrieb:
    Hi,

    Give Julio Di Egidio the bloody money. He is
    craving 300 USD so that he can buy the
    ISO Prolog core standard. Just imagine he wanted

    to build a MiniMind. Let's put some more
    perspective on the current costs:

    This open-source project aims to train a super-small
    language model MiniMind with only 3 RMB cost and
    2 hours, starting completely from scratch. The
    MiniMind series is extremely lightweight, with the
    smallest version being 1/7000 the size of GPT-3,
    making it possible to train quickly on even the
    most ordinary personal GPUs.
    https://github.com/jingyaogong/minimind/blob/master/README_en.md

    ChatGPT tells me that most of the numbers
    are correct when you rent a GPU by the hour.
    But what about 100% ownership of a GPU for

    a year? I find this might cost 12,000 USD.
    One has to separate the platforms for inference
    from the platforms for training:

    GEX44: for AI inference
    Nvidia RTX™ 4000, 184 EUR / month

    GEX130: for AI training
    NVIDIA RTX™ 6000, 813 EUR / month
    https://www.hetzner.com/dedicated-rootserver/matrix-gpu/

    Bye

    Mild Shock schrieb:
    Hi,

    Things are definitely accelerating. I really would
    like to use an AI that knows about all the news of today.
    This bloody cutoff date is so annoying.

    A further indication that AI is accelerating:

    *months, not years: Rushing GPT-6*
    In August 2025, Sam Altman dropped a bombshell:
    GPT-6 is already in development and coming sooner
    than you think. Not in two years, but
    potentially in months.
    https://www.youtube.com/watch?v=44mJb5sKji0

    Karpathy, who coined vibe coding, released this in October 2025:

    *nanochat: The best ChatGPT that $100 can buy*
    This repo is a full-stack implementation of an
    LLM like ChatGPT in a single, clean, minimal,
    hackable, dependency-lite codebase. nanochat is
    designed to run on a single 8XH100 node via
    scripts like speedrun.sh, that run the
    entire pipeline start to end.
    https://github.com/karpathy/nanochat

    Bye

    Mild Shock schrieb:
    Hi,

    Not only Mobile AI and Desktop AI are making
    a broader imprint now. We might also see
    Workstation AI, with a 3,000 USD price tag:

    You Can't Buy This... Yet! The NVIDIA GB10 from Dell
    The New Superchip that Terrifies the Cloud!
    https://www.youtube.com/watch?v=x1qViw4xyVo

    So what's going on? I asked Phind, which is
    driven by a 70B model tailored towards developers:

    Q: Is there an AI inflection point right now,
       with NPUs in mobile, desktop and workstations?

    A: Evidence of the Inflection Point

    - Mobile Leadership
       NPUs originated in smartphones
       Now becoming ubiquitous across all device types
       Enabling sophisticated AI features at consumer price points

    - Desktop Revolution
       Major manufacturers implementing NPUs across product lines
       Apple's Neural Engine integrated into M-series chips
       Qualcomm, Intel, and AMD incorporating AI accelerators

    - Workstation Transformation
       Professional-grade NPUs in mobile workstations
       Demonstrated superior performance for AI-specific tasks
       Enabling local processing of previously cloud-dependent workloads

    https://www.phind.com/search/cmgs1s6jv00023h67g5z2aaa0

    Bye

    Mild Shock schrieb:
    Hi,

    It seems I am having trouble keeping pace with
    all the new fancy toys. I wasn't able to really
    benchmark the NPU of my Desktop AI machine,

    since I picked the wrong driver. Need to try again.
    What worked was benchmarking Mobile AI machines.
    I just grabbed Geekbench AI and some devices:

    USA Fab, M4:

                   sANN     hANN     qANN
    iPad CPU       4848     7947     6353
    iPad GPU       9752     11383    10051
    iPad NPU       4873     36544    *51634*

    China Fab, Snapdragon:

                   sANN     hANN     qANN
    Redmi CPU      1044     950      1723
    Redmi GPU      480      905      737
    Redmi NNAPI    205      205      469
    Redmi QNN      226      226      *10221*

    The speed-up via the NPU is a factor of roughly 10x.
    See the qANN column, which means quantized
    artificial neural networks, when the NPU or
    QNN backend is picked.

    The mobile AI NPUs are optimized to use minimal
    amounts of energy and minimal amounts of space,
    squeezing (distilling) everything into INT8 and INT4.

    Bye

    Mild Shock schrieb:
    Hi,

    The change from 378 ms to 286 ms, around 25-30%,
    is insane. But I did both tests on a novel AI CPU,
    to be precise an AMD Ryzen AI 7 350.

    Somehow I picked up rumors that AI CPUs now
    might do neural network branch prediction. The
    idea seems to exist in hardware at least since 2012:

    Machine learning and artificial intelligence are
    the current hype (again). In their new Ryzen
    processors, AMD advertises the Neural Net
    Prediction. It turns out this was already
    used in their older (2012) Piledriver architecture,
    used for example in the AMD A10-4600M. It is also
    present in recent Samsung processors such as the
    one powering the Galaxy S7. What is it really?
    https://chasethedevil.github.io/post/the_neural_network_in_your_cpu/

    It can be done with Convolutional Neural Networks (CNNs):

    BranchNet: A Convolutional Neural Network to
    Predict Hard-To-Predict Branches
    To this end, Tarsa et al. proposed using convolutional
    neural networks (CNNs) that are trained at
    compile time to accurately predict branches that
    TAGE cannot. Given enough profiling coverage, CNNs
    learn input-independent branch correlations.
    https://microarch.org/micro53/papers/738300a118.pdf

    Interestingly, the above showcases PGO-based
    machine learning for branch predictors. I have no clue
    how they construct the CPU so that it can be fed

    with offline-constructed neural networks for
    its own execution. Maybe an optimizer uses it?
    But I guess a more modern solution would not only

    use a CNN, but also an attention mechanism.

    Bye

    Mild Shock schrieb:
    Hi,

    I spent some time thinking about my primes.pl
    test, and came to the conclusion that it
    mainly tests the Prolog ALU: things like

    integer successor or integer modulo. Then
    I found that Java has Math.floorMod(), which
    I wasn't using yet. And bang, the results are better:

    /* Dogelog Player 2.1.2 for Java, today */
    ?- time(test).
    % Zeit 286 ms, GC 1 ms, Lips 26302430, Uhr 15.10.2025 02:31
    true.

    Maybe the Java backend picks a CPU instruction
    for Math.floorMod() instead of executing the
    longer code sequence that is needed to correct

    rem/2 into mod/2. Who knows. I also reorganized
    the code a little bit, and eliminated an extra
    method call in all arithmetic functions, by

    inlining the arithmetic function body in the
    evaluable predicate definition code. Comparison
    to old measurements and some measurements of

    other Prolog systems:

    /* Dogelog Player 2.1.2 for Java, weeks ago */
    ?- time(test).
    % Zeit 378 ms, GC 1 ms, Lips 19900780, Uhr 28.08.2025 17:44
    true.

    /* SWI-Prolog 9.0.4 */
    ?- time(test).
    % 7,506,639 inferences, 0.363 CPU in 0.362 seconds
    (100% CPU, 20693560 Lips)
    true.

    /* Scryer Prolog 0.9.4-639 */
    ?- time(test).
    % CPU time: 0.365s, 7_517_613 inferences
    true.

    /* Trealla Prolog 2.82.23-3 */
    ?- time(test).
    % Time elapsed 0.868s, 11263917 Inferences, 12.983 MLips
    true.

    Bye

    P.S.: The code uses the hated mathematical mod/2,
    and not the cheaper rem/2 that CPUs usually have:

    test :-
        len(L, 1000),
        primes(L, _).

    primes([], 1).
    primes([J|L], J) :-
        primes(L, I),
        K is I+1,
        search(L, K, J).

    search(L, I, J) :-
        mem(X, L),
        I mod X =:= 0, !,
        K is I+1,
        search(L, K, J).
    search(_, I, I).

    mem(X, [X|_]).
    mem(X, [_|Y]) :-
        mem(X, Y).

    len([], 0) :- !.
    len([_|L], N) :-
        N > 0,
        M is N-1,
        len(L, M).
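    For reference, the mod/2 versus rem/2 difference on the
    Java side can be sketched like this. Math.floorMod/2 gives
    mod/2 semantics directly, while modViaRem below is my own
    illustration of the "longer code sequence" that corrects
    rem/2 into mod/2 (it is not the Dogelog Player source):

```java
// Prolog's mod/2 takes the sign of the divisor, while Java's %
// operator (rem/2) takes the sign of the dividend. Math.floorMod
// yields mod/2 directly; modViaRem corrects rem/2 by hand.
public class ModDemo {
    // Manual correction of rem/2 into mod/2: add the divisor back
    // when the remainder is nonzero and the signs disagree.
    static int modViaRem(int i, int x) {
        int r = i % x;
        return (r != 0 && (r ^ x) < 0) ? r + x : r;
    }

    public static void main(String[] args) {
        System.out.println(-7 % 3);               // rem/2: -1
        System.out.println(Math.floorMod(-7, 3)); // mod/2: 2
        System.out.println(modViaRem(-7, 3));     // mod/2: 2
    }
}
```

    If the JIT maps Math.floorMod() to a short instruction
    sequence, it can beat the branchy correction above, which
    would fit the measured speedup.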

    Mild Shock schrieb:
    Hi,

    WebPL is already outdated, I guess. It doesn't
    show the versions of the other Prolog systems
    it is using. I had these results for

    the primes example in the WebPL playground:

    /* Trealla Prolog WASM */
    (23568.9ms)

    When I run the example here:

    https://php.energy/trealla.html

    I get better results:

    /* trealla-js 0.27.1 */

    ?- time(test).
    % Time elapsed 9.907s, 11263917 Inferences, 1.137 MLips

    Bye

    --- Synchronet 3.21a-Linux NewsLink 1.2