• Re: 50 Years of Prolog Nonsense

    From Mostowski Collapse@bursejan@gmail.com to comp.lang.prolog on Sun Mar 12 09:12:34 2023
    From Newsgroup: comp.lang.prolog

    Wasn't sure whether this works:

} else if (count > GC_MAX_TRAIL) {
    gc();
    if (count > GC_MAX_TRAIL)
        throw make_error(new Compound("system_error", ["stack_overflow"]));
}

    Seems fine:

    len([], N, N).
    len([_|L], N, M) :- H is N+1, (true; fail), len(L, H, M).

    ?- X = [_|X], len(X, 0, N).
    Error: system_error(stack_overflow)
    user:12
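The same idea, turning runaway recursion into a catchable error instead of a hard kill, can be sketched in Python (my own illustration, not any Prolog system's code): a cyclic cons cell plays the role of X = [_|X], and Python's recursion limit plays the role of GC_MAX_TRAIL.

```python
import sys

def length(cell, n=0):
    """Count list cells, like len/3; a cyclic list recurses forever."""
    if cell is None:
        return n
    return length(cell[1], n + 1)

# a finite list [a, b, c] as nested cons cells
finite = ["a", ["b", ["c", None]]]

# the cyclic list X = [_|X]
cyclic = ["_", None]
cyclic[1] = cyclic

sys.setrecursionlimit(10000)
try:
    length(cyclic)
except RecursionError:
    # analogous to throwing system_error(stack_overflow) instead of dying
    print("system_error(stack_overflow)")
```

The point being: the runtime notices the resource overflow and surfaces it as an ordinary, catchable error.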
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mostowski Collapse@bursejan@gmail.com to comp.lang.prolog on Sun Mar 12 09:14:51 2023

Scryer Prolog doesn't have this user-friendliness,
especially for users who like to toy with infinite loops.
    $ target/release/scryer-prolog -v
    "v0.9.1-194-gecd77f75"
    $ target/release/scryer-prolog
    ?- [user].
    len([], N, N).
    len([_|L], N, M) :- H is N+1, (true; fail), len(L, H, M).
    ?- X = [_|X], len(X, 0, N).
    Killed
I guess there is also no Prolog flag for a stack limit?
  • From Mostowski Collapse@bursejan@gmail.com to comp.lang.prolog on Tue Mar 14 17:57:51 2023

I guess Scryer Prolog's argument indexing could be
improved. Take the computation of Münchhausen numbers:

/* Scryer Prolog 0.9.1-194 */
    ?- time((canonball, munchhausen(_), fail; true)).
    % CPU time: 49.769s
    true.

    The bottleneck is really the cache computation, which
    uses assertz/1:

    ?- time(canonball).
    % CPU time: 50.158s
    true.

Once the cache is in place, it's fine:

    ?- time((munchhausen(R), write(R), nl, fail; true)).
    0
    1
    3435
    438579088
    % CPU time: 0.280s
    true.

    BTW: This is the Prolog text:

canonball :-
    retractall(cache(_,_)),
    between(0, 99999, N), map(N, Y), C is Y-N,
    assertz(cache(C, N)), fail; true.

munchhausen(R) :-
    between(0, 99999, M), map(M, X), B is 100000*M-X,
    cache(B, N), R is 100000*M+N.

map(0, X) :- !, X = 0.
map(N, X) :-
    M is N//10,
    map(M, Y),
    D is N mod 10,
    (D = 0 -> X=Y; X is Y+D^D).
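For cross-checking the results, here is a sketch in Python mirroring the meet-in-the-middle cache above (my own translation, not the benchmarked code): a Münchhausen number equals the sum of d^d over its digits, with a zero digit contributing 0, and the search splits a 10-digit candidate into two 5-digit halves.

```python
def digit_power_sum(n):
    # Python analogue of map/2: sum of d^d over the digits of n,
    # where a digit 0 contributes nothing
    s = 0
    while True:
        n, d = divmod(n, 10)
        if d != 0:
            s += d ** d
        if n == 0:
            return s

def munchhausen(limit=100000):
    # canonball analogue: cache map(N) - N for the low five digits N
    cache = {}
    for n in range(limit):
        cache.setdefault(digit_power_sum(n) - n, []).append(n)
    # munchhausen analogue: for the high half M we need
    # map(N) - N == limit*M - map(M)
    results = []
    for m in range(limit):
        b = limit * m - digit_power_sum(m)
        for n in cache.get(b, []):
            results.append(limit * m + n)
    return sorted(results)

print(munchhausen())  # -> [0, 1, 3435, 438579088]
```

The split is exact, since the zero-padded low half contributes the same digit powers as N itself, so the cache lookup finds exactly the numbers printed above.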
  • From Mostowski Collapse@janburse@fastmail.fm to comp.lang.prolog on Wed Mar 15 02:01:59 2023

    Other Prolog systems fare much better:

    /* SWI-Prolog 9.1.4 */
    ?- time((canonball, munchhausen(_), fail; true)).
% 4,633,344 inferences, 0.594 CPU in 0.589 seconds (101% CPU, 7803527 Lips)
true.

    /* Trealla Prolog 2.13.10 */
    ?- time((canonball, munchhausen(_), fail; true)).
% Time elapsed 0.815s, 5888901 Inferences, 7.222 MLips
    true.

Even my own new Prolog system, which does the assertz/1
clause compilation in Prolog itself, is faster than
Scryer Prolog, though not as fast as the other ones:

    /* Dogelog Player 1.0.5 */
    ?- time((canonball, munchhausen(_), fail; true)).
    % Time 6024 ms, gc 15 ms, 1966296 lips
    true.


  • From Mostowski Collapse@janburse@fastmail.fm to comp.lang.prolog on Wed Mar 15 02:09:55 2023

    ECLiPSe Prolog even performs worse than
    Scryer Prolog. I get this timing:

    ?- canonball, munchhausen(_), fail; true.
    Yes (144.27s cpu)

Woa! It seems to be difficult to find the
balance between good code generation for
static predicates and still performant
handling of dynamic predicates.

Maybe I should add this test case to
a new test suite. This could draw a new
picture of various Prolog systems.

  • From Mostowski Collapse@bursejan@gmail.com to comp.lang.prolog on Tue Mar 28 06:07:23 2023

Hurray, we can leave the lexical order discussion behind us. There
is a more fundamental flaw in the compare/3 implementation.

    /* Scryer Prolog 0.9.1-207 and SWI-Prolog 9.1.7 */
    ?- X = X-0-9-7-6-5-4-3-2-1, Y = Y-7-5-8-2-4-1, X @< Y.
    true.

    ?- H = H-9-7-6-5-4-3-2-1-0, Z = H-9-7-6-5-4-3-2-1,
    Y = Y-7-5-8-2-4-1, Z @< Y.
    false.

    But X and Z are the same ground terms:

    ?- X = X-0-9-7-6-5-4-3-2-1, H = H-9-7-6-5-4-3-2-1-0,
    Z = H-9-7-6-5-4-3-2-1, X == Z.
    true.

So there is a violation of substitution of equals for equals,
in that X == Z and X @< Y do not imply Z @< Y.
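How X == Z can hold for two differently constructed cyclic terms is easiest to see with a coinductive equality check: assume a pair of nodes equal once it is visited, and fail only when an actual difference shows up. A minimal Python sketch of this standard bisimulation idea (my own illustration, not any system's compare/3):

```python
class T:
    """A term node: functor name plus mutable argument list (cycles allowed)."""
    def __init__(self, name, args=()):
        self.name, self.args = name, list(args)

def equal(a, b, seen=None):
    # coinductive ==: assume a visited pair equal, then look for a
    # difference; rational trees make this terminate
    if seen is None:
        seen = set()
    if (id(a), id(b)) in seen:
        return True
    seen.add((id(a), id(b)))
    if a.name != b.name or len(a.args) != len(b.args):
        return False
    return all(equal(x, y, seen) for x, y in zip(a.args, b.args))

def cyclic_chain(digits):
    # build V = V-d1-d2-...-dn: '-' nodes nested to the left, with the
    # innermost left argument tied back to the outermost node
    hole = T("hole")
    cur, innermost = hole, None
    for d in digits:
        cur = T("-", [cur, T(d)])
        if innermost is None:
            innermost = cur
    innermost.args[0] = cur  # close the cycle
    return cur

# X = X-0-9-7-6-5-4-3-2-1
X = cyclic_chain([0, 9, 7, 6, 5, 4, 3, 2, 1])
# H = H-9-7-6-5-4-3-2-1-0, Z = H-9-7-6-5-4-3-2-1
H = cyclic_chain([9, 7, 6, 5, 4, 3, 2, 1, 0])
Z = H
for d in [9, 7, 6, 5, 4, 3, 2, 1]:
    Z = T("-", [Z, T(d)])
```

With this check X and Z come out equal, since both unfold to the same infinite tree, which is exactly why X @< Y must agree with Z @< Y.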
  • From Mostowski Collapse@bursejan@gmail.com to comp.lang.prolog on Tue Mar 28 06:30:55 2023

An alternative name for substitution of equals for equals
is the indiscernibility of identicals half of Leibniz's law:

A(s) & s = t => A(t)

https://en.wikipedia.org/wiki/Identity_of_indiscernibles

This is currently violated by SWI-Prolog and Scryer Prolog.
  • From Mostowski Collapse@bursejan@gmail.com to comp.lang.prolog on Tue Mar 28 10:04:13 2023

I also don't find anything like a compare in neo4j. It's even the
case that the database stumbles over cycles in general; an article
from 2019 reports a performance penalty.

    So if a Prolog system could do better and also
    offer a compare, that would be really great news!

Avoid cycles in Cypher queries
https://graphaware.com/neo4j/2019/04/26/avoid-cycles-in-cypher-queries.html
  • From Mostowski Collapse@bursejan@gmail.com to comp.lang.prolog on Tue Mar 28 10:05:03 2023

I think Kuniaki Mukai already mentioned that we can order regular
expressions? So we should also be able to order graphs, potentially
with cycles, since a graph can be represented by its Kleene form as
a regular expression. The bug here in SWI-Prolog and Scryer Prolog is
related to two different regular expressions for the same thing. The
period (_) of a rational number is the star operator _* in a Kleene
algebra. To construct the test case where SWI-Prolog and Scryer Prolog
stumbled, I used two different regular expressions for the same
rational number. So I guess this little bug can be cheaply fixed?
Or can it not?

10/81 = 0.(123456790) = 0.12345679(012345679)

Kleene form:
https://en.wikipedia.org/wiki/Kleene%27s_algorithm
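The two expansions of 10/81 can be checked mechanically. A small Python sketch (my own, assuming the denominator is coprime to 10 so the expansion is purely periodic) computes the repetend by tracking long-division remainders:

```python
def repetend(p, q):
    """Digits of the repeating block of p/q, for 0 < p < q and
    gcd(q, 10) == 1 (purely periodic expansion)."""
    seen = {}
    digits = []
    r = p % q
    while r not in seen:
        # remember where each remainder first occurred; a repeated
        # remainder means the digits cycle from that point on
        seen[r] = len(digits)
        r *= 10
        digits.append(r // q)
        r %= q
    return "".join(map(str, digits[seen[r]:]))

print(repetend(10, 81))  # -> 123456790
```

So 0.(123456790) and 0.12345679(012345679) denote the same rational, just as two different regular expressions can denote the same language.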
  • From Mostowski Collapse@bursejan@gmail.com to comp.lang.prolog on Sun Apr 2 15:10:45 2023

The discussion was about a transitivity violation. Now I have also
found a counterexample for Scryer Prolog. Interestingly, the triple
is ok in SWI-Prolog; it is only broken in Scryer Prolog.

    /* Scryer Prolog 0.9.1-209 */
    ?- A = s(s(A, _), A),
    B = s(B, 0),
    C = s(_S1, _), % where
    _S1 = s(_S1, 1), A @< B, B @< C, \+ A @< C.
    A = s(s(A,_A),A), B = s(B,0), C = s(s(_S1,1),_B), _S1 = s(_S1,1).
  • From Mostowski Collapse@janburse@fastmail.fm to comp.lang.prolog on Sat May 20 14:09:23 2023

    Panic on the Titanic? This paper here
    references s(CASP) and LLM:

Prolog: Past, Present, and Future
https://personal.utdallas.edu/~gupta/prolog-next-50-years.pdf

But when I look up the reference, it's
just some to-appear thingy:

    A. Rajasekharan, Y. Zeng, P. Padalkar, and G. Gupta.
    Reliable Natural Language Understanding with Large
    Language Models and Answer Set Programming. Preprint
    arXiv:2302.03780; to appear in Proc. ICLP’23 (Tech. Comm.) 2023. https://arxiv.org/abs/2302.03780

    LoL

Well, it's never too late to jump on a
bandwagon, even if some bones might crack.

    Bye

    Mostowski Collapse schrieb:
What would make sense is an ISO core standard working
group that would draft these stream creation properties:

    - bom(Bool)
    Specify detecting or writing a BOM.
    - encoding(Atom)
    Specify a file encoding.

After all it is already 2022, and we have 50 years of Prolog
behind us. But can we be sure that Prolog texts are
exchangeable if they use Unicode code points?

    What if a UTF-16 file, handy for CJK, comes along?
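A bom(Bool)/encoding(Atom) style open option could be backed by a small sniffing step. A Python sketch of the idea (a hypothetical helper of my own, not from any Prolog system):

```python
import codecs

def detect_bom(raw: bytes):
    """Guess (encoding, bom_length) from a leading byte-order mark;
    fall back to UTF-8 with no BOM."""
    marks = [
        (codecs.BOM_UTF8, "utf-8-sig"),
        (codecs.BOM_UTF16_LE, "utf-16-le"),
        (codecs.BOM_UTF16_BE, "utf-16-be"),
    ]
    for bom, enc in marks:
        if raw.startswith(bom):
            return enc, len(bom)
    return "utf-8", 0

# a UTF-16 LE Prolog text, handy for CJK, opened transparently
raw = codecs.BOM_UTF16_LE + "p(a).".encode("utf-16-le")
enc, skip = detect_bom(raw)
text = raw[skip:].decode(enc)
```

With bom(true) the system would run such a sniffer and pick the encoding; with an explicit encoding(Atom) it would skip the guesswork.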

    Mostowski Collapse schrieb am Freitag, 10. Juni 2022 um 14:54:07 UTC+2:
It's 2022 and Prolog is among the top 20

    TIOBE Index for June 2022
    https://www.tiobe.com/tiobe-index/

    Woa!

  • From Mostowski Collapse@bursejan@gmail.com to comp.lang.prolog on Sat May 20 05:46:30 2023

Knock, knock, any idea what's going on at OpenAI?
Will computers communicate with each other via
Prolog Goals, Telescript Agents, SPARQL Queries?

What about "natural language" with tons of context
as the interfacing currency between Computers and
Humans, and between Computers and Computers?

Interesting paper here, with modest LM size goals:
    "[...]
    Although the second path is simpler and apparently
    capable of earlier realization, it has been relatively neglected.
    Fredkin's trie memory provides a promising paradigm.
    We may in due course see a serious effort to develop
    computer programs that can be connected together
    like the words and phrases of speech to do whatever
    computation or control is required at the moment. The
    consideration that holds back such an effort, apparently,
is that the effort would produce nothing that would be of
    great value in the context of existing computers. It would
    be unrewarding to develop the language before there are
    any computing machines capable of responding meaningfully to it.
    [...]
    For real-time interaction on a truly symbiotic level, however,
    a vocabulary of about 2000 words, e.g. 1000 words of
    something like basic English and 1000 technical terms,
    would probably be required. That constitutes a challenging
    problem. In the consensus of acousticians and linguists,
    construction of a recognizer of 2000 words cannot be
    accomplished now. However, there are several organizations
that would happily undertake to develop an automatic recognizer
    for such a vocabulary on a five-year basis.
    [...]"
    LICKLIDER, J. C. R. 1960. Man-computer symbiosis.
IRE Transactions on Human Factors in Electronics. HFE-1: 4-11, (March 1).
http://worrydream.com/refs/Licklider%20-%20Man-Computer%20Symbiosis.pdf
  • From Mostowski Collapse@bursejan@gmail.com to comp.lang.prolog on Sat May 20 05:48:53 2023

    Corr.: Typo
    Interesting paper here, with modest LM size goals:
  • From Mostowski Collapse@bursejan@gmail.com to comp.lang.prolog on Sat May 20 06:16:45 2023


ChatGPT uses something like a 50'000-token vocabulary,
or a much larger one for more recent models. It also has
mechanisms to handle out-of-vocabulary (OOV) words.
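The OOV handling works roughly by subword segmentation: an unknown word is split into known vocabulary pieces, in the worst case single characters. A toy Python sketch (greedy longest-match, a simplification of the BPE-style tokenizers actually used):

```python
def tokenize(word, vocab):
    """Split word into known subword pieces, longest match first;
    unknown spans degrade to single characters, so nothing is ever
    truly out-of-vocabulary."""
    pieces, i = [], 0
    while i < len(word):
        # try the longest substring starting at i that is in the vocab
        for j in range(len(word), i, -1):
            if word[i:j] in vocab:
                pieces.append(word[i:j])
                i = j
                break
        else:
            pieces.append(word[i])  # fall back to a single character
            i += 1
    return pieces
```

Real tokenizers learn the vocabulary from data and merge byte pairs, but the fallback-to-smaller-pieces behaviour is the same.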
  • From Mostowski Collapse@bursejan@gmail.com to comp.lang.prolog on Thu May 25 08:48:54 2023

If they would increase the prize money from:

The winner receives a certificate and cash support of up to 2,000 Euros
https://logicprogramming.org/the-alp-alain-colmerauer-prize/

to, for example, 500’000 €, this could help the recipient’s enterprise or
pension. Maybe they can fork the prize into a “lifetime achievement award”,
besides some “recent practical accomplishments”.
  • From Mostowski Collapse@bursejan@gmail.com to comp.lang.prolog on Thu May 25 08:53:18 2023


    I am 100% serious. Just knock on the door of a few
    crypto billionaires. They take it from the confiture jar.
    LoL
  • From Mostowski Collapse@bursejan@gmail.com to comp.lang.prolog on Thu May 25 09:02:20 2023

    Oopsie, this guy possibly doesn't qualify anymore:
    18. Sam Bankman-Fried
Net worth: Estimated at less than $10 million (down from $24 billion)
https://www.forbes.com/sites/johnhyatt/2023/04/07/bitcoin-crypto-billionaires-lost-110-billion-in-past-year
  • From Mild Shock@bursejan@gmail.com to comp.lang.prolog on Sat May 27 04:51:08 2023

More knock, knock. I guess the future of the internet
is indeed a collection of books that talk to each other!
Who had this vision again? I don't remember.

Ok, let some AI, like ChatGPT, do the
low-level plumbing. Interesting paper BTW:
    "Supporting a web scale collection of potentially millions
    of changing APIs requires rethinking our approach to how
we integrate tools. It is no longer possible to describe
    the full set of APIs in a single context.
    Many of the APIs will have overlapping functionality with
    nuanced limitations and constraints. Simply evaluating
    LLMs in this new setting requires new benchmarks.
    In this paper, we explore the use of self-instruct fine-tuning
    and retrieval to enable LLMs to accurately select from
a large, overlapping, and changing set of tools expressed
    using their APIs and API documentation.
We construct APIBench, a large corpus of APIs with
    complex and often overlapping functionality by scraping
    ML APIs (models) from public model hubs."
    Gorilla: Large Language Model Connected with Massive APIs https://arxiv.org/pdf/2305.15334.pdf
    Mostowski Collapse schrieb am Samstag, 20. Mai 2023 um 14:46:32 UTC+2:
    Knock, Knock, any idea whats going on via OpenAI?
    Will computers communicate with each other via
    Prolog Goals, Telescript Agents, SPARQL Queries?

    What about "natural language" with tons of context
    as the interfacing currency between Computers and
    Humans, and between Computers and Computers?

    Interesting paper here, with modes LM size goals:

    "[...]
    Although the second path is simpler and apparently
    capable of earlier realization, it has been relatively neglected.
    Fredkin's trie memory provides a promising paradigm.
    We may in due course see a serious effort to develop
    computer programs that can be connected together
    like the words and phrases of speech to do whatever
    computation or control is required at the moment. The
    consideration that holds back such an effort, apparently,
    is that the effort would produce nothing that, would be of
    great value in the context of existing computers. It would
    be unrewarding to develop the language before there are
    any computing machines capable of responding meaningfully to it.
    [...]
    For real-time interaction on a truly symbiotic level, however,
    a vocabulary of about 2000 words, e.g. 1000 words of
    something like basic English and 1000 technical terms,
    would probably be required. That constitutes a challenging
    problem. In the consensus of acousticians and linguists,
    construction of a recognizer of 2000 words cannot be
    accomplished now. However, there are several organizations
    that would happily undertake to develop and automatie recognizer
    for such a vocabulary on a five-year basis.
    [...]"

    LICKLIDER, J. C. R. 1960. Man-computer symbiosis.
    IRE Transactions on Human Factors in Electronics. HFE-1: 4-11, (March 1). http://worrydream.com/refs/Licklider%20-%20Man-Computer%20Symbiosis.pdf Mostowski Collapse schrieb am Samstag, 20. Mai 2023 um 14:09:26 UTC+2:
    Panic on the Titanic? This paper here
    references s(CASP) and LLM:

    Prolog: Past, Present, and Future https://personal.utdallas.edu/~gupta/prolog-next-50-years.pdf

    But when I lookup the reference, its
    just some to appear thingy:

    A. Rajasekharan, Y. Zeng, P. Padalkar, and G. Gupta.
    Reliable Natural Language Understanding with Large
    Language Models and Answer Set Programming. Preprint
    arXiv:2302.03780; to appear in Proc. ICLP’23 (Tech. Comm.) 2023. https://arxiv.org/abs/2302.03780

    LoL

    Well its never too late to jump on a
    Bandwagon, even if some bones might crash.

    Bye

    Mostowski Collapse schrieb:
    What would make sense, is a ISO core standard working
    group, that would draft these stream creation properties:

    - bom(Bool)
    Specify detecting or writing a BOM.
    - encoding(Atom)
    Specify a file encoding.

    After all, it is already 2022, after 50 years of Prolog. But
    can we be sure that Prolog texts are exchangeable if
    they use Unicode code points?

    What if a UTF-16 file, handy for CJK, comes along?
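
    For comparison, SWI-Prolog's open/4 already accepts options of
    roughly this shape (a sketch of SWI's current API, not a ratified
    ISO text; 'hello.pl' is a placeholder file name):

    ```prolog
    % Open a UTF-8 file, consuming a leading BOM if present.
    % Option names follow SWI-Prolog's open/4; an ISO draft
    % might well standardize the same shape.
    ?- open('hello.pl', read, Stream, [encoding(utf8), bom(true)]),
       read_term(Stream, Term, []),
       close(Stream).
    ```

    A UTF-16 file would then just be encoding(utf16le) or
    encoding(utf16be) plus bom(true), instead of a new stream type.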

    Mostowski Collapse wrote on Friday, June 10, 2022 at 14:54:07 UTC+2:
    It's 2022 and Prolog is among the top 20

    TIOBE Index for June 2022
    https://www.tiobe.com/tiobe-index/

    Woa!
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@bursejan@gmail.com to comp.lang.prolog on Thu Jun 1 14:10:32 2023
    From Newsgroup: comp.lang.prolog


    June 2023 update: It might be the case that ChatGPT has improved
    in logic. Here it even does modal logic, and you can ask it to produce
    proofs without LEM. ChatGPT does the following tasks:
    Here’s how you can translate the proof into natural deduction:
    Here’s an alternative proof that does not rely on LEM:
    Here’s the translation of the proof into Fitch-style natural deduction:
    Here’s the translation of the proof into Gentzen’s tree-style natural deduction:
    Here’s the translation of the proof into sequent-style natural deduction:
    https://chat.openai.com/share/79ae4f02-fd07-4786-800b-305bc9eed143
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@bursejan@gmail.com to comp.lang.prolog on Thu Jun 1 14:11:48 2023
    From Newsgroup: comp.lang.prolog


    Today I had, for some minutes, a strong feeling
    of obsolescence, and was even imagining that these could
    be my last days of writing "program code".
    This happened after I saw ChatGPT doing logic.
    Although I had been reading about "Low Code / No Code"
    for a while already. So which profession gets hit first?
    Profiles of the Future: An Inquiry into the Limits of the Possible
    Arthur C. Clarke - 1962, Chapter 18: The Obsolescence of Man
    https://archive.org/details/profilesoffuture00clar/page/222/mode/2up
    Arthur C. Clarke talks A Space Odyssey and artificial intelligence, 1968
    https://www.youtube.com/watch?v=zNJbUYD-pfo
    Mild Shock wrote on Thursday, June 1, 2023 at 23:10:34 UTC+2:
    June 2023 update: It might be the case that ChatGPT has improved
    in logic. Here it even does modal logic, and you can ask it to produce proofs without LEM. ChatGPT does the following tasks:

    Here’s how you can translate the proof into natural deduction:
    Here’s an alternative proof that does not rely on LEM:
    Here’s the translation of the proof into Fitch-style natural deduction:
    Here’s the translation of the proof into Gentzen’s tree-style natural deduction:
    Here’s the translation of the proof into sequent-style natural deduction:
    https://chat.openai.com/share/79ae4f02-fd07-4786-800b-305bc9eed143
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@bursejan@gmail.com to comp.lang.prolog on Thu Jun 1 14:17:46 2023
    From Newsgroup: comp.lang.prolog


    Wao! I love coding so much, maybe I should jump
    into no-coding. How would I set up my computer
    and acquire the skills to do no-coding? Take the
    current project I am working on: a ChatGPT AI
    would first need to have a model/context of my
    current project. And then maybe I could sit back and ask it:
    Please do this for me, please do that for me.
    Which would, on second thought, be quite swell!
    Mild Shock wrote on Thursday, June 1, 2023 at 23:11:50 UTC+2:
    Today I had, for some minutes, a strong feeling
    of obsolescence, and was even imagining that these could
    be my last days of writing "program code".

    This happened after I saw ChatGPT doing logic.
    Although I had been reading about "Low Code / No Code"
    for a while already. So which profession gets hit first?

    Profiles of the Future: An Inquiry into the Limits of the Possible
    Arthur C. Clarke - 1962, Chapter 18: The Obsolescence of Man
    https://archive.org/details/profilesoffuture00clar/page/222/mode/2up

    Arthur C. Clarke talks A Space Odyssey and artificial intelligence, 1968
    https://www.youtube.com/watch?v=zNJbUYD-pfo
    Mild Shock wrote on Thursday, June 1, 2023 at 23:10:34 UTC+2:
    June 2023 update: It might be the case that ChatGPT has improved
    in logic. Here it even does modal logic, and you can ask it to produce proofs without LEM. ChatGPT does the following tasks:

    Here’s how you can translate the proof into natural deduction:
    Here’s an alternative proof that does not rely on LEM:
    Here’s the translation of the proof into Fitch-style natural deduction:
    Here’s the translation of the proof into Gentzen’s tree-style natural deduction:
    Here’s the translation of the proof into sequent-style natural deduction:
    https://chat.openai.com/share/79ae4f02-fd07-4786-800b-305bc9eed143
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@bursejan@gmail.com to comp.lang.prolog on Fri Jun 2 00:44:39 2023
    From Newsgroup: comp.lang.prolog


    The ChatGPT template above was not done by me. Credits
    go to Joseph Vidal-Rosset. That conversations, and thus
    interaction-specific context and small learned model extensions,
    can be shared via share links seems to be a new feature of ChatGPT.
    I only saw this feature appear in ChatGPT yesterday.
    See also:
    https://help.openai.com/en/articles/7925741-chatgpt-shared-links-faq
    Mild Shock wrote on Thursday, June 1, 2023 at 23:10:34 UTC+2:
    June 2023 update: It might be the case that ChatGPT has improved
    in logic. Here it even does modal logic, and you can ask it to produce proofs without LEM. ChatGPT does the following tasks:

    Here’s how you can translate the proof into natural deduction:
    Here’s an alternative proof that does not rely on LEM:
    Here’s the translation of the proof into Fitch-style natural deduction:
    Here’s the translation of the proof into Gentzen’s tree-style natural deduction:
    Here’s the translation of the proof into sequent-style natural deduction:
    https://chat.openai.com/share/79ae4f02-fd07-4786-800b-305bc9eed143
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@bursejan@gmail.com to comp.lang.prolog on Sat Jun 3 05:20:04 2023
    From Newsgroup: comp.lang.prolog

    So it's just a matter of time, months or weeks perhaps,
    until we have ChatGPT integrated in IDEs on
    our desktops, coding help at our fingertips:
    "In line with our iterative deployment philosophy,
    we are gradually rolling out plugins in ChatGPT
    so we can study their real-world use, impact, and
    safety and alignment challenges—all of which
    we’ll have to get right in order to achieve our mission."
    https://openai.com/blog/chatgpt-plugins
    They are quite on a mission. Will this supplant GitHub
    Copilot? Well, it doesn't matter: GitHub Copilot also
    uses OpenAI Codex. But in March 2023 OpenAI shut
    down access to Codex. I guess they didn't do
    it for some moratorium; they have a better replacement:
    "On March 23rd, we will discontinue support for the
    Codex API. All customers will have to transition to a
    different model. Codex was initially introduced as a
    free limited beta in 2021, and has maintained
    that status to date. Given the advancements of our
    newest GPT-3.5 models for coding tasks, we will no
    longer be supporting Codex and encourage all customers
    to transition to GPT-3.5-Turbo.
    About GPT-3.5-Turbo: GPT-3.5-Turbo is the most
    cost effective and performant model in the GPT-3.5
    family. It can both do coding tasks while also being
    complemented with flexible natural language capabilities."
    https://news.ycombinator.com/item?id=35242069
    Mild Shock wrote on Thursday, June 1, 2023 at 23:17:48 UTC+2:
    Wao! I love coding so much, maybe I should jump
    into no-coding. How would I set up my computer
    and acquire the skills to do no-coding? Take the

    current project I am working on: a ChatGPT AI
    would first need to have a model/context of my
    current project.

    And then maybe I could sit back and ask it:
    Please do this for me, please do that for me.
    Which would, on second thought, be quite swell!
    Mild Shock wrote on Thursday, June 1, 2023 at 23:11:50 UTC+2:
    Today I had, for some minutes, a strong feeling
    of obsolescence, and was even imagining that these could
    be my last days of writing "program code".

    This happened after I saw ChatGPT doing logic.
    Although I had been reading about "Low Code / No Code"
    for a while already. So which profession gets hit first?

    Profiles of the Future: An Inquiry into the Limits of the Possible
    Arthur C. Clarke - 1962, Chapter 18: The Obsolescence of Man
    https://archive.org/details/profilesoffuture00clar/page/222/mode/2up

    Arthur C. Clarke talks A Space Odyssey and artificial intelligence, 1968
    https://www.youtube.com/watch?v=zNJbUYD-pfo
    Mild Shock wrote on Thursday, June 1, 2023 at 23:10:34 UTC+2:
    June 2023 update: It might be the case that ChatGPT has improved
    in logic. Here it even does modal logic, and you can ask it to produce proofs without LEM. ChatGPT does the following tasks:

    Here’s how you can translate the proof into natural deduction:
    Here’s an alternative proof that does not rely on LEM:
    Here’s the translation of the proof into Fitch-style natural deduction:
    Here’s the translation of the proof into Gentzen’s tree-style natural deduction:
    Here’s the translation of the proof into sequent-style natural deduction:
    https://chat.openai.com/share/79ae4f02-fd07-4786-800b-305bc9eed143
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@bursejan@gmail.com to comp.lang.prolog on Sat Jun 3 09:10:44 2023
    From Newsgroup: comp.lang.prolog


    Interestingly there are supposedly ChatGPT
    plugins with real-time information integration.
    A browser ChatGPT plugin for example:
    Unleashing the Power of AI Conversations
    https://www.youtube.com/watch?v=L2RW4qx-45Q
    Mild Shock wrote on Saturday, June 3, 2023 at 14:20:06 UTC+2:
    So it's just a matter of time, months or weeks perhaps,
    until we have ChatGPT integrated in IDEs on
    our desktops, coding help at our fingertips:

    "In line with our iterative deployment philosophy,
    we are gradually rolling out plugins in ChatGPT
    so we can study their real-world use, impact, and
    safety and alignment challenges—all of which
    we’ll have to get right in order to achieve our mission."
    https://openai.com/blog/chatgpt-plugins

    They are quite on a mission. Will this supplant GitHub
    Copilot? Well, it doesn't matter: GitHub Copilot also
    uses OpenAI Codex. But in March 2023 OpenAI shut
    down access to Codex. I guess they didn't do

    it for some moratorium; they have a better replacement:

    "On March 23rd, we will discontinue support for the
    Codex API. All customers will have to transition to a
    different model. Codex was initially introduced as a
    free limited beta in 2021, and has maintained

    that status to date. Given the advancements of our
    newest GPT-3.5 models for coding tasks, we will no
    longer be supporting Codex and encourage all customers
    to transition to GPT-3.5-Turbo.

    About GPT-3.5-Turbo: GPT-3.5-Turbo is the most
    cost effective and performant model in the GPT-3.5
    family. It can both do coding tasks while also being
    complemented with flexible natural language capabilities."
    https://news.ycombinator.com/item?id=35242069
    Mild Shock wrote on Thursday, June 1, 2023 at 23:17:48 UTC+2:
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@bursejan@gmail.com to comp.lang.prolog on Sat Jul 29 04:57:13 2023
    From Newsgroup: comp.lang.prolog

    Can SWI-Prolog lean back concerning multi-threading? The
    Python story looks like a nice piece of Darwinism: there is some
    evolutionary pressure through some selection mechanism.
    Allen Goodman, author of CellProfiler and staff engineer at
    Prescient Design and Genentech, describes how the GIL makes
    biological methods research more difficult in Python.
    So basically Python starts lagging behind as the data-science language.
    Oh, the irony. But I would not blame it so much on the GIL.
    Deep down, many programming languages still have a GIL,
    for example in malloc(). I don't know whether SWI-Prolog's
    tcmalloc() integration even squeezes the lemon. From JDK 9 on,
    Java had a slower single-threaded GC because they started
    optimizing their virtual machine for multi-threading.
    Such optimizations do not only consist of removing the GIL; you
    also need to optimize malloc(). Some approaches use thread-affine
    memory areas, but this is also tricky, since not all objects have
    a clear thread affinity.
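
    For contrast, a minimal sketch of SWI-Prolog's thread-based
    parallelism, using concurrent_maplist/3 from library(thread);
    the naive fib/2 here is just a stand-in CPU-bound workload:

    ```prolog
    :- use_module(library(thread)).

    % Naive Fibonacci as a CPU-bound stand-in workload.
    fib(0, 0).
    fib(1, 1).
    fib(N, F) :-
        N > 1, N1 is N-1, N2 is N-2,
        fib(N1, F1), fib(N2, F2),
        F is F1 + F2.

    % Each goal runs on its own worker thread; with enough
    % cores this scales, since SWI-Prolog threads share no GIL.
    ?- numlist(22, 25, Ns),
       concurrent_maplist(fib, Ns, Fs).
    ```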
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@bursejan@gmail.com to comp.lang.prolog on Sat Jul 29 05:10:58 2023
    From Newsgroup: comp.lang.prolog

    Insofar as thread affinity is concerned, one also has to watch what
    happens with JavaScript Worker concept adoptions in Python. Multi-threading
    can be optimized even more if you have such isolation concepts.
    In this respect there is also PEP 683 – Immortal Objects, which on the
    surface might not seem related, but it also relates to the effort to better
    handle strings and make the GIL per-interpreter; the latter could underlie Workers.
    Mild Shock wrote on Saturday, July 29, 2023 at 13:57:15 UTC+2:
    Can SWI-Prolog lean back concerning multi-threading? The
    Python story looks like a nice piece of Darwinism: there is some
    evolutionary pressure through some selection mechanism.

    Allen Goodman, author of CellProfiler and staff engineer at
    Prescient Design and Genentech, describes how the GIL makes
    biological methods research more difficult in Python.

    So basically Python starts lagging behind as the data-science language.
    Oh, the irony. But I would not blame it so much on the GIL.
    Deep down, many programming languages still have a GIL,

    for example in malloc(). I don't know whether SWI-Prolog's
    tcmalloc() integration even squeezes the lemon. From JDK 9 on,
    Java had a slower single-threaded GC because they started
    optimizing their virtual machine for multi-threading.

    Such optimizations do not only consist of removing the GIL; you
    also need to optimize malloc(). Some approaches use thread-affine
    memory areas, but this is also tricky, since not all objects have
    a clear thread affinity.
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sat Jul 29 23:16:37 2023
    From Newsgroup: comp.lang.prolog

    There are a couple of non-GIL Pythons already
    around. For example Jython 2.7.3. But they are
    currently busy with migrating from Python 2 to Python 3.

    For example, I cannot use it; it didn’t understand
    the “async” keyword. Async/await was introduced in
    Python version 3.5. There are more such no-GIL Pythons,

    like IronPython (for the CLR) and GraalVM Python (for the JVM).
    GraalVM Python is further ahead, it supports Python 3.8,
    but is slower than PyPy. With IronPython one would

    also have less luck, it's only at Python 3.4 now.

    Mild Shock wrote:
    Insofar as thread affinity is concerned, one also has to watch what
    happens with JavaScript Worker concept adoptions in Python. Multi-threading
    can be optimized even more if you have such isolation concepts.

    In this respect there is also PEP 683 – Immortal Objects, which on the
    surface might not seem related, but it also relates to the effort to better
    handle strings and make the GIL per-interpreter; the latter could underlie Workers.

    Mild Shock wrote on Saturday, July 29, 2023 at 13:57:15 UTC+2:
    Can SWI-Prolog lean back concerning multi-threading? The
    Python story looks like a nice piece of Darwinism: there is some
    evolutionary pressure through some selection mechanism.

    Allen Goodman, author of CellProfiler and staff engineer at
    Prescient Design and Genentech, describes how the GIL makes
    biological methods research more difficult in Python.

    So basically Python starts lagging behind as the data-science language.
    Oh, the irony. But I would not blame it so much on the GIL.
    Deep down, many programming languages still have a GIL,

    for example in malloc(). I don't know whether SWI-Prolog's
    tcmalloc() integration even squeezes the lemon. From JDK 9 on,
    Java had a slower single-threaded GC because they started
    optimizing their virtual machine for multi-threading.

    Such optimizations do not only consist of removing the GIL; you
    also need to optimize malloc(). Some approaches use thread-affine
    memory areas, but this is also tricky, since not all objects have
    a clear thread affinity.

    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@bursejan@gmail.com to comp.lang.prolog on Sun Jul 30 05:06:06 2023
    From Newsgroup: comp.lang.prolog

    What's quite interesting is that PyPy, one of the fastest Pythons,
    isn't based on ARC (Automatic Reference Counting, as known from LLVM).
    One effect is that dead objects might be detected a little later
    than via reference counting, so it is recommended to explicitly
    close resources such as files, and not rely on reference counting.
    Having no reference counting also helps multi-threading.
    So in PyPy there is no PL_register_atom or PL_unregister_atom.
    Memory systems without reference counting are usually associated
    with tracing garbage collectors, based on two-color mark-sweep.
    But I guess they get more bang out of it if incremental garbage
    collection is deployed and if some escape analysis of the code
    is performed as well. Have to find a paper.
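
    The same advice carries over to Prolog: close resources
    deterministically instead of relying on GC timing, e.g. with
    setup_call_cleanup/3 ('data.txt' is just a placeholder):

    ```prolog
    % Deterministic cleanup: close/1 runs whether the goal
    % succeeds, fails, or throws; no reliance on GC timing.
    ?- setup_call_cleanup(
           open('data.txt', read, Stream),
           read_term(Stream, Term, []),
           close(Stream)).
    ```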
    Mild Shock wrote on Saturday, July 29, 2023 at 23:16:41 UTC+2:
    There are a couple of non-GIL Pythons already
    around, for example Jython 2.7.3. But they are
    currently busy with migrating from Python 2 to Python 3.

    For example, I cannot use it; it didn’t understand
    the “async” keyword. Async/await was introduced in
    Python version 3.5. There are more such no-GIL Pythons,

    like IronPython (for the CLR) and GraalVM Python (for the JVM).
    GraalVM Python is further ahead, it supports Python 3.8,
    but is slower than PyPy. With IronPython one would

    also have less luck, it's only at Python 3.4 now.

    Mild Shock wrote:
    Insofar as thread affinity is concerned, one also has to watch what
    happens with JavaScript Worker concept adoptions in Python.
    Multi-threading can be optimized even more if you have such isolation concepts.

    In this respect there is also PEP 683 – Immortal Objects, which on the
    surface might not seem related, but it also relates to the effort to better
    handle strings and make the GIL per-interpreter; the latter could underlie Workers.

    Mild Shock wrote on Saturday, July 29, 2023 at 13:57:15 UTC+2:
    Can SWI-Prolog lean back concerning multi-threading? The
    Python story looks like a nice piece of Darwinism: there is some
    evolutionary pressure through some selection mechanism.

    Allen Goodman, author of CellProfiler and staff engineer at
    Prescient Design and Genentech, describes how the GIL makes
    biological methods research more difficult in Python.

    So basically Python starts lagging behind as the data-science language.
    Oh, the irony. But I would not blame it so much on the GIL.
    Deep down, many programming languages still have a GIL,

    for example in malloc(). I don't know whether SWI-Prolog's
    tcmalloc() integration even squeezes the lemon. From JDK 9 on,
    Java had a slower single-threaded GC because they started
    optimizing their virtual machine for multi-threading.

    Such optimizations do not only consist of removing the GIL; you
    also need to optimize malloc(). Some approaches use thread-affine
    memory areas, but this is also tricky, since not all objects have
    a clear thread affinity.
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@bursejan@gmail.com to comp.lang.prolog on Tue Aug 1 01:09:18 2023
    From Newsgroup: comp.lang.prolog

    Why will this never fly?

    https://www.swi-prolog.org/howto/http/

    Because it's too static. For example the
    recent SVG example:

    reply_html_page(title('SVG circle'),
    Etc...

    Looks a little pointless to me. Since it's static,
    you could just store an `.svg` file on the server.
    How do you draw 12 circles into an SVG?

    Computed into a grid of 3 x 4 via Prolog?
    Make the number of columns and rows
    parameters of a Prolog predicate.
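
    A sketch of that parameterized version; the predicate name
    and the geometry constants are mine, not from the SWI howto
    (format/2 writes to the current output stream):

    ```prolog
    % svg_circles(+Cols, +Rows): emit Cols x Rows circles on a
    % regular grid, e.g. svg_circles(3, 4) for 12 circles.
    svg_circles(Cols, Rows) :-
        W is Cols * 40, H is Rows * 40,
        format('<svg xmlns="http://www.w3.org/2000/svg" width="~w" height="~w">~n',
               [W, H]),
        forall(( between(1, Cols, C), between(1, Rows, R) ),
               ( X is C * 40 - 20, Y is R * 40 - 20,
                 format('  <circle cx="~w" cy="~w" r="15"/>~n', [X, Y]) )),
        format('</svg>~n').
    ```

    Hooked into the HTTP server, the handler would call this with
    the row and column counts taken from request parameters, which
    is exactly what a static `.svg` file cannot do.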
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Markus Triska@triska@logic.at to comp.lang.prolog on Thu Aug 31 21:32:34 2023
    From Newsgroup: comp.lang.prolog

    Mostowski Collapse <bursejan@gmail.com> writes:

    For Scryer Prolog the struggle is minutely documented:

    Compiling and running scryer as a WebAssembly binary?
    https://github.com/mthom/scryer-prolog/issues/615

    Good news everyone: It's now possible to compile Scryer Prolog to WASM,
    the build instructions are here:

    https://github.com/mthom/scryer-prolog/pull/1966#issuecomment-1697974614

    Enjoy!

    All the best,
    Markus
    --
    comp.lang.prolog FAQ: http://www.logic.at/prolog/faq/
    The Power of Prolog: https://www.metalevel.at/prolog
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@bursejan@gmail.com to comp.lang.prolog on Fri Sep 1 12:02:54 2023
    From Newsgroup: comp.lang.prolog

    What can you demonstrate with your Scryer WASM?

    Ok, I just see that I have nevertheless published my new Chinese
    Remainder Theorem CLP(FD) solver; it even runs in a web page.
    As a Dogelog Player program it's a little slower than the 0.5 secs

    in the former Jekejeke Prolog, but it's still faster than the ordinary
    CLP(FD) in SWI-Prolog, which takes around 5 seconds. The web
    page with the new CLP(FD) takes around 3 seconds,

    you can try it here in JSFiddle; it should also use Dogelog Player 1.1.1:

    Example 71: Diophantine Modular
    X = 216, Y = 52, Z = 217;
    X = 52, Y = 216, Z = 217;
    fail.
    % Zeit 3574 ms, GC 7 ms, Lips 1696084, Uhr 01.09.2023 20:56 true.
    https://jsfiddle.net/Jean_Luc_Picard_2021/d2njehtp/3/

    Woa! It still runs unchanged, the code from 12 months ago.

    Markus Triska wrote on Thursday, August 31, 2023 at 21:25:51 UTC+2:
    Mostowski Collapse <burs...@gmail.com> writes:

    For Scryer Prolog the struggle is minutely documented:

    Compiling and running scryer as a WebAssembly binary?
    https://github.com/mthom/scryer-prolog/issues/615

    Good news everyone: It's now possible to compile Scryer Prolog to WASM,
    the build instructions are here:

    https://github.com/mthom/scryer-prolog/pull/1966#issuecomment-1697974614

    Enjoy!

    All the best,
    Markus

    --
    comp.lang.prolog FAQ: http://www.logic.at/prolog/faq/
    The Power of Prolog: https://www.metalevel.at/prolog
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Wed Mar 13 14:45:59 2024
    From Newsgroup: comp.lang.prolog

    Yeah, today I created a ticket in GitHub issues.
    Now I need to take a SPA nap:

    Cute Kitten Really Enjoys SPA
    https://www.youtube.com/watch?v=L7lVrWY9zQE

    P.S.: No wonder Scryer Prolog has 242 tickets:
    https://github.com/mthom/scryer-prolog/issues

    Mostowski Collapse wrote:
    Rounding still not fixed in Scryer Prolog. Look
    what a nice test case I am using:

    ?- atom_integer(X, 2, 166153499473114502559719956244594689).
    X = '1000000000000000000000000000000000000000 000000000000010000000000000000000000000000000 000000000000000000000000000000001'.

    And what's the result:

    $ target/release/scryer-prolog -v
    "v0.9.1-151-g17450520"
    $ target/release/scryer-prolog
    ?- X is float(166153499473114502559719956244594689).
    X = 1.661534994731145e35.
    ?- Y = 1.6615349947311452e+35.
    Y = 1.6615349947311452e35.
    ?- X is float(166153499473114502559719956244594689)-1.6615349947311452e+35.
    X = -3.6893488147419103e19.
    ?-

    It's not correctly rounded!


    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Wed Mar 13 15:05:34 2024
    From Newsgroup: comp.lang.prolog


    Or maybe it's a seasonal effect related to springtime lethargy:
    https://en.wikipedia.org/wiki/Springtime_lethargy

    Mild Shock wrote:
    Yeah today I created a ticket in GitHub issues.
    Now I need to take a SPA nap:

    Cute Kitten Really Enjoys SPA
    https://www.youtube.com/watch?v=L7lVrWY9zQE

    P.S.: No wonder Scryer Prolog has 242 tickets:
    https://github.com/mthom/scryer-prolog/issues

    Mostowski Collapse wrote:
    Rounding still not fixed in Scryer Prolog. Look
    what a nice test case I am using:

    ?- atom_integer(X, 2, 166153499473114502559719956244594689).
    X = '1000000000000000000000000000000000000000
    000000000000010000000000000000000000000000000
    000000000000000000000000000000001'.

    And what's the result:

    $ target/release/scryer-prolog -v
    "v0.9.1-151-g17450520"
    $ target/release/scryer-prolog
    ?- X is float(166153499473114502559719956244594689).
        X = 1.661534994731145e35.
    ?- Y = 1.6615349947311452e+35.
        Y = 1.6615349947311452e35.
    ?- X is
    float(166153499473114502559719956244594689)-1.6615349947311452e+35.
        X = -3.6893488147419103e19.
    ?-

    It's not correctly rounded!



    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Tue Apr 9 00:15:04 2024
    From Newsgroup: comp.lang.prolog

    Remembering Joe Armstrong - April 20, 2024:
    https://www.heise.de/news/Hello-Mike-hello-Robert-goodbye-Joe-Zum-Tode-von-Joe-Armstrong-4404170.html

    Some quote:

    "Make it work, then make it beautiful, then
    if you really, really have to, make it fast.

    90% of the time, if you make it beautiful,
    it will already be fast.

    So really, just make it beautiful!
    -Joe Armstrong, Erlang"


    Now I have a couple of questions:
    - Was Jekejeke beautiful? [No! LoL]
    - Is Trealla beautiful?
    - Is Scryer beautiful?
    - Is Dogelog beautiful? [Yes! LoL]
    - Is SWI-Prolog beautiful?
    - Is GNU Prolog beautiful?
    - Is ECLiPSe Prolog beautiful?
    - Is XSB Prolog beautiful?
    - Is SICStus Prolog beautiful?
    - Etc..



    --- Synchronet 3.20a-Linux NewsLink 1.114