--- Synchronet 3.20a-Linux NewsLink 1.114
Wasn't sure whether this works:
} else if (count > GC_MAX_TRAIL) {
    gc();
    if (count > GC_MAX_TRAIL)
        throw make_error(new Compound("system_error", ["stack_overflow"]));
}
Seems fine:
len([], N, N).
len([_|L], N, M) :- H is N+1, (true; fail), len(L, H, M).
?- X = [_|X], len(X, 0, N).
Error: system_error(stack_overflow)
user:12
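A rough Python analogue of the query above (the list encoding and function name are mine): len/3 walking the cyclic list X = [_|X] never bottoms out, and a well-behaved runtime stops it with a resource error instead of crashing the process.

```python
# Analogue of len/3 on a cyclic list: unbounded recursion that the
# runtime turns into a resource error rather than a hard crash.
def length(node, n=0):
    if node is None:                # [] case: len([], N, N).
        return n
    head, tail = node
    return length(tail, n + 1)      # len([_|L], N, M) :- ..., len(L, H, M).

cyclic = ["a", None]
cyclic[1] = cyclic                  # tie the knot: X = [_|X]

try:
    length(cyclic)
except RecursionError:
    print("system_error(stack_overflow)")
```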
I guess Scryer Prolog's argument indexing could be
improved. Take the computation of Münchhausen numbers:
/* Scryer Prolog 0.9.1-194 */
?- time((canonball, munchhausen(_), fail; true)).
% CPU time: 49.769s
true.
The bottleneck is really the cache computation, which
uses assertz/1:
?- time(canonball).
% CPU time: 50.158s
true.
Once the cache is in place, it's fine:
?- time((munchhausen(R), write(R), nl, fail; true)).
0
1
3435
438579088
% CPU time: 0.280s
true.
BTW: This is the Prolog text:
canonball :-
    retractall(cache(_,_)),
    between(0, 99999, N), map(N, Y), C is Y-N,
    assertz(cache(C, N)), fail; true.

munchhausen(R) :-
    between(0, 99999, M), map(M, X), B is 100000*M-X,
    cache(B, N), R is 100000*M+N.

map(0, X) :- !, X = 0.
map(N, X) :-
    M is N//10,
    map(M, Y),
    D is N mod 10,
    (D = 0 -> X=Y; X is Y+D^D).
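The same meet-in-the-middle trick can be sketched in Python (function names are my own): cache map(N)-N for the low five digits, then probe with the high five digits. With a hash table the probe is O(1), which is exactly what first-argument indexing on cache/2 should provide.

```python
# Meet-in-the-middle search for Münchhausen numbers below 10^10,
# mirroring the Prolog cache/2 trick: write R = 100000*M + N and use
# map(M) + map(N) = 100000*M + N  <=>  map(N) - N = 100000*M - map(M).
def digit_pow_sum(n):
    # sum of d^d over the digits of n, a zero digit contributing nothing
    # (the D = 0 -> X = Y case of map/2, i.e. the 0^0 = 0 convention)
    s = 0
    while True:
        n, d = divmod(n, 10)
        if d:
            s += d ** d
        if n == 0:
            return s

def munchhausen():
    cache = {}          # key -> all N in 0..99999 with digit_pow_sum(N) - N == key
    for n in range(100000):
        cache.setdefault(digit_pow_sum(n) - n, []).append(n)
    found = []
    for m in range(100000):
        for n in cache.get(100000 * m - digit_pow_sum(m), []):
            found.append(100000 * m + n)
    return sorted(found)

print(munchhausen())    # [0, 1, 3435, 438579088]
```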
Other Prolog systems fare much better:
/* SWI-Prolog 9.1.4 */
?- time((canonball, munchhausen(_), fail; true)).
% 4,633,344 inferences, 0.594 CPU in 0.589 seconds (101% CPU, 7803527 Lips)
true.
/* Trealla Prolog 2.13.10 */
?- time((canonball, munchhausen(_), fail; true)).
% Time elapsed 0.815s, 5888901 Inferences, 7.222 MLips
true.
Even my own new Prolog system, which does the assertz/1
clause compilation in Prolog itself, is faster than
Scryer Prolog, though not as fast as the other ones:
/* Dogelog Player 1.0.5 */
?- time((canonball, munchhausen(_), fail; true)).
% Time 6024 ms, gc 15 ms, 1966296 lips
true.
Mostowski Collapse wrote:
[...]
Hurray, we can leave the lexical order discussion behind us. There
is a more fundamental flaw in the compare/3 implementation.
/* Scryer Prolog 0.9.1-207 and SWI-Prolog 9.1.7 */
?- X = X-0-9-7-6-5-4-3-2-1, Y = Y-7-5-8-2-4-1, X @< Y.
true.
?- H = H-9-7-6-5-4-3-2-1-0, Z = H-9-7-6-5-4-3-2-1,
Y = Y-7-5-8-2-4-1, Z @< Y.
false.
But X and Z are the same ground terms:
?- X = X-0-9-7-6-5-4-3-2-1, H = H-9-7-6-5-4-3-2-1-0,
Z = H-9-7-6-5-4-3-2-1, X == Z.
true.
So there is a violation of substitution of equals for equals,
in that X == Z and X @< Y did not imply Z @< Y.
**Avoid cycles in Cypher queries**
https://graphaware.com/neo4j/2019/04/26/avoid-cycles-in-cypher-queries.html
I also don't find anything like a compare in neo4j. It's even the case
that the database stumbles over cycles in general; an article from 2019
reports a performance penalty.
So if a Prolog system could do better and also
offer a compare, that would be really great news!
**Avoid cycles in Cypher queries**
https://graphaware.com/neo4j/2019/04/26/avoid-cycles-in-cypher-queries.html
I think Kuniaki Mukai mentioned that already: we can order regular
expressions? So we should also be able to order graphs, potentially
with cycles, since a graph can be represented by its Kleene form as
a regular expression. The bug here in SWI-Prolog and Scryer Prolog is
related to two different regular expressions for the same thing. The
period (_) of a rational number is the star operator _* in a Kleene algebra.
To construct the test case where SWI-Prolog and Scryer Prolog stumbled,
I used two different regular expressions for the same rational number.
So I guess this little bug can be cheaply fixed? Or can it not?
10/81 = 0.(123456790) = 0.12345679(012345679)
Kleene form:
https://en.wikipedia.org/wiki/Kleene%27s_algorithm
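A small Python sketch of that idea (the (prefix, cycle) representation and names are my own): an eventually periodic digit sequence can be compared by a finite unrolling, because two such sequences agree everywhere iff they agree on the first max(preperiods) + lcm(periods) positions. The two spellings of 10/81 then compare equal, which is exactly the substitution-of-equals property the buggy compare/3 violates.

```python
from math import lcm

def unroll(rep, n):
    # first n digits of the eventually periodic sequence prefix(cycle)*
    prefix, cycle = rep
    out = []
    for i in range(n):
        if i < len(prefix):
            out.append(prefix[i])
        else:
            out.append(cycle[(i - len(prefix)) % len(cycle)])
    return out

def compare(a, b):
    # Beyond position max(preperiods), both sequences are periodic with
    # period lcm(periods); agreeing on that window means agreeing forever,
    # so a finite unrolling decides <, =, >.
    n = max(len(a[0]), len(b[0])) + lcm(len(a[1]), len(b[1]))
    da, db = unroll(a, n), unroll(b, n)
    return (da > db) - (da < db)

# 10/81 = 0.(123456790) = 0.12345679(012345679): one number, two spellings
print(compare(("", "123456790"), ("12345679", "012345679")))
```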
Mostowski Collapse wrote on Tuesday, March 28, 2023 at 19:04:15 UTC+2:
[...]
What would make sense is an ISO core standard working
group that would draft these stream creation properties:
- bom(Bool)
Specify detecting or writing a BOM.
- encoding(Atom)
Specify a file encoding.
After all, it is already 2022 and we have 50 years of Prolog. But
can we be sure that Prolog texts are exchangeable if
they use Unicode code points?
What if a UTF-16 file, handy for CJK, comes along?
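For comparison, a Python sketch of what bom(Bool) plus encoding(Atom) would buy (open_text and its defaults are my invention, not any standard API): sniff the BOM when reading, and optionally emit one when writing.

```python
import codecs

# Hypothetical open_text mimicking the proposed bom/1 + encoding/1
# stream options: on read, a detected BOM overrides the declared
# encoding; on write, bom=True emits a BOM for UTF-8.
def open_text(path, mode="r", encoding="utf-8", bom=False):
    if "r" in mode:
        with open(path, "rb") as f:
            head = f.read(3)
        if head.startswith((codecs.BOM_UTF16_LE, codecs.BOM_UTF16_BE)):
            encoding = "utf-16"      # the BOM selects the byte order
        elif head == codecs.BOM_UTF8:
            encoding = "utf-8-sig"   # consume the UTF-8 BOM on read
        return open(path, mode, encoding=encoding)
    if bom and encoding.replace("-", "").lower() == "utf8":
        encoding = "utf-8-sig"       # Python's BOM-writing UTF-8 codec
    return open(path, mode, encoding=encoding)
```

So a UTF-16 Prolog text with a BOM would open transparently even though the caller asked for UTF-8.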
Mostowski Collapse wrote on Friday, June 10, 2022 at 14:54:07 UTC+2:
It's 2022 and Prolog is among the top 20:
TIOBE Index for June 2022
https://www.tiobe.com/tiobe-index/
Woa!
Panic on the Titanic? This paper here
references s(CASP) and LLMs:

Prolog: Past, Present, and Future
https://personal.utdallas.edu/~gupta/prolog-next-50-years.pdf

But when I look up the reference, it's
just some to-appear thingy:
A. Rajasekharan, Y. Zeng, P. Padalkar, and G. Gupta.
Reliable Natural Language Understanding with Large
Language Models and Answer Set Programming. Preprint
arXiv:2302.03780; to appear in Proc. ICLP’23 (Tech. Comm.) 2023. https://arxiv.org/abs/2302.03780
LoL
Well, it's never too late to jump on a
bandwagon, even if some bones might crash.
Bye
Mostowski Collapse wrote:
[...]
Knock, Knock, any idea what's going on via OpenAI?
Will computers communicate with each other via
Prolog Goals, Telescript Agents, SPARQL Queries?
What about "natural language" with tons of context
as the interfacing currency between Computers and
Humans, and between Computers and Computers?
Interesting paper here, with modest LM size goals:
"[...]
Although the second path is simpler and apparently
capable of earlier realization, it has been relatively neglected.
Fredkin's trie memory provides a promising paradigm.
We may in due course see a serious effort to develop
computer programs that can be connected together
like the words and phrases of speech to do whatever
computation or control is required at the moment. The
consideration that holds back such an effort, apparently,
is that the effort would produce nothing that would be of
great value in the context of existing computers. It would
be unrewarding to develop the language before there are
any computing machines capable of responding meaningfully to it.
[...]
For real-time interaction on a truly symbiotic level, however,
a vocabulary of about 2000 words, e.g. 1000 words of
something like basic English and 1000 technical terms,
would probably be required. That constitutes a challenging
problem. In the consensus of acousticians and linguists,
construction of a recognizer of 2000 words cannot be
accomplished now. However, there are several organizations
that would happily undertake to develop an automatic recognizer
for such a vocabulary on a five-year basis.
[...]"
LICKLIDER, J. C. R. 1960. Man-computer symbiosis.
IRE Transactions on Human Factors in Electronics, HFE-1: 4-11 (March 1).
http://worrydream.com/refs/Licklider%20-%20Man-Computer%20Symbiosis.pdf

Mostowski Collapse wrote on Saturday, May 20, 2023 at 14:09:26 UTC+2:
[...]
Corr.: Typo
Interesting paper here, with modest LM size goals:
Mostowski Collapse wrote on Saturday, May 20, 2023 at 14:46:32 UTC+2:
[...]
If they would increase the prize money from:

The winner receives a certificate and cash support of up to 2,000 Euros
https://logicprogramming.org/the-alp-alain-colmerauer-prize/

to, for example, 500’000 €, this could help the recipient's enterprise or pension.
Maybe they can fork the prize into a “lifetime achievement award”, besides
some “recent practical accomplishments”.
I am 100% serious. Just knock on the door of a few
crypto billionaires. They take it from the confiture jar.
LoL
Mostowski Collapse wrote on Thursday, May 25, 2023 at 17:48:56 UTC+2:
[...]
June, 2023 Update: It might be the case that ChatGPT has improved
in logic. Here it does even modal logic, and you can ask it to
produce proofs without LEM. ChatGPT does the following tasks:
Here’s how you can translate the proof into natural deduction:
Here’s an alternative proof that does not rely on LEM:
Here’s the translation of the proof into Fitch-style natural deduction:
Here’s the translation of the proof into Gentzen’s tree-style natural deduction:
Here’s the translation of the proof into sequent-style natural deduction:

https://chat.openai.com/share/79ae4f02-fd07-4786-800b-305bc9eed143
Today I had for some minutes a strong feeling
of obsolescence, was even imagining that these could
be my last days where I write some "program code".
This happened after I saw ChatGPT doing logic.
Although I was reading about "Low Code / No Code"
already for a while. So which profession gets hit first?
Profiles of the Future: An Inquiry into the Limits of the Possible
Arthur C. Clarke - 1962, Chapter 18: The Obsolescence of Man
https://archive.org/details/profilesoffuture00clar/page/222/mode/2up
Arthur C. Clarke talks
A Space Odyssey and artificial intelligence, 1968 https://www.youtube.com/watch?v=zNJbUYD-pfo
Mild Shock wrote on Thursday, June 1, 2023 at 23:11:50 UTC+2:
[...]
Wao! I love coding so much, maybe I should jump
into no-coding. How would I set up my computer
and have myself better skills, so that I would
do no-coding. Like the current project I am
working on. A ChatGPT AI would first need to
have a model/context of my current project.
And then maybe I could sit back, ask it:
Please do this for me, please do that for me.
Which would be on second thought quite swell!
Mild Shock wrote on Thursday, June 1, 2023 at 23:11:50 UTC+2:
[...]
So it's just a matter of time, like months or weeks,
and we have ChatGPT integrated in IDEs at
our desktop, coding help at our fingertips:
"In line with our iterative deployment philosophy,
we are gradually rolling out plugins in ChatGPT
so we can study their real-world use, impact, and
safety and alignment challenges—all of which
we’ll have to get right in order to achieve our mission." https://openai.com/blog/chatgpt-plugins
They are quite on a mission. Will this supplant GitHub
Copilot? Well, doesn't matter, GitHub Copilot also uses
OpenAI Codex. But in March 2023 OpenAI shut down access
to Codex; I guess they didn't do it for some moratorium,
they have a better replacement:
"On March 23rd, we will discontinue support for the
Codex API. All customers will have to transition to a
different model. Codex was initially introduced as a
free limited beta in 2021, and has maintained
that status to date. Given the advancements of our
newest GPT-3.5 models for coding tasks, we will no
longer be supporting Codex and encourage all customers
to transition to GPT-3.5-Turbo.
About GPT-3.5-Turbo GPT-3.5-Turbo is the most
cost effective and performant model in the GPT-3.5
family. It can both do coding tasks while also being
complemented with flexible natural language capabilities." https://news.ycombinator.com/item?id=35242069
Mild Shock wrote on Thursday, June 1, 2023 at 23:17:48 UTC+2:
[...]
Can SWI-Prolog lean back concerning multi-threading? The
Python story looks like a nice piece of darwinism. So there is some
evolutionary pressure through some selection mechanism:

Allen Goodman, author of CellProfiler and staff engineer at
Prescient Design and Genentech, describes how the GIL makes
biological methods research more difficult in Python.

So basically Python starts lagging behind as the data science language.
Oh the irony. But I would not blame it so much on the GIL. Deep down,
many programming languages still have a GIL, for example in malloc().
I don't know whether SWI-Prolog's tcmalloc() integration even squeezes
the lemon. From JDK 9 on, Java had a slower single-threaded GC because
they started optimizing their virtual machine for multi-threading.

Such optimizations do not only consist of removing the GIL, you also
need to optimize malloc(). Some approaches use thread-affine memory
areas, but this is also tricky, since not all objects have a clear
thread affinity.

In as far, concerning thread affinity, one has to also watch what happens
concerning JavaScript Worker concept adoptions in Python. Multi-threading
can be optimized even more if you have such isolation concepts.

In this respect there is also PEP 683 - Immortal Objects, which on the
surface might not be related, but it also relates to the effort to better
handle strings and make the GIL per-interpreter; the latter could underlie
Workers.
Mild Shock wrote on Saturday, July 29, 2023 at 13:57:15 UTC+2:
[...]
There are a couple of non-GIL Pythons already
around. For example Jython 2.7.3. But they are
currently busy with migrating from Python 2 to Python 3.
For example I cannot use it, it didn’t understand
the “async” keyword. Async/await was introduced in
Python version 3.5. There are more such no-GIL Pythons,
like IronPython (for CLR) and GraalVM Python (for JVM).
GraalVM Python is farther ahead, it supports Python 3.8,
but is slower than PyPy. And with IronPython one would
also have less luck, it's only at Python 3.4 now.
Mild Shock wrote:
[...]
For Scryer Prolog the struggle is minutely documented:
Compiling and running scryer as a WebAssembly binary? https://github.com/mthom/scryer-prolog/issues/615
Mostowski Collapse <burs...@gmail.com> writes:
For Scryer Prolog the struggle is minutely documented:
Compiling and running scryer as a WebAssembly binary? https://github.com/mthom/scryer-prolog/issues/615
Good news everyone: It's now possible to compile Scryer Prolog to WASM,
the build instructions are here:
https://github.com/mthom/scryer-prolog/pull/1966#issuecomment-1697974614
Enjoy!
All the best,
Markus
--
comp.lang.prolog FAQ: http://www.logic.at/prolog/faq/
The Power of Prolog: https://www.metalevel.at/prolog
Rounding still not fixed in Scryer Prolog. Look
what a nice test case I am using:
?- atom_integer(X, 2, 166153499473114502559719956244594689).
X = '1000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000001'.
And what's the result:
$ target/release/scryer-prolog -v
"v0.9.1-151-g17450520"
$ target/release/scryer-prolog
?- X is float(166153499473114502559719956244594689).
X = 1.661534994731145e35.
?- Y = 1.6615349947311452e+35.
Y = 1.6615349947311452e35.
?- X is float(166153499473114502559719956244594689)-1.6615349947311452e+35.
X = -3.6893488147419103e19.
?-
It's not correctly rounded!
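The test constant is no accident: it is 2^117 + 2^64 + 1 (that is what the binary atom above spells out), engineered so that truncation and correct round-to-nearest give different doubles. A Python check of the expected value (Python's int-to-float conversion is correctly rounded):

```python
# 166153499473114502559719956244594689 = 2**117 + 2**64 + 1.
# Doubles near 2**117 are spaced 2**65 apart, and 2**64 + 1 lies just
# past the midpoint, so round-to-nearest must round UP to 2**117 + 2**65,
# not down to 2**117 as Scryer Prolog does.
n = 2**117 + 2**64 + 1
assert n == 166153499473114502559719956244594689

x = float(n)                        # correctly rounded conversion
assert x == float(2**117 + 2**65)   # = 1.6615349947311452e+35
print(x - 1.6615349947311452e+35)   # 0.0: no 2**65 gap left over
```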
Yeah today I created a ticket in GitHub issues.
Now I need to take a SPA nap:
Cute Kitten Really Enjoys SPA
https://www.youtube.com/watch?v=L7lVrWY9zQE
P.S.: No wonder Scryer Prolog has 242 tickets: https://github.com/mthom/scryer-prolog/issues
Mostowski Collapse wrote:
[...]