Prolog Class Signpost - American Style 2018 https://www.youtube.com/watch?v=CxQKltWI0NA
--- Synchronet 3.20a-Linux NewsLink 1.114
To hell with GPUs. Here come the FPGA qubits:
Iran’s Military Quantum Claim: It’s Only 99.4% Ridiculous https://hackaday.com/2023/06/15/irans-quantum-computing-on-fpga-claim-its-kinda-a-thing/
The superposition property enables a quantum computer
to be in multiple states at once. https://www.techtarget.com/whatis/definition/qubit
Maybe their new board is even less suited for hitting
a ship with a torpedo than some machine learning?
How it started:
Remember the failed AI Stack Exchange attempt in 2013, people making fun of it:
Faked Artificial Intelligence like in Game Development https://area51.meta.stackexchange.com/q/11658/100686
How it's going:
Take note: in 2023 it sounds like total panic now:
Announcing OverflowAI. Projects: a bunch of crap, a Slack
chatbot, and "We've launched the GenAI Stack Exchange site" https://stackoverflow.co/labs/
This is also quite memorable:
"Don't give it a 'Hollywood' title like 'artificial intelligence'.
Call it 'Machine Learning and Intelligent Computation'." https://area51.meta.stackexchange.com/a/13109/100686
LoL
Mild Shock wrote on Tuesday, August 1, 2023 at 15:48:32 UTC+2:
Experiment by Terence Tao using ChatGPT - June, 2023 https://mathstodon.xyz/@tao/110601051375142142
by way of Rainer Rosenthal on de.sci.mathematik
A simple theory why Stack Overflow is dead: most
of the answers on Stack Overflow are just RTFM answers.
An LLM that has done its homework, indexing all
the fucking manuals, performs just as well, so there is no
need for the "experts" on Stack Overflow, who are anyway
not real "experts"; mostly they are people who can read
and know the relevant sources, they don't recall
solutions from some genuine memory. So I guess
this intermediary, this middleman Stack Overflow,
is not needed in the future. ChatGPT and similar
bots will serve as ready help for those too lazy.
And we are all lazy, aren't we?
Abbreviation for ‘Read The Fucking Manual’. http://www.catb.org/jargon/html/R/RTFM.html
Mild Shock wrote on Wednesday, August 2, 2023 at 11:02:05 UTC+2:
Now it's clear: the Corona vaccine has had a side effect,
everybody got Alzheimer's over the last months. The
SWI-Prolog discourse is a typical example; it has become
a retirement home for some self-talking veterans.
Thomas Alva Edison wrote on Tuesday, July 10, 2018 at 15:28:05 UTC+2:
Don't do the LIPS.
/* SWI-Prolog 9.1.16 */
?- time(tarai(12, 11, 0, X)).
% 54,182,800 inferences, 2.625 CPU in 2.616 seconds (100% CPU, 20641067 Lips)
X = 12.
/* Guarded Horn Clauses */
$ ./tarai 12 11 0
% 196412655 inferences, 3.34256 CPU seconds (58761215.967661 Lips)
12
tadashi9e, 2023
https://qiita.com/tadashi9e/items/45cef62cda6d38dda0c7
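For reference, the tarai function being benchmarked above is Takeuchi's tarai. A minimal Python sketch of the recursion (memoization is added here only so the sketch finishes instantly; the benchmarks above deliberately run the exponential, unmemoized recursion, hence the 54M+ inferences):

```python
from functools import lru_cache

# Takeuchi's tarai function, the recursion benchmarked above.
# lru_cache collapses the exponential call tree; the Prolog and
# GHC benchmarks above run it without memoization on purpose.
@lru_cache(maxsize=None)
def tarai(x, y, z):
    if x <= y:
        return y
    return tarai(tarai(x - 1, y, z),
                 tarai(y - 1, z, x),
                 tarai(z - 1, x, y))

print(tarai(12, 11, 0))  # 12, matching the X = 12 result above
```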
Sanity No More - Only Chaos Here
Mild Shock wrote on Saturday, September 23, 2023 at 22:51:13 UTC+2:
So what is the issue that should be solved proactively?
A single person cannot indefinitely maintain a Prolog system.
Why? Not because the person will be dead at some time in
the future; the person might also become unable to continue
maintaining a Prolog system. The same holds for a community:
it cannot age indefinitely. See also:
Memory Loss, Alzheimer's Disease and Dementia, 3rd Edition
by Andrew E. Budson, MD and Paul R. Solomon, PhD https://evolve.elsevier.com/cs/product/9780323795449
So what are the options? Exit strategies? Generational change
strategies? Where are the youngsters that will take over?
This is a call for action, for people < 30 years old:
- Please show us your Prolog interpreter
Mild Shock wrote on Saturday, September 23, 2023 at 22:51:13 UTC+2:
Not only does speed not double every year anymore,
the density of transistors doesn't double
every year anymore either. See also:
‘Moore’s Law’s dead,’ Nvidia CEO https://www.marketwatch.com/story/moores-laws-dead-nvidia-ceo-jensen-says-in-justifying-gaming-card-price-hike-11663798618
So there is some hope in FPGAs. The article writes:
"In the latter paper, which includes a great overview of
the state of the art, Pilch and colleagues summarize
this as shifting the processing from time to space —
from using slow sequential CPU processing to hardware
complexity, using the FPGA’s configurable fabric
and inherent parallelism."
In reference to (no paywall):
An FPGA-based real quantum computer emulator
15 December 2018 - Pilch et al. https://link.springer.com/article/10.1007/s10825-018-1287-5
Mild Shock wrote on Tuesday, June 20, 2023 at 17:20:27 UTC+2:
Ok, OpenAI is dead. But we need to get out of the claws
of the computing cloud. We need the spirit of Niklaus
Wirth, who combined computer science and
electronics. We need to solve the problem of
parallel silicon. We should have a look again at these
quantum computers. Can we have them on the Edge?
Mild Shock wrote on Friday, June 23, 2023 at 11:14:15 UTC+2:
How my Dogelog Player garbage collector works:
Ashes to ashes, funk to funky
We know Major Tom's a junkie
Strung out in heaven's high
Hitting an all-time low
https://www.youtube.com/watch?v=CMThz7eQ6K0
Unfortunately no generational garbage collector yet. :-(
To advance the state of the art and track performance improvements,
some automation would be helpful. I can test WASM manually via
this here https://dev.swi-prolog.org/wasm/shell . Since my recent
performance tuning of Dogelog Player for JavaScript, I beat 32-bit
WASM SWI-Prolog. This does not yet hold for the SAT solver test cases,
which need GC improvements, but it holds for the core test cases.
I only tested my Ryzen. Don't know the Yoga results yet:
dog swi
nrev 1247 1223
crypt 894 2351
deriv 960 1415
poly 959 1475
sortq 1313 1825
tictac 1587 2400
queens 1203 2316
query 1919 4565
mtak 1376 1584
perfect 1020 1369
calc 1224 1583
Total 13702 22106
LoL
Mild Shock wrote on Saturday, November 25, 2023 at 16:23:07 UTC+1:
Scryer Prolog has made amazing leaps recently concerning
performance; it's now only like 2-3 times slower than
SWI-Prolog! What prevents it from getting faster than SWI-Prolog?
See for yourself, here is some testing with a very recent version.
Interestingly, tictac shows it has some problems with
negation-as-failure and/or call/1. Maybe they should allocate more
time to these areas instead of formatting inference counts:
$ target/release/scryer-prolog -v
v0.9.3-50-gb8ef3678
nrev % CPU time: 0.304s, 3_024_548 inferences
crypt % CPU time: 0.422s, 4_392_537 inferences
deriv % CPU time: 0.462s, 3_150_149 inferences
poly % CPU time: 0.394s, 3_588_369 inferences
sortq % CPU time: 0.481s, 3_654_653 inferences
tictac % CPU time: 1.591s, 3_285_766 inferences
queens % CPU time: 0.517s, 5_713_596 inferences
query % CPU time: 0.909s, 8_678_936 inferences
mtak % CPU time: 0.425s, 6_901_822 inferences
perfect % CPU time: 0.763s, 5_321_436 inferences
calc % CPU time: 0.626s, 6_700_379 inferences
true.
Compared to SWI-Prolog on the same machine:
$ swipl --version
SWI-Prolog version 9.1.18 for x86_64-linux
nrev % 2,994,497 inferences, 0.067 CPU in 0.067 seconds
crypt % 4,166,441 inferences, 0.288 CPU in 0.287 seconds
deriv % 2,100,068 inferences, 0.139 CPU in 0.139 seconds
poly % 2,087,479 inferences, 0.155 CPU in 0.155 seconds
sortq % 3,624,602 inferences, 0.173 CPU in 0.173 seconds
tictac % 1,012,615 inferences, 0.184 CPU in 0.184 seconds
queens % 4,596,063 inferences, 0.266 CPU in 0.266 seconds
query % 8,639,878 inferences, 0.622 CPU in 0.622 seconds
mtak % 3,943,818 inferences, 0.162 CPU in 0.162 seconds
perfect % 3,241,199 inferences, 0.197 CPU in 0.197 seconds
calc % 3,060,151 inferences, 0.180 CPU in 0.180 seconds
Mild Shock wrote on Saturday, November 25, 2023 at 22:10:20 UTC+1:
Testing scryer-prolog doesn't make any sense. It's not a
Prolog system. It has memory leaks somewhere.
Just try my SAT solver test suite:
?- between(1,100,_), suite_quiet, fail; true.
VSZ and RSS memory keep going up and up, with no end,
clogging my machine. I don't think this should happen;
why does a failure-driven loop eat all memory?
That's just a fraud. How do you set some limits?
Mild Shock wrote on Monday, November 27, 2023 at 18:31:25 UTC+1:
With limits I get this result:
$ target/release/scryer-prolog -v
v0.9.3-57-ge8d8b09e
$ ulimit -m 2000000
$ ulimit -v 2000000
$ target/release/scryer-prolog
?- ['program2.p'].
true.
?- between(1,100,_), suite_quiet, fail; true.
Segmentation fault
Not ok! Should continue running till the end.
Mild Shock wrote on Wednesday, November 29, 2023 at 06:47:15 UTC+1:
Mild Shock wrote on Monday, November 27, 2023 at 18:31:25 UTC+1:
Don't buy your pearls in Hong Kong. They are all fake.
So what do you prefer, this Haskell monster: https://www.cs.nott.ac.uk/~pszgmh/countdown.pdf
Terence Tao, "Machine Assisted Proof" https://www.youtube.com/watch?v=AayZuuDDKP0
Mostowski Collapse wrote:
I haven't done all my homework yet.
For example, just fiddling around with CLP(FD), I get:
?- maplist(in, Vs, [1\/3..4, 1..2\/4, 1..2\/4,
1..3, 1..3, 1..6]), all_distinct(Vs).
false.
Does Scryer Prolog CLP(Z) have some explanation for that?
What exactly is the conflict that makes it fail?
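For what it's worth, the failure can be explained via Hall's theorem: all_distinct/1 must fail exactly when some set of variables has a combined domain smaller than the set itself. A minimal Python sketch (domain sets transcribed from the query above) that searches for such a Hall violator:

```python
from itertools import combinations

# Domains from the query above: 1\/3..4, 1..2\/4, 1..2\/4, 1..3, 1..3, 1..6
domains = [{1, 3, 4}, {1, 2, 4}, {1, 2, 4},
           {1, 2, 3}, {1, 2, 3}, {1, 2, 3, 4, 5, 6}]

def hall_violator(domains):
    """Return (indices, union) for a set of variables whose domains'
    union is smaller than the set itself, or None if none exists.
    By Hall's theorem such a set exists iff no pairwise-distinct
    assignment is possible."""
    n = len(domains)
    for k in range(1, n + 1):
        for idx in combinations(range(n), k):
            union = set().union(*(domains[i] for i in idx))
            if len(union) < k:
                return idx, union
    return None

print(hall_violator(domains))
```

The first five variables share the 4-value union {1, 2, 3, 4}, so five pairwise-distinct values cannot be found; that pigeonhole conflict is what makes all_distinct/1 fail.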
Mild Shock wrote:
Or a more striking example, Peter Norvig's impossible
Sudoku, which he claims took him 1439 seconds
to show that it is unsolvable:
/* Peter Norvig */
problem(9, [[_,_,_,_,_,5,_,8,_],
[_,_,_,6,_,1,_,4,3],
[_,_,_,_,_,_,_,_,_],
[_,1,_,5,_,_,_,_,_],
[_,_,_,1,_,6,_,_,_],
[3,_,_,_,_,_,_,_,5],
[5,3,_,_,_,_,_,6,1],
[_,_,_,_,_,_,_,_,4],
[_,_,_,_,_,_,_,_,_]]).
https://norvig.com/sudoku.html
whereby SWI-Prolog with all_distinct/1 does
it in a blink, even without labeling:
?- problem(9, M), time(sudoku(M)).
% 316,054 inferences, 0.016 CPU in 0.020 seconds (80% CPU, 20227456 Lips)
false.
Pretty cool!
Mild Shock wrote:
Now I have the feeling there are no difficult 9x9
Sudokus for the computer. At least not for computers
running SWI-Prolog and using CLP(FD) with the global
constraint all_distinct/1.
I was fishing among the 17-clue Sudokus, and the
hardest I could find so far was this one:
/* Gordon Royle #3668 */
problem(11,[[_,_,_,_,_,_,_,_,_],
[_,_,_,_,_,_,_,1,2],
[_,_,3,_,_,4,_,_,_],
[_,_,_,_,_,_,_,_,3],
[_,1,_,2,5,_,_,_,_],
[6,_,_,_,_,_,7,_,_],
[_,_,_,_,2,_,_,_,_],
[_,_,7,_,_,_,4,_,_],
[5,_,_,1,6,_,_,8,_]]).
But SWI-Prolog still does it in around 3 seconds.
SWI-Prolog does other 17-clue Sudokus in less than 100ms.
Are there any 17-clue Sudokus that take more time?
https://academic.timwylie.com/17CSCI4341/sudoku.pdf
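As a sanity check, one can count the givens to confirm that the puzzle above really is a 17-clue Sudoku; a small Python sketch (grid transcribed from the problem/1 fact above, with None for the anonymous variables):

```python
# Gordon Royle #3668, transcribed from the problem/1 fact above;
# None stands in for the anonymous variables (blank cells).
grid = [
    [None, None, None, None, None, None, None, None, None],
    [None, None, None, None, None, None, None, 1,    2   ],
    [None, None, 3,    None, None, 4,    None, None, None],
    [None, None, None, None, None, None, None, None, 3   ],
    [None, 1,    None, 2,    5,    None, None, None, None],
    [6,    None, None, None, None, None, 7,    None, None],
    [None, None, None, None, 2,    None, None, None, None],
    [None, None, 7,    None, None, None, 4,    None, None],
    [5,    None, None, 1,    6,    None, None, 8,    None],
]

clues = sum(cell is not None for row in grid for cell in row)
print(clues)  # 17, the proven minimum number of givens for a unique solution
```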