Hi!
There is no "correct" result for std::fmod across all IEEE 754 representations.
Why?
#include <cmath>
#include <iostream>
int main()
{
std::cout << "Result of std::fmod(1.0e42, 1000.0): " <<
std::fmod(1.0e42, 1000.0);
}
Result of std::fmod(1.0e42, 1000.0): 312
The correct answer mathematically is 0, not 312, but as IEEE 754 has a
fixed-width mantissa, x is imprecise, so there is no correct "precise"
result for std::fmod(x, y) for all x & y as Bonita Montero claims,
rendering her test case utter bollocks.
Given these FACTS please present actual problems with:
double my_fmod(double x, double y)
{
if (y == 0.0)
return x / y;
return x - std::trunc(x / y) * y;
}
/Flibble
On Sat, 01 Mar 2025 19:27:46 +0000
Mr Flibble <leigh@i42.co.uk> wrote:
Hi!
There is no "correct" result for std::fmod across all IEEE 754
representations.
Why?
#include <cmath>
#include <iostream>
int main()
{
std::cout << "Result of std::fmod(1.0e42, 1000.0): " <<
std::fmod(1.0e42, 1000.0);
}
Result of std::fmod(1.0e42, 1000.0): 312
The correct answer mathematically is 0 not 312 but as IEEE 754 has a
fixed width mantissa, x is imprecise so there is no correct "precise"
result for std::fmod(x, y) for all x & y as Bonita Montero claims
rendering her test case utter bollocks.
Given these FACTS please present actual problems with:
double my_fmod(double x, double y)
{
if (y == 0.0)
return x / y;
return x - std::trunc(x / y) * y;
}
/Flibble
For x that is the closest IEEE-754 binary64 approximation of 1e42, i.e.
for 0x1.6f578c4e0a061p+139, 312 is an exact answer.
On Sat, 1 Mar 2025 23:03:20 +0200, Michael S
<already5chosen@yahoo.com> wrote:
On Sat, 01 Mar 2025 19:27:46 +0000
Mr Flibble <leigh@i42.co.uk> wrote:
Hi!
There is no "correct" result for std::fmod across all IEEE 754
representations.
Why?
#include <cmath>
#include <iostream>
int main()
{
std::cout << "Result of std::fmod(1.0e42, 1000.0): " <<
std::fmod(1.0e42, 1000.0);
}
Result of std::fmod(1.0e42, 1000.0): 312
The correct answer mathematically is 0 not 312 but as IEEE 754 has
a fixed width mantissa, x is imprecise so there is no correct
"precise" result for std::fmod(x, y) for all x & y as Bonita
Montero claims rendering her test case utter bollocks.
Given these FACTS please present actual problems with:
double my_fmod(double x, double y)
{
if (y == 0.0)
return x / y;
return x - std::trunc(x / y) * y;
}
/Flibble
For x that is a closest IEEE-754 binary64 approximation of 1e42, i.e.
for 0x1.6f578c4e0a061p+139, 312 is an exact answer.
The 312 doesn't come directly from the mantissa, though; if it did, it
would be 0, not 312.
>>> x = 1e42
>>> x
1e+42
>>> int(x)
1000000000000000044885712678075916785549312
My point is std::fmod is almost useless
MATHEMATICALLY unless the arguments don't involve approximation, but
with IEEE 754 floating point you get what you pay for.
/Flibble
On Sat, 01 Mar 2025 21:57:13 +0000
Mr Flibble <leigh@i42.co.uk> wrote:
My point is std::fmod is almost useless
MATHEMATICALLY unless the arguments don't involve approximation, but
with IEEE 754 floating point you get what you pay for.
/Flibble
If you were saying that fmod(x, y) for big values of x/y is almost
useless physically or in engineering terms, then I'd tend to agree.
When a physicist or engineer says 'double x = bar;', where bar is a
number that can be represented exactly in binary64, then what he means
(if he cared at all to think about what he means, which is rare) is
that he believes that x is in the range (bar-ULP, bar+ULP] and he hopes
that the probability of x being within the range [bar-ULP/2, bar+ULP/2]
is much higher than 0.5. So, obviously, as long as abs(y) < ULP(x), for
a physicist fmod(x, y) is meaningless.
On Sun, 2 Mar 2025 11:07:10 +0200
Michael S <already5chosen@yahoo.com> wibbled:
On Sat, 01 Mar 2025 21:57:13 +0000
Mr Flibble <leigh@i42.co.uk> wrote:
My point is std::fmod is almost useless
MATHEMATICALLY unless the arguments don't involve approximation,
but with IEEE 754 floating point you get what you pay for.
/Flibble
If you were saying that fmod(x/y) for big values of x/y is almost
useless physically / engineerially then I'd tend to agree.
When physicist or engineer says 'double x= bar;' where bar is a
number that can be represented exactly in binary64 then what he
means (if he cared at all to think about what he means, which is
rare) is that he believes that x is in range (bar-ULP:bar+ULP] and
he hopes that probability of x being within range
[bar-ULP/2:bar+ULP/2] is much higher that 0.5. So, obviously, as
long as y is abs(y)<ULP(x) for physicist fmod(x/y) is meaningless.
The whole argument is somewhat moot anyway - for serious physics or
engineering calculations you simply don't use IEEE 754. These sorts of
applications either require everything to be done in integers (see
also fintech) or use a number library such as GMP, where the
precision is limited only by memory and the speed you require.
On Sat, 01 Mar 2025 21:57:13 +0000
Mr Flibble <leigh@i42.co.uk> wrote:
On Sat, 1 Mar 2025 23:03:20 +0200, Michael S
<already5chosen@yahoo.com> wrote:
On Sat, 01 Mar 2025 19:27:46 +0000
Mr Flibble <leigh@i42.co.uk> wrote:
Hi!
There is no "correct" result for std::fmod across all IEEE 754
representations.
Why?
#include <cmath>
#include <iostream>
int main()
{
std::cout << "Result of std::fmod(1.0e42, 1000.0): " <<
std::fmod(1.0e42, 1000.0);
}
Result of std::fmod(1.0e42, 1000.0): 312
The correct answer mathematically is 0 not 312 but as IEEE 754 has
a fixed width mantissa, x is imprecise so there is no correct
"precise" result for std::fmod(x, y) for all x & y as Bonita
Montero claims rendering her test case utter bollocks.
Given these FACTS please present actual problems with:
double my_fmod(double x, double y)
{
if (y == 0.0)
return x / y;
return x - std::trunc(x / y) * y;
}
/Flibble
For x that is a closest IEEE-754 binary64 approximation of 1e42, i.e.
for 0x1.6f578c4e0a061p+139, 312 is an exact answer.
The 312 doesn't come directly from the mantissa though, if it did it
would be 0 not 312.
I don't quite understand what you mean by that. That it does not come
from *decimal* mantissa that you wrote in the source code?
But how could it? IEEE-754 binary64 is, well, binary.
On Sun, 2 Mar 2025 09:17:54 -0000 (UTC)
Muttley@DastardlyHQ.org wrote:
The whole argument is somewhat moot anyway - for serious physics or
engineering calculations you simply don't use IEE754. These sort of
applications either require everything to be done in integer (see
also Fintec) or to use a number library such as GMP where the
precision is only limited by memory and the speed you require.
That's wrong.
The overwhelming majority of the world's scientific and engineering
calculations are performed with IEEE-754 binary64.
You can call all these people non-serious.
Me, personally, I don't think that fintech people are non-serious, but
at the same time I think that most of them are crooks.
In other fields, like signal processing and graphics, the default is
IEEE-754 binary32.
In the modern turn to AI they use even lower precision, like IEEE-754
binary16 and the so-called bfloat16, and the trend (DeepSeek) is
further down in precision.
On Sat, 01 Mar 2025 21:57:13 +0000...
Mr Flibble <leigh@i42.co.uk> wrote:
My point is std::fmod is almost useless
MATHEMATICALLY unless the arguments don't involve approximation, but
with IEEE 754 floating point you get what you pay for.
/Flibble
If you were saying that fmod(x/y) for big values of x/y is almost
useless physically / engineerially then I'd tend to agree.
When physicist or engineer says 'double x= bar;' where bar is a number
that can be represented exactly in binary64 then what he means (if he
cared at all to think about what he means, which is rare) is that he
believes that x is in range (bar-ULP:bar+ULP] and he hopes that
probability of x being within range [bar-ULP/2:bar+ULP/2] is much
higher that 0.5. So, obviously, as long as y is abs(y)<ULP(x) for
physicist fmod(x/y) is meaningless.
On 3/2/25 04:07, Michael S wrote:
On Sat, 01 Mar 2025 21:57:13 +0000...
Mr Flibble <leigh@i42.co.uk> wrote:
My point is std::fmod is almost useless
MATHEMATICALLY unless the arguments don't involve approximation,
but with IEEE 754 floating point you get what you pay for.
/Flibble
If you were saying that fmod(x/y) for big values of x/y is almost
fmod(x,y)?
useless physically / engineerially then I'd tend to agree.
When physicist or engineer says 'double x= bar;' where bar is a
number that can be represented exactly in binary64 then what he
means (if he cared at all to think about what he means, which is
rare) is that he believes that x is in range (bar-ULP:bar+ULP] and
he hopes that probability of x being within range
[bar-ULP/2:bar+ULP/2] is much higher that 0.5. So, obviously, as
long as y is abs(y)<ULP(x) for physicist fmod(x/y) is meaningless.
Actually, most numbers of relevance to science and engineering have an
uncertainty determined by how the raw measurements were made from
which the number was calculated, and that is usually much bigger
than 1 ULP. The only exceptions are mathematical constants. Despite
that fact, it is in fact quite commonplace for the uncertainties to
be small enough to make the return value of fmod() meaningful. As a
general rule, if that weren't the case, there wouldn't be any desire
to use fmod().
The correct answer mathematically is 0 not 312 but as IEEE 754 has a
fixed width mantissa, x is imprecise so there is no correct "precise"
result for std::fmod(x, y) for all x & y as Bonita Montero claims
rendering her test case utter bollocks.
Am 03.03.2025 um 16:25 schrieb Bonita Montero:
The correct answer mathematically is 0 not 312 but as IEEE 754 has
a fixed width mantissa, x is imprecise so there is no correct
"precise" result for std::fmod(x, y) for all x & y as Bonita
Montero claims rendering her test case utter bollocks.
If you print 1e42 with cout and enough setprecision() digits you get:
1.00000000000000004488571267808e+42
Am 01.03.2025 um 20:27 schrieb Mr Flibble:
The correct answer mathematically is 0 not 312 but as IEEE 754 has a
fixed width mantissa, x is imprecise so there is no correct "precise"
result for std::fmod(x, y) for all x & y as Bonita Montero claims
rendering her test case utter bollocks.
fmod() is always exact because the calculations are the same as for
written division and the remainder in written division always has the
same or fewer digits than the divisor.
You fell into the trap of the conversion between decimal numbers and
binary numbers, which is usually not exact for floating-point values.
Not at all. It has nothing to do with conversion between decimal and
binary; it has everything to do with a fixed-width mantissa which,
when shifted based on an exponent, becomes an approximation.
Am 03.03.2025 um 18:11 schrieb Mr Flibble:
Not at all. It has nothing to do with conversion between decimal and
binary; it has everything to do with a fixed width mantissa which when
shifted based on an exponent becomes an approximation.
fmod() is always 100% precise since there are no intermediate results
with precision loss. Michael understands that, you don't. You're really
a n00b.
On Mon, 3 Mar 2025 18:26:21 +0100, Bonita Montero
<Bonita.Montero@gmail.com> wrote:
Am 03.03.2025 um 18:11 schrieb Mr Flibble:
Not at all. It has nothing to do with conversion between decimal and
binary; it has everything to do with a fixed width mantissa which when
shifted based on an exponent becomes an approximation.
fmod() is always 100% precise since there are no intermediate results
with precision loss. Michael understands that, you don't. You're really
a n00b.
No, dear, I am not a n00b but you ARE unable to follow a technical discussion. Is English not your first language?
Am 03.03.2025 um 19:20 schrieb Mr Flibble:
On Mon, 3 Mar 2025 18:26:21 +0100, Bonita Montero
<Bonita.Montero@gmail.com> wrote:
Am 03.03.2025 um 18:11 schrieb Mr Flibble:
Not at all. It has nothing to do with conversion between decimal and
binary; it has everything to do with a fixed width mantissa which when
shifted based on an exponent becomes an approximation.
fmod() is always 100% precise since there are no intermediate results
with precision loss. Michael understands that, you don't. You're really
a n00b.
No, dear, I am not a n00b but you ARE unable to follow a technical
discussion. Is English not your first language?
You're a n00b because you don't understand why fmod() is exact
for finite numbers. I've given the source, so this could help you
to understand that.
On Mon, 3 Mar 2025 19:26:25 +0100, Bonita Montero
<Bonita.Montero@gmail.com> wrote:
Am 03.03.2025 um 19:20 schrieb Mr Flibble:
On Mon, 3 Mar 2025 18:26:21 +0100, Bonita Montero
<Bonita.Montero@gmail.com> wrote:
Am 03.03.2025 um 18:11 schrieb Mr Flibble:
Not at all. It has nothing to do with conversion between decimal and
binary; it has everything to do with a fixed width mantissa which when
shifted based on an exponent becomes an approximation.
fmod() is always 100% precise since there are no intermediate results
with precision loss. Michael understands that, you don't. You're really
a n00b.
No, dear, I am not a n00b but you ARE unable to follow a technical
discussion. Is English not your first language?
You're a n00b because you don't understand why fmod() is exact
for finite numbers. I've given the source so this could help you
to understand that.
Thanks for confirming that you are unable to follow a technical
discussion, n00b.
Am 03.03.2025 um 19:33 schrieb Mr Flibble:
On Mon, 3 Mar 2025 19:26:25 +0100, Bonita Montero
<Bonita.Montero@gmail.com> wrote:
Am 03.03.2025 um 19:20 schrieb Mr Flibble:
On Mon, 3 Mar 2025 18:26:21 +0100, Bonita Montero
<Bonita.Montero@gmail.com> wrote:
Am 03.03.2025 um 18:11 schrieb Mr Flibble:
Not at all. It has nothing to do with conversion between decimal and
binary; it has everything to do with a fixed width mantissa which when
shifted based on an exponent becomes an approximation.
fmod() is always 100% precise since there are no intermediate results
with precision loss. Michael understands that, you don't. You're really
a n00b.
No, dear, I am not a n00b but you ARE unable to follow a technical
discussion. Is English not your first language?
You're a n00b because you don't understand why fmod() is exact
for finite numbers. I've given the source so this could help you
to understand that.
Thanks for confirming that you are unable to follow a technical
discussion, n00b.
If you were able to do that you would have inspected my code and you
would have come to the conclusion that there is no precision loss for
finite numbers. That's because the process is like a written division,
where each intermediate result is smaller than the divisor, so that
there couldn't be any precision loss.
Your solution was silly.
The problem is that you cannot follow a technical discussion on
Usenet: if you read further up this thread at my discussion with
Michael S you will see that I am not talking about precision loss
within fmod but with the arguments passed to fmod; you will also
see the problems with your test code.
Am 03.03.2025 um 19:46 schrieb Mr Flibble:
The problem is that you cannot follow a technical discussion on
Usenet: if you read further up this thread at my discussion with
Michael S you will see that I am not talking about precision loss
within fmod but with the arguments passed to fmod; you will also
see the problems with your test code.
You tried to prove that fmod() can't be precise and walked into
a decimal-to-floating-point conversion error issue - n00b !
On Mon, 3 Mar 2025 19:48:31 +0100, Bonita Montero
<Bonita.Montero@gmail.com> wrote:
Am 03.03.2025 um 19:46 schrieb Mr Flibble:
The problem is that you cannot follow a technical discussion on
Usenet: if you read further up this thread at my discussion with
Michael S you will see that I am not talking about precision loss
within fmod but with the arguments passed to fmod; you will also
see the problems with your test code.
You tried to prove that fmod() can't be precise and trapped into
a decimal to floating point conversion error issue - n00b !
No I didn't; the problem here is that you are the n00b as you cannot
follow a technical discussion on Usenet.
Am 03.03.2025 um 19:50 schrieb Mr Flibble:
On Mon, 3 Mar 2025 19:48:31 +0100, Bonita Montero
<Bonita.Montero@gmail.com> wrote:
Am 03.03.2025 um 19:46 schrieb Mr Flibble:
The problem is that you cannot follow a technical discussion on
Usenet: if you read further up this thread at my discussion with
Michael S you will see that I am not talking about precision loss
within fmod but with the arguments passed to fmod; you will also
see the problems with your test code.
You tried to prove that fmod() can't be precise and trapped into
a decimal to floating point conversion error issue - n00b !
No I didn't; the problem here is that you are the n00b as you cannot
follow a technical discussion on Usenet.
The issue you mention has nothing to do with fmod() specifically,
so I'm asking myself why you want to relate it to my implementation.
fmod() itself is always precise.
Something that is both precise and wrong is still wrong; fmod() is
totally useless mathematically for a subset of possible arguments to
it given what it is supposed to do.
Am 03.03.2025 um 19:58 schrieb Mr Flibble:
Something that is both precise and wrong is still wrong; fmod() is
totally useless mathematically for a subset of possible arguments to
it given what it is supposed to do.
If fmod() were useless, fmod() and fmod()-like functions wouldn't
be part of any language's standard library.
The basic point that you are completely missing is that the result
of fmod() only makes sense for a subset of possible IEEE 754
representations passed to it as arguments; your test code did not take
this into account so was total bollocks as far as assessing the quality
of implementation of something fmod()-like; others have pointed this
out to you -- not just me.
Am 03.03.2025 um 20:09 schrieb Mr Flibble:
The basic point that you are completely missing is that the result
of fmod() only makes sense for a subset of possible IEEE 754
representations passed to it as arguments; ...
You don't know that since you don't know all applications.
your test code did not take this into account so was total bollocks as
far as assessing the quality of implementation of something fmod()-like;
others have pointed this out to you -- not just me.
Your implementation was unusable for sure since it has about 2% out of
range results, even for small exponent differences.
It can only do that correctly (mathematically speaking) for a small
subset of arguments due to the nature of IEEE 754 floating point, and
any QoI test code should reflect that.
Maybe std::fmod requirements should be changed so that the result
is implementation defined based on the exponents of the arguments.
Am 03.03.2025 um 20:33 schrieb Mr Flibble:
It can only do that correctly (mathematically speaking) for a small
subset of arguments due to nature of IEEE 754 floating point and any
QoI test code should reflect that.
You're talking about the constraints of floating-point numbers in
general and not about the constraints of fmod().
Maybe std::fmod requirements should be changed so that the result
is implementation defined based on the exponents of the arguments.
Not necessary, fmod() is always precise.
On Mon, 3 Mar 2025 19:53:07 +0100, Bonita Montero
<Bonita.Montero@gmail.com> wrote:
Am 03.03.2025 um 19:50 schrieb Mr Flibble:
On Mon, 3 Mar 2025 19:48:31 +0100, Bonita Montero
<Bonita.Montero@gmail.com> wrote:
Am 03.03.2025 um 19:46 schrieb Mr Flibble:
The problem is that you cannot follow a technical discussion on
Usenet: if you read further up this thread at my discussion with
Michael S you will see that I am not talking about precision loss
within fmod but with the arguments passed to fmod; you will also
see the problems with your test code.
You tried to prove that fmod() can't be precise and trapped into
a decimal to floating point conversion error issue - n00b !
No I didn't; the problem here is that you are the n00b as you cannot
follow a technical discussion on Usenet.
The issue you mention has nothing to do with fmod() in special,
so I'm asking myself why you want to relate that to my implementation.
fmod() itself is always precise.
Something that is both precise and wrong is still wrong; fmod() is
totally useless mathematically for a subset of possible arguments to
it given what it is supposed to do.
Precisely wrong is still wrong.
Am 03.03.2025 um 20:37 schrieb Mr Flibble:
Precisely wrong is still wrong.
The a and b values could come from more than half of the exponent
range, up to an exponent of 0x433, which is the last exponent with no
missing digits before the decimal point. That's not a "small subset".