• There is no "correct" result for std::fmod across all IEEE 754 representations

    From Mr Flibble@leigh@i42.co.uk to comp.lang.c++ on Sat Mar 1 19:27:46 2025
    From Newsgroup: comp.lang.c++

    Hi!

    There is no "correct" result for std::fmod across all IEEE 754
    representations.

    Why?

    #include <cmath>
    #include <iostream>

    int main()
    {
        std::cout << "Result of std::fmod(1.0e42, 1000.0): "
                  << std::fmod(1.0e42, 1000.0);
    }

    Result of std::fmod(1.0e42, 1000.0): 312

    The correct answer mathematically is 0, not 312, but as IEEE 754 has a
    fixed-width mantissa, x is imprecise, so there is no correct "precise"
    result for std::fmod(x, y) for all x and y as Bonita Montero claims,
    rendering her test case utter bollocks.

    Given these FACTS please present actual problems with:

    double my_fmod(double x, double y)
    {
        if (y == 0.0)
            return x / y;
        return x - std::trunc(x / y) * y;
    }

    /Flibble
    --- Synchronet 3.20c-Linux NewsLink 1.2
  • From Michael S@already5chosen@yahoo.com to comp.lang.c++ on Sat Mar 1 23:03:20 2025
    From Newsgroup: comp.lang.c++

    On Sat, 01 Mar 2025 19:27:46 +0000
    Mr Flibble <leigh@i42.co.uk> wrote:

    Hi!

    There is no "correct" result for std::fmod across all IEEE 754 representations.

    Why?

    #include <cmath>
    #include <iostream>

    int main()
    {
        std::cout << "Result of std::fmod(1.0e42, 1000.0): "
                  << std::fmod(1.0e42, 1000.0);
    }

    Result of std::fmod(1.0e42, 1000.0): 312

    The correct answer mathematically is 0, not 312, but as IEEE 754 has a
    fixed-width mantissa, x is imprecise, so there is no correct "precise"
    result for std::fmod(x, y) for all x and y as Bonita Montero claims,
    rendering her test case utter bollocks.

    Given these FACTS please present actual problems with:

    double my_fmod(double x, double y)
    {
        if (y == 0.0)
            return x / y;
        return x - std::trunc(x / y) * y;
    }

    /Flibble


    For x that is the closest IEEE-754 binary64 approximation of 1e42,
    i.e. for 0x1.6f578c4e0a061p+139, 312 is the exact answer.



    --- Synchronet 3.20c-Linux NewsLink 1.2
  • From Mr Flibble@leigh@i42.co.uk to comp.lang.c++ on Sat Mar 1 21:57:13 2025
    From Newsgroup: comp.lang.c++

    On Sat, 1 Mar 2025 23:03:20 +0200, Michael S
    <already5chosen@yahoo.com> wrote:

    On Sat, 01 Mar 2025 19:27:46 +0000
    Mr Flibble <leigh@i42.co.uk> wrote:

    Hi!

    There is no "correct" result for std::fmod across all IEEE 754
    representations.

    Why?

    #include <cmath>
    #include <iostream>

    int main()
    {
        std::cout << "Result of std::fmod(1.0e42, 1000.0): "
                  << std::fmod(1.0e42, 1000.0);
    }

    Result of std::fmod(1.0e42, 1000.0): 312

    The correct answer mathematically is 0, not 312, but as IEEE 754 has a
    fixed-width mantissa, x is imprecise, so there is no correct "precise"
    result for std::fmod(x, y) for all x and y as Bonita Montero claims,
    rendering her test case utter bollocks.

    Given these FACTS please present actual problems with:

    double my_fmod(double x, double y)
    {
        if (y == 0.0)
            return x / y;
        return x - std::trunc(x / y) * y;
    }

    /Flibble


    For x that is the closest IEEE-754 binary64 approximation of 1e42,
    i.e. for 0x1.6f578c4e0a061p+139, 312 is the exact answer.

    The 312 doesn't come directly from the mantissa, though; if it did, it
    would be 0, not 312. My point is that std::fmod is almost useless
    MATHEMATICALLY unless the arguments involve no approximation, but
    with IEEE 754 floating point you get what you pay for.

    /Flibble
    --- Synchronet 3.20c-Linux NewsLink 1.2
  • From Michael S@already5chosen@yahoo.com to comp.lang.c++ on Sun Mar 2 11:07:10 2025
    From Newsgroup: comp.lang.c++

    On Sat, 01 Mar 2025 21:57:13 +0000
    Mr Flibble <leigh@i42.co.uk> wrote:

    On Sat, 1 Mar 2025 23:03:20 +0200, Michael S
    <already5chosen@yahoo.com> wrote:

    On Sat, 01 Mar 2025 19:27:46 +0000
    Mr Flibble <leigh@i42.co.uk> wrote:

    Hi!

    There is no "correct" result for std::fmod across all IEEE 754
    representations.

    Why?

    #include <cmath>
    #include <iostream>

    int main()
    {
        std::cout << "Result of std::fmod(1.0e42, 1000.0): "
                  << std::fmod(1.0e42, 1000.0);
    }

    Result of std::fmod(1.0e42, 1000.0): 312

    The correct answer mathematically is 0, not 312, but as IEEE 754 has
    a fixed-width mantissa, x is imprecise, so there is no correct
    "precise" result for std::fmod(x, y) for all x and y as Bonita
    Montero claims, rendering her test case utter bollocks.

    Given these FACTS please present actual problems with:

    double my_fmod(double x, double y)
    {
        if (y == 0.0)
            return x / y;
        return x - std::trunc(x / y) * y;
    }

    /Flibble


    For x that is the closest IEEE-754 binary64 approximation of 1e42,
    i.e. for 0x1.6f578c4e0a061p+139, 312 is the exact answer.

    The 312 doesn't come directly from the mantissa, though; if it did, it
    would be 0, not 312.

    I don't quite understand what you mean by that. That it does not come
    from the *decimal* mantissa that you wrote in the source code?
    But how could it? IEEE-754 binary64 is, well, binary.

    Python:
    >>> x = 1e42
    >>> x
    1e+42
    >>> int(x)
    1000000000000000044885712678075916785549312


    My point is that std::fmod is almost useless
    MATHEMATICALLY unless the arguments involve no approximation, but
    with IEEE 754 floating point you get what you pay for.

    /Flibble

    If you were saying that fmod(x/y) for big values of x/y is almost
    useless physically / from an engineering standpoint, then I'd tend to
    agree. When a physicist or engineer says 'double x = bar;' where bar
    is a number that can be represented exactly in binary64, what he means
    (if he cared at all to think about what he means, which is rare) is
    that he believes that x is in the range (bar-ULP:bar+ULP] and he hopes
    that the probability of x being within the range [bar-ULP/2:bar+ULP/2]
    is much higher than 0.5. So, obviously, as long as abs(y) < ULP(x),
    fmod(x/y) is meaningless for a physicist.

    But the question of what is meaningful mathematically is different.
    The answer depends on the branch of mathematics we are interested in.
    Mostly I tend to agree with you that it should depend on whether the
    arguments involve approximation or not. However, I know that I cannot
    predict what a mathematician who specializes in, for example, number
    theory might find meaningful or just interesting.




    --- Synchronet 3.20c-Linux NewsLink 1.2
  • From Muttley@Muttley@DastardlyHQ.org to comp.lang.c++ on Sun Mar 2 09:17:54 2025
    From Newsgroup: comp.lang.c++

    On Sun, 2 Mar 2025 11:07:10 +0200
    Michael S <already5chosen@yahoo.com> wibbled:
    On Sat, 01 Mar 2025 21:57:13 +0000
    Mr Flibble <leigh@i42.co.uk> wrote:
    My point is that std::fmod is almost useless
    MATHEMATICALLY unless the arguments involve no approximation, but
    with IEEE 754 floating point you get what you pay for.

    /Flibble

    If you were saying that fmod(x/y) for big values of x/y is almost
    useless physically / from an engineering standpoint, then I'd tend to
    agree. When a physicist or engineer says 'double x = bar;' where bar
    is a number that can be represented exactly in binary64, what he means
    (if he cared at all to think about what he means, which is rare) is
    that he believes that x is in the range (bar-ULP:bar+ULP] and he hopes
    that the probability of x being within the range [bar-ULP/2:bar+ULP/2]
    is much higher than 0.5. So, obviously, as long as abs(y) < ULP(x),
    fmod(x/y) is meaningless for a physicist.

    The whole argument is somewhat moot anyway - for serious physics or
    engineering calculations you simply don't use IEEE 754. These sorts of
    applications either require everything to be done in integers (see
    also Fintech) or the use of a number library such as GMP, where the
    precision is limited only by memory and the speed you require.

    --- Synchronet 3.20c-Linux NewsLink 1.2
  • From Michael S@already5chosen@yahoo.com to comp.lang.c++ on Sun Mar 2 16:03:30 2025
    From Newsgroup: comp.lang.c++

    On Sun, 2 Mar 2025 09:17:54 -0000 (UTC)
    Muttley@DastardlyHQ.org wrote:

    On Sun, 2 Mar 2025 11:07:10 +0200
    Michael S <already5chosen@yahoo.com> wibbled:
    On Sat, 01 Mar 2025 21:57:13 +0000
    Mr Flibble <leigh@i42.co.uk> wrote:
    My point is that std::fmod is almost useless
    MATHEMATICALLY unless the arguments involve no approximation,
    but with IEEE 754 floating point you get what you pay for.

    /Flibble

    If you were saying that fmod(x/y) for big values of x/y is almost
    useless physically / from an engineering standpoint, then I'd tend to
    agree. When a physicist or engineer says 'double x = bar;' where bar
    is a number that can be represented exactly in binary64, what he means
    (if he cared at all to think about what he means, which is rare) is
    that he believes that x is in the range (bar-ULP:bar+ULP] and he hopes
    that the probability of x being within the range [bar-ULP/2:bar+ULP/2]
    is much higher than 0.5. So, obviously, as long as abs(y) < ULP(x),
    fmod(x/y) is meaningless for a physicist.

    The whole argument is somewhat moot anyway - for serious physics or
    engineering calculations you simply don't use IEEE 754. These sorts of
    applications either require everything to be done in integers (see
    also Fintech) or the use of a number library such as GMP, where the
    precision is limited only by memory and the speed you require.


    That's wrong.
    The overwhelming majority of the world's scientific and engineering
    calculations are performed with IEEE-754 binary64.
    You can call all these people non-serious.
    Me, personally, I don't think that Fintech people are non-serious, but
    at the same time I think that most of them are crooks.

    In other fields, like signal processing and graphics, the default is
    IEEE-754 binary32.
    In the modern turn towards AI they use even lower precision, like
    IEEE-754 binary16 and the so-called bfloat16. And the trend (DeepSeek)
    is further down in precision.

    --- Synchronet 3.20c-Linux NewsLink 1.2
  • From Mr Flibble@leigh@i42.co.uk to comp.lang.c++ on Sun Mar 2 15:24:52 2025
    From Newsgroup: comp.lang.c++

    On Sun, 2 Mar 2025 11:07:10 +0200, Michael S
    <already5chosen@yahoo.com> wrote:

    On Sat, 01 Mar 2025 21:57:13 +0000
    Mr Flibble <leigh@i42.co.uk> wrote:

    On Sat, 1 Mar 2025 23:03:20 +0200, Michael S
    <already5chosen@yahoo.com> wrote:

    On Sat, 01 Mar 2025 19:27:46 +0000
    Mr Flibble <leigh@i42.co.uk> wrote:

    Hi!

    There is no "correct" result for std::fmod across all IEEE 754
    representations.

    Why?

    #include <cmath>
    #include <iostream>

    int main()
    {
        std::cout << "Result of std::fmod(1.0e42, 1000.0): "
                  << std::fmod(1.0e42, 1000.0);
    }

    Result of std::fmod(1.0e42, 1000.0): 312

    The correct answer mathematically is 0, not 312, but as IEEE 754 has
    a fixed-width mantissa, x is imprecise, so there is no correct
    "precise" result for std::fmod(x, y) for all x and y as Bonita
    Montero claims, rendering her test case utter bollocks.

    Given these FACTS please present actual problems with:

    double my_fmod(double x, double y)
    {
        if (y == 0.0)
            return x / y;
        return x - std::trunc(x / y) * y;
    }

    /Flibble


    For x that is the closest IEEE-754 binary64 approximation of 1e42,
    i.e. for 0x1.6f578c4e0a061p+139, 312 is the exact answer.

    The 312 doesn't come directly from the mantissa, though; if it did, it
    would be 0, not 312.

    I don't quite understand what you mean by that. That it does not come
    from the *decimal* mantissa that you wrote in the source code?
    But how could it? IEEE-754 binary64 is, well, binary.

    No, I mean it does not come from the *binary* mantissa: the mantissa
    is *effectively* bit-shifted according to the value of the exponent,
    storing zeros in the exposed bits, and it is these exposed zero bits
    that contribute to the approximation I was referring to.

    /Flibble
    --- Synchronet 3.20c-Linux NewsLink 1.2
  • From Muttley@Muttley@dastardlyhq.com to comp.lang.c++ on Sun Mar 2 16:56:11 2025
    From Newsgroup: comp.lang.c++

    On Sun, 2 Mar 2025 16:03:30 +0200
    Michael S <already5chosen@yahoo.com> gabbled:
    On Sun, 2 Mar 2025 09:17:54 -0000 (UTC)
    Muttley@DastardlyHQ.org wrote:
    The whole argument is somewhat moot anyway - for serious physics or
    engineering calculations you simply don't use IEEE 754. These sorts of
    applications either require everything to be done in integers (see
    also Fintech) or the use of a number library such as GMP, where the
    precision is limited only by memory and the speed you require.


    That's wrong.
    The overwhelming majority of the world's scientific and engineering
    calculations are performed with IEEE-754 binary64.

    I guess it depends on how you define scientific and engineering. For
    cars, sure, who cares if the lane keeping wanders by a few cm; deep
    space probes, on the other hand, could end up millions of miles off
    course if there's some rounding error.

    You can call all these people non-serious.
    Me, personally, I don't think that Fintech people are non-serious, but
    at the same time I think that most of them are crooks.

    Feel free to close your bank accounts and keep bundles of cash under your mattress.

    In other fields, like signal processing and graphics, the default is
    IEEE-754 binary32.

    Why would graphics need very accurate precision? Quite the opposite,
    especially if you want a good frame rate. No one cares if Lara Croft's
    hair is a few pixels out of place. Signal processing would depend on
    what it's used for.

    In the modern turn towards AI they use even lower precision, like
    IEEE-754 binary16 and the so-called bfloat16. And the trend (DeepSeek)
    is further down in precision.

    Given the signal-to-noise ratio in AI input data, it would probably
    make little difference if they just used integer arithmetic, frankly.

    --- Synchronet 3.20c-Linux NewsLink 1.2
  • From James Kuyper@jameskuyper@alumni.caltech.edu to comp.lang.c++ on Sun Mar 2 17:37:48 2025
    From Newsgroup: comp.lang.c++

    On 3/2/25 04:07, Michael S wrote:
    On Sat, 01 Mar 2025 21:57:13 +0000
    Mr Flibble <leigh@i42.co.uk> wrote:
    ...
    My point is that std::fmod is almost useless
    MATHEMATICALLY unless the arguments involve no approximation, but
    with IEEE 754 floating point you get what you pay for.

    /Flibble

    If you were saying that fmod(x/y) for big values of x/y is almost

    fmod(x,y)?

    useless physically / from an engineering standpoint, then I'd tend to
    agree. When a physicist or engineer says 'double x = bar;' where bar
    is a number that can be represented exactly in binary64, what he means
    (if he cared at all to think about what he means, which is rare) is
    that he believes that x is in the range (bar-ULP:bar+ULP] and he hopes
    that the probability of x being within the range [bar-ULP/2:bar+ULP/2]
    is much higher than 0.5. So, obviously, as long as abs(y) < ULP(x),
    fmod(x/y) is meaningless for a physicist.

    Actually, most numbers of relevance to science and engineering have an
    uncertainty which is determined by how the raw measurements were made
    from which the number was calculated, and which is usually much bigger
    than 1 ULP. The only exceptions are mathematical constants. Despite
    that fact, it is quite commonplace for the uncertainties to be small
    enough to make the return value of fmod() meaningful. As a general
    rule, if that weren't the case, there wouldn't be any desire to use
    fmod().

    --- Synchronet 3.20c-Linux NewsLink 1.2
  • From Michael S@already5chosen@yahoo.com to comp.lang.c++ on Mon Mar 3 13:20:24 2025
    From Newsgroup: comp.lang.c++

    On Sun, 2 Mar 2025 17:37:48 -0500
    James Kuyper <jameskuyper@alumni.caltech.edu> wrote:

    On 3/2/25 04:07, Michael S wrote:
    On Sat, 01 Mar 2025 21:57:13 +0000
    Mr Flibble <leigh@i42.co.uk> wrote:
    ...
    My point is that std::fmod is almost useless
    MATHEMATICALLY unless the arguments involve no approximation,
    but with IEEE 754 floating point you get what you pay for.

    /Flibble

    If you were saying that fmod(x/y) for big values of x/y is almost

    fmod(x,y)?

    Yes, sorry.


    useless physically / from an engineering standpoint, then I'd tend to
    agree. When a physicist or engineer says 'double x = bar;' where bar
    is a number that can be represented exactly in binary64, what he means
    (if he cared at all to think about what he means, which is rare) is
    that he believes that x is in the range (bar-ULP:bar+ULP] and he hopes
    that the probability of x being within the range [bar-ULP/2:bar+ULP/2]
    is much higher than 0.5. So, obviously, as long as abs(y) < ULP(x),
    fmod(x/y) is meaningless for a physicist.

    Actually, most numbers of relevance to science and engineering have an
    uncertainty which is determined by how the raw measurements were made
    from which the number was calculated, and which is usually much bigger
    than 1 ULP. The only exceptions are mathematical constants. Despite
    that fact, it is quite commonplace for the uncertainties to be small
    enough to make the return value of fmod() meaningful. As a general
    rule, if that weren't the case, there wouldn't be any desire to use
    fmod().


    My point was that for a physicist fmod(x,y) is meaningless as long as
    abs(y) < ULP(x).
    I never said that when abs(y) > ULP(x), fmod(x,y) is always
    meaningful, just that it is a necessary condition.

    Bonita's test suite consists of 50% trivial cases (abs(y) > abs(x)),
    47.7% meaningless cases, and only 2.3% potentially meaningful cases.



    --- Synchronet 3.20c-Linux NewsLink 1.2
  • From Bonita Montero@Bonita.Montero@gmail.com to comp.lang.c++ on Mon Mar 3 16:25:31 2025
    From Newsgroup: comp.lang.c++

    Am 01.03.2025 um 20:27 schrieb Mr Flibble:

    The correct answer mathematically is 0, not 312, but as IEEE 754 has a
    fixed-width mantissa, x is imprecise, so there is no correct "precise"
    result for std::fmod(x, y) for all x and y as Bonita Montero claims,
    rendering her test case utter bollocks.

    fmod() is always exact because the calculation is the same as for
    written division, and the remainder in written division always has the
    same or fewer digits than the divisor.
    You fell into the trap of the conversion between decimal numbers and
    binary numbers, which is usually not exact for floating-point values.
    --- Synchronet 3.20c-Linux NewsLink 1.2
  • From Bonita Montero@Bonita.Montero@gmail.com to comp.lang.c++ on Mon Mar 3 16:30:26 2025
    From Newsgroup: comp.lang.c++

    Am 03.03.2025 um 16:25 schrieb Bonita Montero:

    The correct answer mathematically is 0, not 312, but as IEEE 754 has a
    fixed-width mantissa, x is imprecise, so there is no correct "precise"
    result for std::fmod(x, y) for all x and y as Bonita Montero claims,
    rendering her test case utter bollocks.

    If you print 1e42 with cout and enough setprecision() digits you get:

    1.00000000000000004488571267808e+42


    --- Synchronet 3.20c-Linux NewsLink 1.2
  • From Michael S@already5chosen@yahoo.com to comp.lang.c++ on Mon Mar 3 18:54:29 2025
    From Newsgroup: comp.lang.c++

    On Mon, 3 Mar 2025 16:30:26 +0100
    Bonita Montero <Bonita.Montero@gmail.com> wrote:

    Am 03.03.2025 um 16:25 schrieb Bonita Montero:

    The correct answer mathematically is 0, not 312, but as IEEE 754 has
    a fixed-width mantissa, x is imprecise, so there is no correct
    "precise" result for std::fmod(x, y) for all x and y as Bonita
    Montero claims, rendering her test case utter bollocks.

    If you print 1e42 with cout and enough setprecision() digits you get:

    1.00000000000000004488571267808e+42



    Which is still not enlightening. The full answer is
    1000000000000000044885712678075916785549312
    or, if you want,
    1.000000000000000044885712678075916785549312e42

    Which, of course, is very easy to see if you use a working formatting
    library instead of the misdesigned C++ crap:

    printf("%.42e\n", 1e42);

    Both five times less code and a five times better result.


    --- Synchronet 3.20c-Linux NewsLink 1.2
  • From Mr Flibble@leigh@i42.co.uk to comp.lang.c++ on Mon Mar 3 17:11:39 2025
    From Newsgroup: comp.lang.c++

    On Mon, 3 Mar 2025 16:25:31 +0100, Bonita Montero
    <Bonita.Montero@gmail.com> wrote:

    Am 01.03.2025 um 20:27 schrieb Mr Flibble:

    The correct answer mathematically is 0, not 312, but as IEEE 754 has a
    fixed-width mantissa, x is imprecise, so there is no correct "precise"
    result for std::fmod(x, y) for all x and y as Bonita Montero claims,
    rendering her test case utter bollocks.

    fmod() is always exact because the calculation is the same as for
    written division, and the remainder in written division always has the
    same or fewer digits than the divisor.
    You fell into the trap of the conversion between decimal numbers and
    binary numbers, which is usually not exact for floating-point values.

    Not at all. It has nothing to do with conversion between decimal and
    binary; it has everything to do with a fixed-width mantissa which,
    when shifted based on an exponent, becomes an approximation.

    /Flibble
    --- Synchronet 3.20c-Linux NewsLink 1.2
  • From Bonita Montero@Bonita.Montero@gmail.com to comp.lang.c++ on Mon Mar 3 18:26:21 2025
    From Newsgroup: comp.lang.c++

    Am 03.03.2025 um 18:11 schrieb Mr Flibble:

    Not at all. It has nothing to do with conversion between decimal and
    binary; it has everything to do with a fixed-width mantissa which,
    when shifted based on an exponent, becomes an approximation.

    fmod() is always 100% precise since there are no intermediate results
    with precision loss. Michael understands that, you don't. You're really
    a n00b.

    --- Synchronet 3.20c-Linux NewsLink 1.2
  • From Mr Flibble@leigh@i42.co.uk to comp.lang.c++ on Mon Mar 3 18:20:47 2025
    From Newsgroup: comp.lang.c++

    On Mon, 3 Mar 2025 18:26:21 +0100, Bonita Montero
    <Bonita.Montero@gmail.com> wrote:

    Am 03.03.2025 um 18:11 schrieb Mr Flibble:

    Not at all. It has nothing to do with conversion between decimal and
    binary; it has everything to do with a fixed-width mantissa which,
    when shifted based on an exponent, becomes an approximation.

    fmod() is always 100% precise since there are no intermediate results
    with precision loss. Michael understands that, you don't. You're really
    a n00b.

    No, dear, I am not a n00b but you ARE unable to follow a technical
    discussion. Is English not your first language?

    /Flibble
    --- Synchronet 3.20c-Linux NewsLink 1.2
  • From Bonita Montero@Bonita.Montero@gmail.com to comp.lang.c++ on Mon Mar 3 19:26:25 2025
    From Newsgroup: comp.lang.c++

    Am 03.03.2025 um 19:20 schrieb Mr Flibble:
    On Mon, 3 Mar 2025 18:26:21 +0100, Bonita Montero
    <Bonita.Montero@gmail.com> wrote:

    Am 03.03.2025 um 18:11 schrieb Mr Flibble:

    Not at all. It has nothing to do with conversion between decimal and
    binary; it has everything to do with a fixed-width mantissa which,
    when shifted based on an exponent, becomes an approximation.

    fmod() is always 100% precise since there are no intermediate results
    with precision loss. Michael understands that, you don't. You're really
    a n00b.

    No, dear, I am not a n00b but you ARE unable to follow a technical discussion. Is English not your first language?

    You're a n00b because you don't understand why fmod() is exact
    for finite numbers. I've given the source, so it could help you
    to understand that.

    --- Synchronet 3.20c-Linux NewsLink 1.2
  • From Mr Flibble@leigh@i42.co.uk to comp.lang.c++ on Mon Mar 3 18:33:18 2025
    From Newsgroup: comp.lang.c++

    On Mon, 3 Mar 2025 19:26:25 +0100, Bonita Montero
    <Bonita.Montero@gmail.com> wrote:

    Am 03.03.2025 um 19:20 schrieb Mr Flibble:
    On Mon, 3 Mar 2025 18:26:21 +0100, Bonita Montero
    <Bonita.Montero@gmail.com> wrote:

    Am 03.03.2025 um 18:11 schrieb Mr Flibble:

    Not at all. It has nothing to do with conversion between decimal and
    binary; it has everything to do with a fixed-width mantissa which,
    when shifted based on an exponent, becomes an approximation.

    fmod() is always 100% precise since there are no intermediate results
    with precision loss. Michael understands that, you don't. You're really
    a n00b.

    No, dear, I am not a n00b but you ARE unable to follow a technical
    discussion. Is English not your first language?

    You're a n00b because you don't understand why fmod() is exact
    for finite numbers. I've given the source, so it could help you
    to understand that.

    Thanks for confirming that you are unable to follow a technical
    discussion, n00b.

    /Flibble
    --- Synchronet 3.20c-Linux NewsLink 1.2
  • From Bonita Montero@Bonita.Montero@gmail.com to comp.lang.c++ on Mon Mar 3 19:37:54 2025
    From Newsgroup: comp.lang.c++

    Am 03.03.2025 um 19:33 schrieb Mr Flibble:
    On Mon, 3 Mar 2025 19:26:25 +0100, Bonita Montero
    <Bonita.Montero@gmail.com> wrote:

    Am 03.03.2025 um 19:20 schrieb Mr Flibble:
    On Mon, 3 Mar 2025 18:26:21 +0100, Bonita Montero
    <Bonita.Montero@gmail.com> wrote:

    Am 03.03.2025 um 18:11 schrieb Mr Flibble:

    Not at all. It has nothing to do with conversion between decimal and
    binary; it has everything to do with a fixed-width mantissa which,
    when shifted based on an exponent, becomes an approximation.

    fmod() is always 100% precise since there are no intermediate results
    with precision loss. Michael understands that, you don't. You're
    really a n00b.

    No, dear, I am not a n00b but you ARE unable to follow a technical
    discussion. Is English not your first language?

    You're a n00b because you don't understand why fmod() is exact
    for finite numbers. I've given the source, so it could help you
    to understand that.

    Thanks for confirming that you are unable to follow a technical
    discussion, n00b.

    If you were able to do that you would have inspected my code and you
    would have come to the conclusion that there is no precision loss for
    finite numbers. That's because the process is like a written division
    where each intermediate result is smaller than the divisor, so that
    there couldn't be any precision loss.
    Your solution was silly.

    --- Synchronet 3.20c-Linux NewsLink 1.2
  • From Mr Flibble@leigh@i42.co.uk to comp.lang.c++ on Mon Mar 3 18:46:37 2025
    From Newsgroup: comp.lang.c++

    On Mon, 3 Mar 2025 19:37:54 +0100, Bonita Montero
    <Bonita.Montero@gmail.com> wrote:

    Am 03.03.2025 um 19:33 schrieb Mr Flibble:
    On Mon, 3 Mar 2025 19:26:25 +0100, Bonita Montero
    <Bonita.Montero@gmail.com> wrote:

    Am 03.03.2025 um 19:20 schrieb Mr Flibble:
    On Mon, 3 Mar 2025 18:26:21 +0100, Bonita Montero
    <Bonita.Montero@gmail.com> wrote:

    Am 03.03.2025 um 18:11 schrieb Mr Flibble:

    Not at all. It has nothing to do with conversion between decimal and
    binary; it has everything to do with a fixed-width mantissa which,
    when shifted based on an exponent, becomes an approximation.

    fmod() is always 100% precise since there are no intermediate results
    with precision loss. Michael understands that, you don't. You're
    really a n00b.

    No, dear, I am not a n00b but you ARE unable to follow a technical
    discussion. Is English not your first language?

    You're a n00b because you don't understand why fmod() is exact
    for finite numbers. I've given the source, so it could help you
    to understand that.

    Thanks for confirming that you are unable to follow a technical
    discussion, n00b.

    If you were able to do that you would have inspected my code and you
    would have come to the conclusion that there is no precision loss for
    finite numbers. That's because the process is like a written division
    where each intermediate result is smaller than the divisor, so that
    there couldn't be any precision loss.
    Your solution was silly.

    The problem is that you cannot follow a technical discussion on
    Usenet: if you read further up this thread, at my discussion with
    Michael S, you will see that I am not talking about precision loss
    within fmod but in the arguments passed to fmod; you will also see
    the problems with your test code.

    /Flibble
    --- Synchronet 3.20c-Linux NewsLink 1.2
  • From Bonita Montero@Bonita.Montero@gmail.com to comp.lang.c++ on Mon Mar 3 19:48:31 2025
    From Newsgroup: comp.lang.c++

    Am 03.03.2025 um 19:46 schrieb Mr Flibble:

    The problem is that you cannot follow a technical discussion on
    Usenet: if you read further up this thread at my discussion with
    Michael S you will see that I am not talking about precision loss
    within fmod but with the arguments passed to fmod; you will also
    see the problems with your test code.

    You tried to prove that fmod() can't be precise and fell into
    a decimal-to-floating-point conversion error issue - n00b!
    --- Synchronet 3.20c-Linux NewsLink 1.2
  • From Mr Flibble@leigh@i42.co.uk to comp.lang.c++ on Mon Mar 3 18:50:32 2025
    From Newsgroup: comp.lang.c++

    On Mon, 3 Mar 2025 19:48:31 +0100, Bonita Montero
    <Bonita.Montero@gmail.com> wrote:

    Am 03.03.2025 um 19:46 schrieb Mr Flibble:

    The problem is that you cannot follow a technical discussion on
    Usenet: if you read further up this thread at my discussion with
    Michael S you will see that I am not talking about precision loss
    within fmod but with the arguments passed to fmod; you will also
    see the problems with your test code.

    You tried to prove that fmod() can't be precise and fell into
    a decimal-to-floating-point conversion error issue - n00b!

    No I didn't; the problem here is that you are the n00b as you cannot
    follow a technical discussion on Usenet.

    /Flibble
    --- Synchronet 3.20c-Linux NewsLink 1.2
  • From Bonita Montero@Bonita.Montero@gmail.com to comp.lang.c++ on Mon Mar 3 19:53:07 2025
    From Newsgroup: comp.lang.c++

    Am 03.03.2025 um 19:50 schrieb Mr Flibble:
    On Mon, 3 Mar 2025 19:48:31 +0100, Bonita Montero
    <Bonita.Montero@gmail.com> wrote:

    Am 03.03.2025 um 19:46 schrieb Mr Flibble:

    The problem is that you cannot follow a technical discussion on
    Usenet: if you read further up this thread at my discussion with
    Michael S you will see that I am not talking about precision loss
    within fmod but with the arguments passed to fmod; you will also
    see the problems with your test code.

    You tried to prove that fmod() can't be precise and fell into
    a decimal-to-floating-point conversion error issue - n00b!

    No I didn't; the problem here is that you are the n00b as you cannot
    follow a technical discussion on Usenet.

    The issue you mention has nothing to do with fmod() specifically,
    so I'm asking myself why you want to relate it to my implementation.
    fmod() itself is always precise.
    --- Synchronet 3.20c-Linux NewsLink 1.2
  • From Mr Flibble@leigh@i42.co.uk to comp.lang.c++ on Mon Mar 3 18:58:42 2025
    From Newsgroup: comp.lang.c++

    On Mon, 3 Mar 2025 19:53:07 +0100, Bonita Montero
    <Bonita.Montero@gmail.com> wrote:

    Am 03.03.2025 um 19:50 schrieb Mr Flibble:
    On Mon, 3 Mar 2025 19:48:31 +0100, Bonita Montero
    <Bonita.Montero@gmail.com> wrote:

    Am 03.03.2025 um 19:46 schrieb Mr Flibble:

    The problem is that you cannot follow a technical discussion on
    Usenet: if you read further up this thread at my discussion with
    Michael S you will see that I am not talking about precision loss
    within fmod but with the arguments passed to fmod; you will also
    see the problems with your test code.

    You tried to prove that fmod() can't be precise and ran into a
    decimal-to-floating-point conversion error instead - n00b !

    No I didn't; the problem here is that you are the n00b as you cannot
    follow a technical discussion on Usenet.

    The issue you mention has nothing to do with fmod() in particular,
    so I'm asking myself why you want to relate that to my implementation.
    fmod() itself is always precise.

    Something that is both precise and wrong is still wrong; fmod() is
    totally useless mathematically for a subset of possible arguments to
    it given what it is supposed to do.

    /Flibble
    --- Synchronet 3.20c-Linux NewsLink 1.2
  • From Bonita Montero@Bonita.Montero@gmail.com to comp.lang.c++ on Mon Mar 3 20:05:02 2025
    From Newsgroup: comp.lang.c++

    Am 03.03.2025 um 19:58 schrieb Mr Flibble:

    Something that is both precise and wrong is still wrong; fmod() is
    totally useless mathematically for a subset of possible arguments to
    it given what it is supposed to do.

    If fmod() were useless, fmod() and fmod()-like functions wouldn't
    be part of any language's standard library.
    --- Synchronet 3.20c-Linux NewsLink 1.2
  • From Mr Flibble@leigh@i42.co.uk to comp.lang.c++ on Mon Mar 3 19:09:39 2025
    From Newsgroup: comp.lang.c++

    On Mon, 3 Mar 2025 20:05:02 +0100, Bonita Montero
    <Bonita.Montero@gmail.com> wrote:

    Am 03.03.2025 um 19:58 schrieb Mr Flibble:

    Something that is both precise and wrong is still wrong; fmod() is
    totally useless mathematically for a subset of possible arguments to
    it given what it is supposed to do.

    If fmod() were useless, fmod() and fmod()-like functions wouldn't
    be part of any language's standard library.

    The basic point that you are completely missing is that the result of
    fmod() only makes sense for a subset of possible IEEE 754
    representations passed to it as arguments; your test code did not take
    this into account so was total bollocks as far as assessing the
    quality of implementation of something fmod()-like; others have
    pointed this out to you -- not just me.

    /Flibble
    --- Synchronet 3.20c-Linux NewsLink 1.2
  • From Bonita Montero@Bonita.Montero@gmail.com to comp.lang.c++ on Mon Mar 3 20:14:50 2025
    From Newsgroup: comp.lang.c++

    Am 03.03.2025 um 20:09 schrieb Mr Flibble:

    The basic point that you are completely missing is that the result
    of fmod() only makes sense for a subset of possible IEEE 754
    representations passed to it as arguments; ...

    You don't know that since you don't know all applications.

    your test code did not take this into account so was total bollocks as
    far as assessing the quality of implementation of something fmod()-like;
    others have pointed this out to you -- not just me.

    Your implementation was unusable for sure since it has about 2% out of
    range results, even for small exponent differences.

    --- Synchronet 3.20c-Linux NewsLink 1.2
  • From Bonita Montero@Bonita.Montero@gmail.com to comp.lang.c++ on Mon Mar 3 20:27:56 2025
    From Newsgroup: comp.lang.c++

    Am 03.03.2025 um 20:14 schrieb Bonita Montero:

    Your implementation was unusable for sure since it has about 2% out of
    range results, even for small exponent differences.

    For this test of one million combinations with small exponent
    differences, where a > b and no number loses any binary digit
    before the decimal point, your code gets about 55% out-of-range
    results:

    #include <iostream>
    #include <random>
    #include <bit>
    #include <cmath>

    using namespace std;

    double fibble( double x, double y );

    int main()
    {
        mt19937_64 mt;
        uniform_int_distribution<uint64_t> gen( 0x3FFull << 52, 0x433ull << 52 );
        auto get = [&]() { return bit_cast<double>( gen( mt ) ); };
        size_t rangeErr = 0;
        for( size_t r = 1'000'000; r; )
        {
            double a = get(), b = get();
            if( a < b )
                continue;
            rangeErr += fibble( a, b ) >= fmod( a, b );
            --r;
        }
        cout << rangeErr / 1.0e6 * 100 << endl;
    }

    double fibble( double x, double y )
    {
        if( !y )
            return x / y;
        return x - trunc( x / y ) * y;
    }
    --- Synchronet 3.20c-Linux NewsLink 1.2
  • From Mr Flibble@leigh@i42.co.uk to comp.lang.c++ on Mon Mar 3 19:33:29 2025
    From Newsgroup: comp.lang.c++

    On Mon, 3 Mar 2025 20:14:50 +0100, Bonita Montero
    <Bonita.Montero@gmail.com> wrote:

    Am 03.03.2025 um 20:09 schrieb Mr Flibble:

    The basic point that you are completely missing is that the result
    of fmod() only makes sense for a subset of possible IEEE 754
    representations passed to it as arguments; ...

    You don't know that since you don't know all applications.

    Applications? What std::fmod is supposed to do:

    "Computes the floating-point remainder of the division operation x/y"

    It can only do that correctly (mathematically speaking) for a small
    subset of arguments due to the nature of IEEE 754 floating point, and
    any QoI test code should reflect that.

    Maybe std::fmod requirements should be changed so that the result is
    implementation-defined based on the exponents of the arguments.


    your test code did not take this into account so was total bollocks as
    far as assessing the quality of implementation of something fmod()-like;
    others have pointed this out to you -- not just me.

    Your implementation was unusable for sure since it has about 2% out of
    range results, even for small exponent differences.

    /Flibble
    --- Synchronet 3.20c-Linux NewsLink 1.2
  • From Bonita Montero@Bonita.Montero@gmail.com to comp.lang.c++ on Mon Mar 3 20:36:23 2025
    From Newsgroup: comp.lang.c++

    Am 03.03.2025 um 20:33 schrieb Mr Flibble:

    It can only do that correctly (mathematically speaking) for a small
    subset of arguments due to the nature of IEEE 754 floating point, and
    any QoI test code should reflect that.

    You're talking about the constraints of floating point numbers in
    general and not about the constraints of fmod().

    Maybe std::fmod requirements should be changed so that the result
    is implementation-defined based on the exponents of the arguments.

    Not necessary, fmod() is always precise.

    --- Synchronet 3.20c-Linux NewsLink 1.2
  • From Mr Flibble@leigh@i42.co.uk to comp.lang.c++ on Mon Mar 3 19:37:39 2025
    From Newsgroup: comp.lang.c++

    On Mon, 3 Mar 2025 20:36:23 +0100, Bonita Montero
    <Bonita.Montero@gmail.com> wrote:

    Am 03.03.2025 um 20:33 schrieb Mr Flibble:

    It can only do that correctly (mathematically speaking) for a small
    subset of arguments due to the nature of IEEE 754 floating point, and
    any QoI test code should reflect that.

    You're talking about the constraints of floating point numbers in
    general and not about the constraints of fmod().

    Maybe std::fmod requirements should be changed so that the result
    is implementation defined based on the exponents of the arguments.

    Not necessary, fmod() is always precise.

    Precisely wrong is still wrong.

    /Flibble
    --- Synchronet 3.20c-Linux NewsLink 1.2
  • From Chris M. Thomasson@chris.m.thomasson.1@gmail.com to comp.lang.c++ on Mon Mar 3 12:25:54 2025
    From Newsgroup: comp.lang.c++

    On 3/3/2025 10:58 AM, Mr Flibble wrote:
    On Mon, 3 Mar 2025 19:53:07 +0100, Bonita Montero
    <Bonita.Montero@gmail.com> wrote:

    Am 03.03.2025 um 19:50 schrieb Mr Flibble:
    On Mon, 3 Mar 2025 19:48:31 +0100, Bonita Montero
    <Bonita.Montero@gmail.com> wrote:

    Am 03.03.2025 um 19:46 schrieb Mr Flibble:

    The problem is that you cannot follow a technical discussion on
    Usenet: if you read further up this thread at my discussion with
    Michael S you will see that I am not talking about precision loss
    within fmod but with the arguments passed to fmod; you will also
    see the problems with your test code.

    You tried to prove that fmod() can't be precise and ran into a
    decimal-to-floating-point conversion error instead - n00b !

    No I didn't; the problem here is that you are the n00b as you cannot
    follow a technical discussion on Usenet.

    The issue you mention has nothing to do with fmod() in particular,
    so I'm asking myself why you want to relate that to my implementation.
    fmod() itself is always precise.

    Something that is both precise and wrong is still wrong; fmod() is
    totally useless mathematically for a subset of possible arguments to
    it given what it is supposed to do.

    Side note, a bit of comic relief: using fmod for coloring a fractal
    and/or IFS is fun. abs(fmod(x, 1)), then we color on it and look at
    the pretty colors... ;^)

    https://github.com/g-truc/glm/issues/308
    --- Synchronet 3.20c-Linux NewsLink 1.2
  • From Bonita Montero@Bonita.Montero@gmail.com to comp.lang.c++ on Tue Mar 4 10:42:55 2025
    From Newsgroup: comp.lang.c++

    Am 03.03.2025 um 20:37 schrieb Mr Flibble:

    Precisely wrong is still wrong.

    The a and b values can span more than half of the exponent range,
    up to an exponent of 0x433, which is the last exponent that loses
    no binary digits before the decimal point. That's not a "small subset".

    --- Synchronet 3.20c-Linux NewsLink 1.2
  • From Mr Flibble@leigh@i42.co.uk to comp.lang.c++ on Tue Mar 4 18:24:28 2025
    From Newsgroup: comp.lang.c++

    On Tue, 4 Mar 2025 10:42:55 +0100, Bonita Montero
    <Bonita.Montero@gmail.com> wrote:

    Am 03.03.2025 um 20:37 schrieb Mr Flibble:

    Precisely wrong is still wrong.

    The a and b values can span more than half of the exponent range,
    up to an exponent of 0x433, which is the last exponent that loses
    no binary digits before the decimal point. That's not a "small subset".

    Wrong -- consider the size of the mantissa.

    /Flibble
    --- Synchronet 3.20c-Linux NewsLink 1.2