In ISO C99:TC3 to C17, 7.12p4:
The macro INFINITY expands to a constant expression of type float
representing positive or unsigned infinity, if available; else to a
positive constant of type float that overflows at translation time.
Consider the "else" case. It is said that INFINITY expands to a
constant and that it overflows, so that it is not in the range of representable values of float.
But in 6.4.4p2:
Each constant shall have a type and the value of a constant shall
be in the range of representable values for its type.
which would imply that INFINITY expands to a value in the range of representable values of float, contradicted by 7.12p4.
Same issue in the current C2x draft N2596 (7.12p7 and 6.4.4p2).
Vincent Lefevre <vincent-news@vinc17.net> writes:
In ISO C99:TC3 to C17, 7.12p4:
The macro INFINITY expands to a constant expression of type float
representing positive or unsigned infinity, if available; else to a
positive constant of type float that overflows at translation time.
Consider the "else" case. It is said that INFINITY expands to a
constant and that it overflows, so that it is not in the range of representable values of float.
But in 6.4.4p2:
Each constant shall have a type and the value of a constant shall
be in the range of representable values for its type.
which would imply that INFINITY expands to a value in the range of representable values of float, contradicted by 7.12p4.
Same issue in the current C2x draft N2596 (7.12p7 and 6.4.4p2).
6.4.4p2 is a constraint. It doesn't make it impossible to write code
that violates that constraint.
If I understand correctly, it means that if an infinite value is not available, then a program that refers to the INFINITY macro (in a
context where it's treated as a floating-point expression) violates that constraint, resulting in a required diagnostic.
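A minimal sketch of that "else" case, assuming a hypothetical implementation without infinities that defines the macro roughly the way the C99 Rationale (quoted later in this thread) suggests:

    /* Hypothetical <math.h> contents on an implementation with no infinities: */
    #define INFINITY 9e99999f   /* too large for float: overflows at translation time */

    float f = INFINITY;         /* using the macro here violates the 6.4.4p2
                                   constraint, so a diagnostic is required */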
In article <87pmsqizrh.fsf@nosuchdomain.example.com>,
Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
Vincent Lefevre <vincent-news@vinc17.net> writes:
In ISO C99:TC3 to C17, 7.12p4:
The macro INFINITY expands to a constant expression of type float
representing positive or unsigned infinity, if available; else to a
positive constant of type float that overflows at translation time.
Consider the "else" case. It is said that INFINITY expands to a
constant and that it overflows, so that it is not in the range of
representable values of float.
But in 6.4.4p2:
Each constant shall have a type and the value of a constant shall
be in the range of representable values for its type.
which would imply that INFINITY expands to a value in the range of
representable values of float, contradicted by 7.12p4.
Same issue in the current C2x draft N2596 (7.12p7 and 6.4.4p2).
6.4.4p2 is a constraint. It doesn't make it impossible to write code
that violates that constraint.
Yes, but the issue here is that the standard mandates the implementation
to violate a constraint, which is rather different from the case where a
user writes buggy code.
If I understand correctly, it means that if an infinite value is not
available, then a program that refers to the INFINITY macro (in a
context where it's treated as a floating-point expression) violates that
constraint, resulting in a required diagnostic.
I think the consequence is more than a diagnostic (which may yield a compilation failure in practice, BTW): AFAIK, the standard does not
give a particular definition for "overflows at translation time",
which would make it undefined behavior as usual for overflows.
No, it doesn't force the implementation to violate a constraint. It
says that a *program* that uses the INFINITY macro violates a constraint
(if the implementation doesn't support infinities).
Constraints apply to programs, not to implementations.
It means that if a program assumes that INFINITY is meaningful, and it's compiled for a target system where it isn't, a diagnostic is guaranteed.
And again, it might have made more sense to say that INFINITY is not
defined for such implementations (as is done for the NAN macro), but
perhaps there was existing practice.
Here's what the C99 Rationale says:
What is INFINITY on machines that do not support infinity? It should
be defined along the lines of: #define INFINITY 9e99999f, where
there are enough 9s in the exponent so that the value is too large
to represent as a float, hence, violates the constraint of 6.4.4
Constants. In addition, the number classification macro FP_INFINITE
should not be defined. That allows an application to test for the
existence of FP_INFINITE as a safe way to determine if infinity is
supported; this is the feature test macro for support for infinity.
The problem with this is that the standard itself doesn't say that FP_INFINITE is defined conditionally.
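For what it's worth, a minimal sketch of the feature test the Rationale has in mind (with the caveat just mentioned that the standard itself doesn't promise FP_INFINITE is conditionally defined; the name "huge" is only illustrative):

    #include <float.h>
    #include <math.h>

    #ifdef FP_INFINITE
    static const float huge = INFINITY;  /* infinities assumed available */
    #else
    static const float huge = FLT_MAX;   /* fall back to the largest finite float */
    #endif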
In article <87lf3ehy4v.fsf@nosuchdomain.example.com>,
Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
No, it doesn't force the implementation to violate a constraint. It
says that a *program* that uses the INFINITY macro violates a constraint
(if the implementation doesn't support infinities).
Then this means that the C standard defines something the user must
not use (except when __STDC_IEC_559__ is defined, as in this case,
INFINITY is guaranteed to expand to the true infinity).
Constraints apply to programs, not to implementations.
This is related, as programs will be transformed by an implementation.
It means that if a program assumes that INFINITY is meaningful, and it's
compiled for a target system where it isn't, a diagnostic is guaranteed.
And again, it might have made more sense to say that INFINITY is not
defined for such implementations (as is done for the NAN macro), but
perhaps there was existing practice.
Yes, currently there is no way of fallback (without things like
autoconf tests).
Shouldn't the standard be changed to make INFINITY conditionally
defined (if not required to expand to a true infinity)?
This should not break existing programs.
Here's what the C99 Rationale says:
What is INFINITY on machines that do not support infinity? It should
be defined along the lines of: #define INFINITY 9e99999f, where
there are enough 9s in the exponent so that the value is too large
to represent as a float, hence, violates the constraint of 6.4.4
Constants. In addition, the number classification macro FP_INFINITE
should not be defined. That allows an application to test for the
existence of FP_INFINITE as a safe way to determine if infinity is
supported; this is the feature test macro for support for infinity.
The problem with this is that the standard itself doesn't say that
FP_INFINITE is defined conditionally.
Even if FP_INFINITE could be defined conditionally, this would not
imply that INFINITY is usable, since for instance, long double may
have an infinity but not float.
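A short sketch of the one guarantee mentioned above: when __STDC_IEC_559__ is defined (Annex F), INFINITY must be a true positive infinity, so it can be used unconditionally under that guard.

    #include <math.h>

    #ifdef __STDC_IEC_559__
    static const double pos_inf = INFINITY;   /* guaranteed to be +infinity */
    #else
    /* No such guarantee: INFINITY might be a float constant that overflows
       at translation time, so using it here could draw the 6.4.4p2 diagnostic. */
    #endif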
On 2021-10-01 11:05, Vincent Lefevre wrote:
In article <87lf3ehy4v.fsf@nosuchdomain.example.com>,
Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
No, it doesn't force the implementation to violate a constraint. It
says that a *program* that uses the INFINITY macro violates a constraint
(if the implementation doesn't support infinities).
Then this means that the C standard defines something the user must
not use (except when __STDC_IEC_559__ is defined, as in this case,
INFINITY is guaranteed to expand to the true infinity).
Constraints apply to programs, not to implementations.
This is related, as programs will be transformed by an implementation.
It means that if a program assumes that INFINITY is meaningful, and it's
compiled for a target system where it isn't, a diagnostic is guaranteed.
And again, it might have made more sense to say that INFINITY is not
defined for such implementations (as is done for the NAN macro), but
perhaps there was existing practice.
Yes, currently there is no way of fallback (without things like
autoconf tests).
Shouldn't the standard be changed to make INFINITY conditionally
defined (if not required to expand to a true infinity)?
This should not break existing programs.
The fallback is to test for defined(FP_INFINITE), see below.
Here's what the C99 Rationale says:
What is INFINITY on machines that do not support infinity? It should
be defined along the lines of: #define INFINITY 9e99999f, where
there are enough 9s in the exponent so that the value is too large
to represent as a float, hence, violates the constraint of 6.4.4
Constants. In addition, the number classification macro FP_INFINITE
should not be defined. That allows an application to test for the
existence of FP_INFINITE as a safe way to determine if infinity is
supported; this is the feature test macro for support for infinity.
The problem with this is that the standard itself doesn't say that
FP_INFINITE is defined conditionally.
Even if FP_INFINITE could be defined conditionally, this would not
imply that INFINITY is usable, since for instance, long double may
have an infinity but not float.
I don't know if there is a set of similar macros for double and long
double types buried somewhere in the standard.
Vincent Lefevre <vincent-news@vinc17.net> writes:
In article <87lf3ehy4v.fsf@nosuchdomain.example.com>,
Even if FP_INFINITE could be defined conditionally, this would not
imply that INFINITY is usable, since for instance, long double may
have an infinity but not float.
The standard only defines INFINITY and NAN for type float. I think the implication is that it assumes either all floating types have NaNs
and/or infinities, or none do. That might be a valid assumption.
In article <877dewimc9.fsf@nosuchdomain.example.com>,
Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
Vincent Lefevre <vincent-news@vinc17.net> writes:
In article <87lf3ehy4v.fsf@nosuchdomain.example.com>,
Even if FP_INFINITE could be defined conditionally, this would not
imply that INFINITY is usable, since for instance, long double may
have an infinity but not float.
The standard only defines INFINITY and NAN for type float. I think the
implication is that it assumes either all floating types have NaNs
and/or infinities, or none do. That might be a valid assumption.
But the standard doesn't say that explicitly. It even just says
"if and only if the implementation supports quiet NaNs for the
float type". If the intent were to have NaN support for all the
FP types or none, why doesn't it say "... for the floating types"
instead of "... for the float type"?
Presumably if an implementation had
NaN for float but not for double, it would define NAN.
IMHO it would have been better if the assumption that all floating
types behave similarly had been stated explicitly, and perhaps if there
were three NAN macros for the three floating-point types.
NaN support for each floating type can be queried, and a NaN
obtained if supported, by calling nanf(), nan(), and nanl().
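A small sketch of that run-time query (the nan functions return zero when quiet NaNs are not supported, and a NaN never compares equal to itself):

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        printf("float  quiet NaN: %d\n", nanf("") != nanf(""));
        printf("double quiet NaN: %d\n", nan("")  != nan(""));
        printf("long double quiet NaN: %d\n", nanl("") != nanl(""));
        return 0;
    }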
Vincent Lefevre <vincent-news@vinc17.net> writes:
In article <877dewimc9.fsf@nosuchdomain.example.com>,
Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
Vincent Lefevre <vincent-news@vinc17.net> writes:
In article <87lf3ehy4v.fsf@nosuchdomain.example.com>,
Even if FP_INFINITE could be defined conditionally, this would
not imply that INFINITY is usable, since for instance, long
double may have an infinity but not float.
The standard only defines INFINITY and NAN for type float. I
think the implication is that it assumes either all floating types
have NaNs and/or infinities, or none do. That might be a valid
assumption.
But the standard doesn't say that explicitly. It even just says
"if and only if the implementation supports quiet NaNs for the
float type". If the intent were to have NaN support for all the
FP types or none, why doesn't it say "... for the floating types"
instead of "... for the float type"?
Since the NAN macro is of type float (if it's defined), it only makes
sense to define it that way. Presumably if an implementation had
NaN for float but not for double, it would define NAN.
Keith Thompson <Keith.S.Thompson+u@gmail.com> writes:
Vincent Lefevre <vincent-news@vinc17.net> writes:
In article <877dewimc9.fsf@nosuchdomain.example.com>,
Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
Vincent Lefevre <vincent-news@vinc17.net> writes:
In article <87lf3ehy4v.fsf@nosuchdomain.example.com>,
Even if FP_INFINITE could be defined conditionally, this would
not imply that INFINITY is usable, since for instance, long
double may have an infinity but not float.
The standard only defines INFINITY and NAN for type float. I
think the implication is that it assumes either all floating types
have NaNs and/or infinities, or none do. That might be a valid
assumption.
But the standard doesn't say that explicitly. It even just says
"if and only if the implementation supports quiet NaNs for the
float type". If the intent were to have NaN support for all the
FP types or none, why doesn't it say "... for the floating types"
instead of "... for the float type"?
Since the NAN macro is of type float (if it's defined), it only makes
sense to define it that way. Presumably if an implementation had
NaN for float but not for double, it would define NAN.
If float has a NaN then so do double and long double, because of
6.2.5 paragraph 10. Similarly for infinity (or infinities).
In article <87pmsqizrh.fsf@nosuchdomain.example.com>,
Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
If I understand correctly, it means that if an infinite value is
not available, then a program that refers to the INFINITY macro (in
a context where it's treated as a floating-point expression)
violates that constraint, resulting in a required diagnostic.
I think the consequence is more than a diagnostic (which may yield
a compilation failure in practice, BTW): AFAIK, the standard does
not give a particular definition for "overflows at translation
time", which would make it undefined behavior as usual for
overflows.
In article <87lf3ehy4v.fsf@nosuchdomain.example.com>,
Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
It means that if a program assumes that INFINITY is meaningful, and
it's compiled for a target system where it isn't, a diagnostic is
guaranteed. And again, it might have made more sense to say that
INFINITY is not defined for such implementations (as is done for
the NAN macro), but perhaps there was existing practice.
Yes, currently there is no way of fallback (without things like
autoconf tests).
Shouldn't the standard be changed to make INFINITY conditionally
defined (if not required to expand to a true infinity)? [...]
Vincent Lefevre <vincent-news@vinc17.net> writes:[...]
Shouldn't the standard be changed to make INFINITY conditionally
defined (if not required to expand to a true infinity)? [...]
To me it seems better for INFINITY to be defined as it is rather
than being conditionally defined. If what is needed is really an
infinite value, just write INFINITY and the code either works or
compiling it gives a diagnostic. If what is needed is just a very
large value, write HUGE_VAL (or HUGE_VALF or HUGE_VALL, depending)
and the code works whether infinite floating-point values are
supported or not. If it's important that infinite values be
supported but we don't want to risk a compilation failure, use
HUGE_VAL combined with an assertion
assert( HUGE_VAL == HUGE_VAL/2 );
Alternatively, use INFINITY only in one small .c file, and give
other sources a make dependency for a successful compilation
(with of course a -pedantic-errors option) of that .c file. I
don't see that having INFINITY be conditionally defined buys
anything, except to more or less force use of #if/#else/#endif
blocks in the preprocessor. I don't mind using the preprocessor
when there is a good reason to do so, but here I don't see one.
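A minimal sketch of the HUGE_VAL-plus-assertion idea above (for a positive value, x == x/2 can hold only if x is infinite):

    #include <assert.h>
    #include <math.h>

    int main(void)
    {
        double big = HUGE_VAL;     /* +infinity if supported, else a large finite value */
        assert( big == big/2 );    /* aborts unless HUGE_VAL is actually infinite */
        return 0;
    }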
Tim Rentsch <tr.17687@z991.linuxsc.com> writes:
Keith Thompson <Keith.S.Thompson+u@gmail.com> writes:
Vincent Lefevre <vincent-news@vinc17.net> writes:
In article <877dewimc9.fsf@nosuchdomain.example.com>,
Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
Vincent Lefevre <vincent-news@vinc17.net> writes:
In article <87lf3ehy4v.fsf@nosuchdomain.example.com>,
Even if FP_INFINITE could be defined conditionally, this would
not imply that INFINITY is usable, since for instance, long
double may have an infinity but not float.
The standard only defines INFINITY and NAN for type float. I
think the implication is that it assumes either all floating types
have NaNs and/or infinities, or none do. That might be a valid
assumption.
But the standard doesn't say that explicitly. It even just says
"if and only if the implementation supports quiet NaNs for the
float type". If the intent were to have NaN support for all the
FP types or none, why doesn't it say "... for the floating types"
instead of "... for the float type"?
Since the NAN macro is of type float (if it's defined), it only makes
sense to define it that way. Presumably if an implementation had
NaN for float but not for double, it would define NAN.
If float has a NaN then so do double and long double, because of
6.2.5 paragraph 10. Similarly for infinity (or infinities).
Agreed. 6.2.5p10 says:
There are three real floating types, designated as float, double,
and long double. The set of values of the type float is a subset of
the set of values of the type double; the set of values of the type
double is a subset of the set of values of the type long double.
(No need to make everyone look it up.)
Keith Thompson <Keith.S.Thompson+u@gmail.com> writes:
Tim Rentsch <tr.17687@z991.linuxsc.com> writes:
Keith Thompson <Keith.S.Thompson+u@gmail.com> writes:
Vincent Lefevre <vincent-news@vinc17.net> writes:
In article <877dewimc9.fsf@nosuchdomain.example.com>,
Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
Vincent Lefevre <vincent-news@vinc17.net> writes:
In article <87lf3ehy4v.fsf@nosuchdomain.example.com>,
Even if FP_INFINITE could be defined conditionally, this would
not imply that INFINITY is usable, since for instance, long
double may have an infinity but not float.
The standard only defines INFINITY and NAN for type float. I
think the implication is that it assumes either all floating types
have NaNs and/or infinities, or none do. That might be a valid
assumption.
But the standard doesn't say that explicitly. It even just says
"if and only if the implementation supports quiet NaNs for the
float type". If the intent were to have NaN support for all the
FP types or none, why doesn't it say "... for the floating types"
instead of "... for the float type"?
Since the NAN macro is of type float (if it's defined), it only makes
sense to define it that way. Presumably if an implementation had
NaN for float but not for double, it would define NAN.
If float has a NaN then so do double and long double, because of
6.2.5 paragraph 10. Similarly for infinity (or infinities).
Agreed. 6.2.5p10 says:
There are three real floating types, designated as float, double,
and long double. The set of values of the type float is a subset of
the set of values of the type double; the set of values of the type
double is a subset of the set of values of the type long double.
(No need to make everyone look it up.)
I just noticed that leaves open the possibility, for example, that
double supports infinity but float doesn't.
To me it seems better for INFINITY to be defined as it is rather
than being conditionally defined. If what is needed is really an
infinite value, just write INFINITY and the code either works or
compiling it gives a diagnostic.
If what is needed is just a very large value, write HUGE_VAL (or
HUGE_VALF or HUGE_VALL, depending) and the code works whether
infinite floating-point values are supported or not.
What occurs is defined behavior and (for implementations that do
not have the needed value for infinity) violates a constraint.
A diagnostic must be produced.
In article <874k9r7419.fsf@nosuchdomain.example.com>,
Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
Keith Thompson <Keith.S.Thompson+u@gmail.com> writes:
Tim Rentsch <tr.17687@z991.linuxsc.com> writes:
Keith Thompson <Keith.S.Thompson+u@gmail.com> writes:
Vincent Lefevre <vincent-news@vinc17.net> writes:
In article <877dewimc9.fsf@nosuchdomain.example.com>,
Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
Vincent Lefevre <vincent-news@vinc17.net> writes:
In article <87lf3ehy4v.fsf@nosuchdomain.example.com>,
Even if FP_INFINITE could be defined conditionally, this would
not imply that INFINITY is usable, since for instance, long
double may have an infinity but not float.
The standard only defines INFINITY and NAN for type float. I
think the implication is that it assumes either all floating types
have NaNs and/or infinities, or none do. That might be a valid
assumption.
But the standard doesn't say that explicitly. It even just says
"if and only if the implementation supports quiet NaNs for the
float type". If the intent were to have NaN support for all the
FP types or none, why doesn't it say "... for the floating types"
instead of "... for the float type"?
Since the NAN macro is of type float (if it's defined), it only makes
sense to define it that way. Presumably if an implementation had
NaN for float but not for double, it would define NAN.
If float has a NaN then so do double and long double, because of
6.2.5 paragraph 10. Similarly for infinity (or infinities).
Agreed. 6.2.5p10 says:
There are three real floating types, designated as float, double,
and long double. The set of values of the type float is a subset of
the set of values of the type double; the set of values of the type
double is a subset of the set of values of the type long double.
(No need to make everyone look it up.)
I just noticed that leaves open the possibility, for example, that
double supports infinity but float doesn't.
This is what I had said above:
"[...] for instance, long double may have an infinity but not float."
In article <86wnmoov7c.fsf@linuxsc.com>,
Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:
What occurs is defined behavior and (for implementations that do
not have the needed value for infinity) violates a constraint.
A diagnostic must be produced.
If this is defined behavior, where is the result of an overflow
defined by the standard? (I can see only 7.12.1p5, but this is
for math functions; here, this is a constant that overflows.)
there are still potential issues. For instance, the standard does not guarantee that HUGE_VAL has the largest possible double value, and
On 10/9/21 4:17 PM, Vincent Lefevre wrote:
In article <86wnmoov7c.fsf@linuxsc.com>,
Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:
What occurs is defined behavior and (for implementations that do
not have the needed value for infinity) violates a constraint.
A diagnostic must be produced.
If this is defined behavior, where is the result of an overflow
defined by the standard? (I can see only 7.12.1p5, but this is
for math functions; here, this is a constant that overflows.)
"For decimal floating constants, and also for hexadecimal floating
constants when FLT_RADIX is not a power of 2, the result is either
the nearest representable value, or the larger or smaller representable
value immediately adjacent to the nearest representable value, chosen in
an implementation-defined manner.
For hexadecimal floating constants when FLT_RADIX is a power of 2, the
result is correctly rounded." (6.4.4.2p3)
In the case of overflow, for a type that cannot represent infinity,
there is only one "nearest representable value", which is DBL_MAX.
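As a concrete illustration (the constant here is only an example): the exact value of 1e999 is far above DBL_MAX, so under the wording quoted above it becomes DBL_MAX, the representable value just below DBL_MAX, or, where infinities are representable, +infinity.

    double d = 1e999;   /* exact value far above DBL_MAX; 6.4.4.2p3 allows DBL_MAX,
                           the value just below it, or +infinity if representable */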
James Kuyper <jameskuyper@alumni.caltech.edu> writes:...
"For decimal floating constants, and also for hexadecimal floating
constants when FLT_RADIX is not a power of 2, the result is either
the nearest representable value, or the larger or smaller representable
value immediately adjacent to the nearest representable value, chosen in
an implementation-defined manner.
For hexadecimal floating constants when FLT_RADIX is a power of 2, the
result is correctly rounded." (6.4.4.2p3)
In the case of overflow, for a type that cannot represent infinity,
there is only one "nearest representable value", which is DBL_MAX.
But does that apply when a constraint is violated?
6.4.4p2, a constraint, says:
Each constant shall have a type and the value of a constant shall be
in the range of representable values for its type.
A "constraint", aside from triggering a required diagnostic, is a "restriction, either syntactic or semantic, by which the exposition of language elements is to be interpreted", which is IMHO a bit vague.
On 10/11/21 3:39 PM, Keith Thompson wrote:
James Kuyper <jameskuyper@alumni.caltech.edu> writes:...
"For decimal floating constants, and also for hexadecimal floating
constants when FLT_RADIX is not a power of 2, the result is either
the nearest representable value, or the larger or smaller representable
value immediately adjacent to the nearest representable value, chosen in
an implementation-defined manner.
For hexadecimal floating constants when FLT_RADIX is a power of 2, the
result is correctly rounded." (6.4.4.2p3)
In the case of overflow, for a type that cannot represent infinity,
there is only one "nearest representable value", which is DBL_MAX.
But does that apply when a constraint is violated?
6.4.4p2, a constraint, says:
Each constant shall have a type and the value of a constant shall be
in the range of representable values for its type.
A "constraint", aside from triggering a required diagnostic, is a
"restriction, either syntactic or semantic, by which the exposition of
language elements is to be interpreted", which is IMHO a bit vague.
I can agree with the "a bit vague" description. I have previously said
"I've never understood what it is that the part of that definition after
the second comma was intended to convey."
"If a ‘‘shall’’ or ‘‘shall not’’ requirement that appears outside of a
constraint or runtime-constraint is violated, the behavior is undefined. Undefined behavior is otherwise indicated in this International Standard
by the words ‘‘undefined behavior’’ or by the omission of any explicit
definition of behavior." (4p2)
There's no mention in there of a constraint violation automatically
having undefined behavior. Most constraint violations do qualify as
undefined behavior due to the "omission of any explicit definition of behavior" when the constraint is violated. But this isn't an example: 6.4.4.2p3 provides a perfectly applicable definition for the behavior.
In article <sk1pd2$5e3$3@dont-email.me>,...
James Kuyper <jameskuyper@alumni.caltech.edu> wrote:
On 10/9/21 4:17 PM, Vincent Lefevre wrote:
If this is defined behavior, where is the result of an overflow
defined by the standard? (I can see only 7.12.1p5, but this is
for math functions; here, this is a constant that overflows.)
"For decimal floating constants, and also for hexadecimal floating
constants when FLT_RADIX is not a power of 2, the result is either
the nearest representable value, or the larger or smaller representable
value immediately adjacent to the nearest representable value, chosen in
an implementation-defined manner.
For hexadecimal floating constants when FLT_RADIX is a power of 2, the
result is correctly rounded." (6.4.4.2p3)
In the case of overflow, for a type that cannot represent infinity,
there is only one "nearest representable value", which is DBL_MAX.
OK, but I was asking "where is the result of an overflow defined by
the standard?" I don't see the word "overflow" in the above spec.
Note also that in case of overflow, "the nearest representable value"
is not defined.
On 10/26/21 6:01 AM, Vincent Lefevre wrote:
OK, but I was asking "where is the result of an overflow defined by
the standard?" I don't see the word "overflow" in the above spec.
Overflow occurs when a floating constant is created whose value is
greater than DBL_MAX or less than -DBL_MAX. Despite the fact that the
above description does not explicitly mention the word "overflow", it's perfectly clear what that description means when overflow occurs.
Note also that in case of overflow, "the nearest representable value"
is not defined.
No definition by the standard is needed; the conventional mathematical definitions of "nearest" are sufficient. If infinity is representable, DBL_MAX is always nearer to any finite value than infinity is.
Regardless of whether infinity is representable, any finite value
greater than DBL_MAX is closer to DBL_MAX than it is to any other representable value.
In article <sl9bqb$hf5$2@dont-email.me>,
James Kuyper <jameskuyper@alumni.caltech.edu> wrote:
On 10/26/21 6:01 AM, Vincent Lefevre wrote:
OK, but I was asking "where is the result of an overflow defined by
the standard?" I don't see the word "overflow" in the above spec.
Overflow occurs when a floating constant is created whose value is
greater than DBL_MAX or less than -DBL_MAX. Despite the fact that the
above description does not explicitly mention the word "overflow", it's
perfectly clear what that description means when overflow occurs.
Why "perfectly clear"??? This is even inconsistent with 7.12.1p5
of N2596, which says:
A floating result overflows if the magnitude (absolute value)
of the mathematical result is finite but so large that the
mathematical result cannot be represented without extraordinary
roundoff error in an object of the specified type.
If you have a mathematical value (exact value) much larger than
DBL_MAX and that rounds to DBL_MAX (e.g. with round-toward-zero),
there should be an overflow, despite the fact that the FP result
is not greater than DBL_MAX (since it is equal to DBL_MAX).
Moreover, with the above definition, it is DBL_NORM_MAX that is
more likely taken into account, not DBL_MAX.
Note also that in case of overflow, "the nearest representable value"
is not defined.
No definition by the standard is needed; the conventional mathematical
definitions of "nearest" are sufficient. If infinity is representable,
DBL_MAX is always nearer to any finite value than infinity is.
Regardless of whether infinity is representable, any finite value
greater than DBL_MAX is closer to DBL_MAX than it is to any other
representable value.
The issue is that this may easily be confused with the result
obtained in the FE_TONEAREST rounding mode with the IEEE 754 rules
(where, for instance, 2*DBL_MAX rounds to +Inf, not to DBL_MAX,
despite the fact that 2*DBL_MAX is closer to DBL_MAX than to +Inf).
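A small illustration of that point, assuming IEEE 754 double and the default round-to-nearest mode:

    #include <float.h>
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double x = 2 * DBL_MAX;                  /* overflows; FE_TONEAREST gives +Inf */
        printf("%g (isinf=%d)\n", x, isinf(x));  /* typically prints: inf (isinf=1) */
        return 0;
    }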
On 10/28/21 5:38 AM, Vincent Lefevre wrote:
In article <sl9bqb$hf5$2@dont-email.me>,
James Kuyper <jameskuyper@alumni.caltech.edu> wrote:
On 10/26/21 6:01 AM, Vincent Lefevre wrote:
OK, but I was asking "where is the result of an overflow defined by
the standard?" I don't see the word "overflow" in the above spec.
Overflow occurs when a floating constant is created whose value is
greater than DBL_MAX or less than -DBL_MAX. Despite the fact that the
above description does not explicitly mention the word "overflow", it's
perfectly clear what that description means when overflow occurs.
Why "perfectly clear"??? This is even inconsistent with 7.12.1p5
of N2596, which says:
7.12.1p5 describes the math library, not the handling of floating point constants. While the C standard does recommend that "The
translation-time conversion of floating constants should match the execution-time conversion of character strings by library functions,
such as strtod , given matching inputs suitable for both conversions,
the same result format, and default execution-time rounding."
(6.4.4.2p11), it does not actually require such a match. Therefore, if
there is any inconsistency it would not be problematic.
A floating result overflows if the magnitude (absolute value)
of the mathematical result is finite but so large that the
mathematical result cannot be represented without extraordinary
roundoff error in an object of the specified type.
7.12.1p5 goes on to say that "If a floating result overflows and default rounding is in effect, then the function returns the value of the macro HUGE_VAL ...".
As cited above, the standard recommends, but does not require, the use
of default execution-time rounding mode for floating point constants. HUGE_VAL is only required to be positive (7.12p6) - it could be as small
as DBL_MIN.
Moreover, with the above definition, it is DBL_NORM_MAX that is
more likely taken into account, not DBL_MAX.
According to 5.2.4.2.2p19, DBL_MAX is the maximum representable finite floating point value, while DBL_NORM_MAX is the maximum normalized
number. 6.4.4.2p4 refers only to representable values, saying nothing
about normalization. Neither 7.12.5p1 nor 7.12p6 say anything to require
that the value be normalized. Therefore, as far as I can see, DBL_MAX is
the relevant value.
Note also that in case of overflow, "the nearest representable value"
is not defined.
No definition by the standard is needed; the conventional mathematical
definitions of "nearest" are sufficient. If infinity is representable,
DBL_MAX is always nearer to any finite value than infinity is.
Regardless of whether infinity is representable, any finite value
greater than DBL_MAX is closer to DBL_MAX than it is to any other
representable value.
The issue is that this may easily be confused with the result
obtained in the FE_TONEAREST rounding mode with the IEEE 754 rules
(where, for instance, 2*DBL_MAX rounds to +Inf, not to DBL_MAX,
despite the fact that 2*DBL_MAX is closer to DBL_MAX than to +Inf).
Yes, and DBL_MAX and +Inf are two of the three values permitted by
6.4.4.2p4, so I don't see any conflict there.
In article <slef9t$98j$2@dont-email.me>,...
James Kuyper <jameskuyper@alumni.caltech.edu> wrote:
On 10/28/21 5:38 AM, Vincent Lefevre wrote:
In article <sl9bqb$hf5$2@dont-email.me>,
James Kuyper <jameskuyper@alumni.caltech.edu> wrote:
On 10/26/21 6:01 AM, Vincent Lefevre wrote:
7.12.1p5 describes the math library, not the handling of floating point
constants. While the C standard does recommend that "The
translation-time conversion of floating constants should match the
execution-time conversion of character strings by library functions,
such as strtod , given matching inputs suitable for both conversions,
the same result format, and default execution-time rounding."
(6.4.4.2p11), it does not actually require such a match. Therefore, if
there is any inconsistency it would not be problematic.
Yes, but this means that any implicit use of overflow is not
perfectly clear.
7.12.1p5 goes on to say that "If a floating result overflows and default
rounding is in effect, then the function returns the value of the macro
HUGE_VAL ...".
As cited above, the standard recommends, but does not require, the use
of default execution-time rounding mode for floating point constants.
HUGE_VAL is only required to be positive (7.12p6) - it could be as small
as DBL_MIN.
Note that C2x (in particular, the current draft N2731) requires that nextup(HUGE_VAL) be HUGE_VAL, probably assuming that HUGE_VAL is the
maximum value. I've just sent a mail to the CFP list about that.
about normalization. Neither 7.12.5p1 nor 7.12p6 say anything to require
that the value be normalized. Therefore, as far as I can see, DBL_MAX is
the relevant value.
But DBL_NORM_MAX is the relevant value for the general definition
of "overflow" (on double). So in 7.12p4, "overflows" is not used
correctly, at least not with the usual meaning.
More than that, with the IEEE 754 overflow definition, you have
numbers larger than DBL_MAX (up to those within 1 ulp) that do not
overflow.
No definition by the standard is needed; the conventional mathematical
definitions of "nearest" are sufficient. If infinity is representable,
DBL_MAX is always nearer to any finite value than infinity is.
Regardless of whether infinity is representable, any finite value
greater than DBL_MAX is closer to DBL_MAX than it is to any other
representable value.
The issue is that this may easily be confused with the result
obtained in the FE_TONEAREST rounding mode with the IEEE 754 rules
(where, for instance, 2*DBL_MAX rounds to +Inf, not to DBL_MAX,
despite the fact that 2*DBL_MAX is closer to DBL_MAX than to +Inf).
Yes, and DBL_MAX and +Inf are two of the three values permitted by
6.4.4.2p4, so I don't see any conflict there.
My point is that this definition of "nearest" does not match the
definition of IEEE 754's FE_TONEAREST.
... I'm not saying that there
is a conflict, just that the text is ambiguous. If one follows
the IEEE 754 definition, there are only two possible values
(DBL_MAX and +Inf, thus excluding nextdown(DBL_MAX)).
On 10/29/21 8:12 AM, Vincent Lefevre wrote:
In article <slef9t$98j$2@dont-email.me>,
James Kuyper <jameskuyper@alumni.caltech.edu> wrote:
...On 10/28/21 5:38 AM, Vincent Lefevre wrote:
In article <sl9bqb$hf5$2@dont-email.me>,
James Kuyper <jameskuyper@alumni.caltech.edu> wrote:
On 10/26/21 6:01 AM, Vincent Lefevre wrote:
7.12.1p5 describes the math library, not the handling of floating point
constants. While the C standard does recommend that "The
translation-time conversion of floating constants should match the
execution-time conversion of character strings by library functions,
such as strtod , given matching inputs suitable for both conversions,
the same result format, and default execution-time rounding."
(6.4.4.2p11), it does not actually require such a match. Therefore, if
there is any inconsistency it would not be problematic.
Yes, but this means that any implicit use of overflow is not
perfectly clear.
What is unclear about it? It very explicitly allows three different
values, deliberately failing to specify only one of them as valid, and
it is perfectly clear what those three values are.
But DBL_NORM_MAX is the relevant value for the general definition
of "overflow" (on double). So in 7.12p4, "overflows" is not used
correctly, at least not with the usual meaning.
What do you consider the "general definition of overflow"?
I would have thought you were referring to 7.12.1p5, but I see no
wording there that distinguishes between normalized and unnormalized
values.
More than that, with the IEEE 754 overflow definition, you have
numbers larger than DBL_MAX (up to those within 1 ulp) that do not overflow.
I don't see how that's a problem.
...
No definition by the standard is needed; the conventional mathematical
definitions of "nearest" are sufficient. If infinity is representable,
DBL_MAX is always nearer to any finite value than infinity is.
Regardless of whether infinity is representable, any finite value
greater than DBL_MAX is closer to DBL_MAX than it is to any other
representable value.
The issue is that this may easily be confused with the result
obtained in the FE_TONEAREST rounding mode with the IEEE 754 rules
(where, for instance, 2*DBL_MAX rounds to +Inf, not to DBL_MAX,
despite the fact that 2*DBL_MAX is closer to DBL_MAX than to +Inf).
Yes, and DBL_MAX and +Inf are two of the three values permitted by
6.4.4.2p4, so I don't see any conflict there.
My point is that this definition of "nearest" does not match the
definition of IEEE 754's FE_TONEAREST.
FE_TONEAREST is not "IEEE 754's". It is a macro defined by the C
standard, and in the latest draft it's been changed so it now represents
IEC 60559's "roundTiesToEven" rounding attribute.
... I'm not saying that there
is a conflict, just that the text is ambiguous. If one follows
the IEEE 754 definition, there are only two possible values
(DBL_MAX and +Inf, thus excluding nextdown(DBL_MAX)).
Yes, that was deliberate - it was intended to be compatible with IEC
60559, but also to be sufficiently loose to allow use of non-IEC 60559 floating point.
In article <slingl$56v$1@dont-email.me>,...
James Kuyper <jameskuyper@alumni.caltech.edu> wrote:
On 10/29/21 8:12 AM, Vincent Lefevre wrote:
Yes, but this means that any implicit use of overflow is not
perfectly clear.
What is unclear about it? It very explicitly allows three different
values, deliberately failing to specify only one of them as valid, and
it is perfectly clear what those three values are.
These rules are not about overflow. They are general rules.
What is not defined is when a value overflows (there are different definitions). And what is the consequence of the overflow (at runtime,
there may be traps).
But DBL_NORM_MAX is the relevant value for the general definition
of "overflow" (on double). So in 7.12p4, "overflows" is not used
correctly, at least not with the usual meaning.
What do you consider the "general definition of overflow"?
The one given by the standard in 7.12.1p5.
I would have thought you were referring to 7.12.1p5, but I see no
wording there that distinguishes between normalized and unnormalized
values.
"A floating result overflows if the magnitude of the mathematical
result is finite but so large that the mathematical result cannot
be represented without extraordinary roundoff error in an object
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
of the specified type."
If the exact result is above the maximum normal value, there is
likely to be an extraordinary roundoff error.
... I'm not saying that there
is a conflict, just that the text is ambiguous. If one follows
the IEEE 754 definition, there are only two possible values
(DBL_MAX and +Inf, thus excluding nextdown(DBL_MAX)).
Yes, that was deliberate - it was intended to be compatible with IEC
60559, but also to be sufficiently loose to allow use of non-IEC 60559
floating point.
But what is allowed is not clear for an IEEE 754 format (this does
not affect the INFINITY macro, but users could write exact values
larger than DBL_MAX + 1 ulp, for which nextdown(DBL_MAX) could be
unexpected as the obtained value).
On 11/7/21 9:44 PM, Vincent Lefevre wrote:
In article <slingl$56v$1@dont-email.me>,
James Kuyper <jameskuyper@alumni.caltech.edu> wrote:
...On 10/29/21 8:12 AM, Vincent Lefevre wrote:
Yes, but this means that any implicit use of overflow is not
perfectly clear.
What is unclear about it? It very explicitly allows three different
values, deliberately failing to specify only one of them as valid, and
it is perfectly clear what those three values are.
These rules are not about overflow. They are general rules.
Yes, and they are sufficiently general that it is perfectly clear how
they apply to the case when there is overflow.
What is not defined is when a value overflows (there are different definitions). And what is the consequence of the overflow (at runtime, there may be traps).
We're talking about floating point constants here. The standard clearly specifies that "Floating constants are converted to internal format as
if at translation-time. The conversion of a floating constant shall not
raise an exceptional condition or a floating-point exception at
execution time." Runtime behavior is not the issue, and traps are not allowed.
The standard describes two cases: if infinities are supported (as they necessarily are when IEEE formats are used), INFINITY is required to
expand to a constant expression that represents positive or unsigned infinity. This is not outside the range of representable values - that
range includes either positive or unsigned infinity, so the constraint
in 6.4.4p2 is not violated.
If infinities are not supported (which is therefore necessarily not an
IEEE format), then INFINITY is required to expand to a constant that
will overflow. This does violate that constraint, which means that a diagnostic message is required.
It's normally the case that, when a constraint is violated, the behavior
is undefined. However, that's not because of anything the standard says
about constraint violations in general. It's because, in most cases, the behavior is undefined "by the omission of any explicit definition of behavior." (4p2). However, this is one of the rare exceptions: there is
no such omission. There is a general definition of the behavior that continues to apply in a perfectly clear fashion even in the event of overflow. Therefore, an implementation is required to assign a value to
such a constant that is one of the two identified by that definition,
either FLT_MAX or nextdownf(FLT_MAX).
But DBL_NORM_MAX is the relevant value for the general definition
of "overflow" (on double). So in 7.12p4, "overflows" is not used
correctly, at least not with the usual meaning.
What do you consider the "general definition of overflow"?
The one given by the standard in 7.12.1p5.
I would have thought you were referring to 7.12.1p5, but I see no
wording there that distinguishes between normalized and unnormalized
values.
"A floating result overflows if the magnitude of the mathematical
result is finite but so large that the mathematical result cannot
be represented without extraordinary roundoff error in an object
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
of the specified type."
If the exact result is above the maximum normal value, there is
likely to be an extraordinary roundoff error.
Your comment made me realize that I had no idea how DBL_NORM_MAX could possibly be less than DBL_MAX. I did some searching, and discovered
official text of a committee decision indicating that they are normally
the same - the only exception known to the committee was systems that implemented a long double as the sum of a pair of doubles, for which
LDBL_MAX == 2.0L*DBL_MAX, while LDBL_NORM_MAX is just slightly larger
than DBL_MAX.
However, I'm confused about how this connects to the standard's
definition of normalized floating-point numbers: "f_1 > 0"
(5.2.4.2.2p4). It seems to me that, even for the pair-of-doubles format, LDBL_MAX is represented by a value with f_1 = 1, and therefore is a normalized floating point number that is larger than LDBL_NORM_MAX,
which strikes me as a contradiction.
In any event, INFINITY is required to expand into an expression of type "float", so if the only known exception involves long double, it's not
very relevant.
...
... I'm not saying that there
is a conflict, just that the text is ambiguous. If one follows
the IEEE 754 definition, there are only two possible values
(DBL_MAX and +Inf, thus excluding nextdown(DBL_MAX)).
Yes, that was deliberate - it was intended to be compatible with IEC
60559, but also to be sufficiently loose to allow use of non-IEC 60559
floating point.
But what is allowed is not clear for an IEEE 754 format (this does
not affect the INFINITY macro, but users could write exact values
larger than DBL_MAX + 1 ulp, for which nextdown(DBL_MAX) could be unexpected as the obtained value).
It's unexpected because that would violate a requirement of IEEE 754,
but the C standard doesn't require violating that requirement. Section 6.4.4.2p4 of the C standard allows such a constant to have any one of
the three values (+infinity, FLT_MAX, or nextdownf(FLT_MAX)).
Therefore, an implementation that wants to conform to both the C
standard and IEEE 754 must select FLT_MAX. What's unclear or ambiguous
about that?
In article <smah3q$a9f$1@dont-email.me>,
James Kuyper <jameskuyper@alumni.caltech.edu> wrote:
On 11/7/21 9:44 PM, Vincent Lefevre wrote:...
These rules are not about overflow. They are general rules.
Yes, and they are sufficiently general that it is perfectly clear how
they apply to the case when there is overflow.
I've done some tests, and it is interesting to see that both GCC and
Clang choose the IEEE 754 definition of overflow on floating-point
constants, not yours (<sl9bqb$hf5$2@dont-email.me>).
... For instance,
the exact value of 0x1.fffffffffffff7p1023 is larger than DBL_MAX,
but it doesn't trigger an overflow warning with GCC and Clang.
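A sketch of the kind of test meant here, assuming IEEE 754 double (the names are illustrative): the first constant's exact value is above DBL_MAX but below DBL_MAX + 1/2 ulp, so it rounds to DBL_MAX and draws no warning; the second is past the round-to-nearest overflow threshold and does.

    static const double not_overflowing = 0x1.fffffffffffff7p1023;  /* rounds to DBL_MAX */
    static const double overflowing     = 0x1.0p1024;               /* overflows: GCC and Clang warn */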
What is not defined is when a value overflows (there are different
definitions). And what is the consequence of the overflow (at runtime,
there may be traps).
We're talking about floating point constants here. The standard clearly
specifies that "Floating constants are converted to internal format as
if at translation-time. The conversion of a floating constant shall not
raise an exceptional condition or a floating-point exception at
execution time." Runtime behavior is not the issue, and traps are not
allowed.
I agree. But the question is whether the compiler may choose to
stop the compilation.
There is a confusion in the standard, because 6.4.4p2 says
"the value of a constant" while "value" is defined by 3.19
and means the value of the object, whereas I suspect that
6.4.4p2 intends to mean the *exact* value.
The standard describes two cases: if infinities are supported (as they
necessarily are when IEEE formats are used), INFINITY is required to
expand to a constant expression that represents positive or unsigned
infinity. This is not outside the range of representable values - that
range includes either positive or unsigned infinity, so the constraint
in 6.4.4p2 is not violated.
The range includes all real numbers, but not infinities.
... No issues
with INFINITY, but my remark was about the case a user would write
a constant like 0x1.0p1024 (or 1.0e999). Such a constant is in the
range of floating-point numbers (which is the set of real numbers in
this case), but it overflows with the IEEE 754 meaning,
and both GCC and Clang emit a warning for this reason.
Note that if the intent were "exceeds the range", the C standard
should have said that.
If infinities are not supported (which is therefore necessarily not an
IEEE format), then INFINITY is required to expand to a constant that
will overflow. This does violate that constraint, which means that a
diagnostic message is required.
This point is not clear and does not match what implementations
consider as overflow.
I think that I was initially confused by the meaning of "value"
in 6.4.4p2, as it seems to imply that a converted value may be
outside the range of representable values.
... It seems that it was
written mainly with integer constants in mind.
But there's still the fact that "overflow" is not defined (this
term is used only when there are no infinities, though).
However, I'm confused about how this connects to the standard's
definition of normalized floating-point numbers: "f_1 > 0"
(5.2.4.2.2p4). It seems to me that, even for the pair-of-doubles format,
LDBL_MAX is represented by a value with f_1 = 1, and therefore is a
normalized floating point number that is larger than LDBL_NORM_MAX,
which strikes me as a contradiction.
Note that there is a requirement on the exponent: e ≤ e_max.
But what is allowed is not clear for an IEEE 754 format (this does
not affect the INFINITY macro, but users could write exact values
larger than DBL_MAX + 1 ulp, for which nextdown(DBL_MAX) could be
unexpected as the obtained value).
It's unexpected because that would violate a requirement of IEEE 754,
but the C standard doesn't require violating that requirement. Section
6.4.4.2p4 of the C standard allows such a constant to have any one of
the three values (+infinity, FLT_MAX, or nextdownf(FLT_MAX)).
Therefore, an implementation that wants to conform to both the C
standard and IEEE 754 must select FLT_MAX. What's unclear or ambiguous
about that?
If Annex F is not claimed to be supported[*], this requirement would
not be violated.
On 11/8/21 5:56 AM, Vincent Lefevre wrote:
In article <smah3q$a9f$1@dont-email.me>,
James Kuyper <jameskuyper@alumni.caltech.edu> wrote:
On 11/7/21 9:44 PM, Vincent Lefevre wrote:...
These rules are not about overflow. They are general rules.
Yes, and they are sufficiently general that it is perfectly clear how
they apply to the case when there is overflow.
I've done some tests, and it is interesting to see that both GCC and
Clang choose the IEEE 754 definition of overflow on floating-point constants, not yours (<sl9bqb$hf5$2@dont-email.me>).
The only definition for overflow that I discussed is not mine, it
belongs to the C standard: "A floating result overflows if the magnitude (absolute value) of the mathematical result is finite but so large that
the mathematical result cannot be represented without extraordinary
roundoff error in an object of the specified type." (7.12.1p5).
... For instance, the exact value of 0x1.fffffffffffff7p1023 is
larger than DBL_MAX, but it doesn't trigger an overflow warning
with GCC and Clang.
No warning is mandated for overflows, so that doesn't contradict
anything I said.
I wasn't talking about overflow for its own sake, but only in the
context of what the standard says about the value of floating point constants. What value does that constant have? Is it one of the three
values permitted by 6.4.4.2p4? Is it, in particular, the value required
by IEEE 754? If the answers to both questions are yes, it's consistent
with everything I said.
I agree. But the question is whether the compiler may choose to
stop the compilation.
I don't remember that issue having previously been raised.
"The implementation shall not successfully translate a preprocessing translation unit containing a #error preprocessing directive unless it
is part of a group skipped by conditional inclusion." (4p4).
"The implementation shall be able to translate and execute at least one program that contains at least one instance of every one of the
following limits:" (5.2.4.1p1).
In all other cases, stopping compilation is neither mandatory nor
prohibited.
...
The standard describes two cases: if infinities are supported (as they
necessarily are when IEEE formats are used), INFINITY is required to
expand to a constant expression that represents positive or unsigned
infinity. This is not outside the range of representable values - that
range includes either positive or unsigned infinity, so the constraint
in 6.4.4p2 is not violated.
The range includes all real numbers, but not infinities.
For an implementation that supports infinities (in other words, an implementation where infinities are representable), how do infinities
fail to qualify as being within the range of representable values? Where
is that exclusion specified?
Such formats correspond to affinely extended real number systems,
which differ from ordinary real number systems by including
-infinity and +infinity. IEEE 754 specifies that infinities are to
be interpreted in the affine sense.
and both GCC and Clang emit a warning for this reason.
Note that if the intent were "exceeds the range", the C standard
should have said that.
I'm sorry - I seem to have lost the thread of your argument. In which location in the current standard do you think the current wording would
need to be changed to "exceeds the range", in order to support my argument?
Which current phrase would need to be replaced, and why?
If infinities are not supported (which is therefore necessarily not an
IEEE format), then INFINITY is required to expand to a constant that
will overflow. This does violate that constraint, which means that a
diagnostic message is required.
This point is not clear and does not match what implementations
consider as overflow.
Which implementations did you test on, which don't support infinities,
in order to justify that conclusion?
But there's still the fact that "overflow" is not defined (this
term is used only when there are no infinities, though).
7.12.1p5 is not marked as a definition for "overflows", but has the form
of a definition. There is no restriction within 7.12.1 to
implementations that don't support infinities.
However, I'm confused about how this connects to the standard's
definition of normalized floating-point numbers: "f_1 > 0"
(5.2.4.2.2p4). It seems to me that, even for the pair-of-doubles format, LDBL_MAX is represented by a value with f_1 = 1, and therefore is a
normalized floating point number that is larger than LDBL_NORM_MAX,
which strikes me as a contradiction.
Note that there is a requirement on the exponent: e ≤ e_max.
Yes, and DBL_MAX has e==e_max.
In article <smbrgo$g4b$1@dont-email.me>,...
James Kuyper <jameskuyper@alumni.caltech.edu> wrote:
On 11/8/21 5:56 AM, Vincent Lefevre wrote:
In article <smah3q$a9f$1@dont-email.me>,
James Kuyper <jameskuyper@alumni.caltech.edu> wrote:
The only definition for overflow that I discussed is not mine, it
belongs to the C standard: "A floating result overflows if the magnitude
(absolute value) of the mathematical result is finite but so large that
the mathematical result cannot be represented without extraordinary
roundoff error in an object of the specified type." (7.12.1p5).
That's in the C standard. But in <sl9bqb$hf5$2@dont-email.me>, you
said: "Overflow occurs when a floating constant is created whose
value is greater than DBL_MAX or less than -DBL_MAX."
So... I don't understand what you consider as an overflow.
I wasn't talking about overflow for its own sake, but only in the
context of what the standard says about the value of floating point
constants. What value does that constant have? Is it one of the three
values permitted by 6.4.4.2p4? Is it, in particular, the value required
by IEEE 754? If the answers to both questions are yes, it's consistent
with everything I said.
The second answer is not "yes", in case nextdown(DBL_MAX) would be
returned.
I agree. But the question is whether the compiler may choose to
stop the compilation.
I don't remember that issue having previously been raised.
"The implementation shall not successfully translate a preprocessing
translation unit containing a #error preprocessing directive unless it
is part of a group skipped by conditional inclusion." (4p4).
"The implementation shall be able to translate and execute at least one
program that contains at least one instance of every one of the
following limits:" (5.2.4.1p1).
In all other cases, stopping compilation is neither mandatory nor
prohibited.
Well, from this point of view, an implementation is free to regard
an overflowing constant as not having a defined behavior and stop compilation.
For an implementation that supports infinities (in other words, an
implementation where infinities are representable), how do infinities
fail to qualify as being within the range of representable values? Where
is that exclusion specified?
5.2.4.2.2p5. Note that it seems that it is intended to exclude
some representable values from the range. Otherwise such a long
specification of the range would not be needed.
That said, either this specification seems incorrect or there are
several meanings of "range". For instance, 5.2.4.2.2p9 says "Except
for assignment and cast (which remove all extra range and precision)",
and here, the intent is to limit the range to the emax exponent of
the considered type.
Such formats correspond to affinely extended real number systems,
which differ from ordinary real number systems by including
-infinity and +infinity. IEEE 754 specifies that infinities are to
be interpreted in the affine sense.
Yes, but I'm not sure that the exclusion of infinities from the range
has any consequence. For instance, 6.3.1.5 says:
When a value of real floating type is converted to a real floating type,
if the value being converted can be represented exactly in the new type,
it is unchanged. If the value being converted is in the range of values
that can be represented but cannot be represented exactly, the result is
either the nearest higher or nearest lower representable value, chosen
in an implementation-defined manner. If the value being converted is
outside the range of values that can be represented, the behavior is
undefined. [...]
So, if infinity is representable in both types, we are in the first
case ("can be represented exactly"), and the range is not used.
and both GCC and Clang emit a warning for this reason.
Note that if the intent were "exceeds the range", the C standard
should have said that.
I'm sorry - I seem to have lost the thread of your argument. In which
location in the current standard do you think the current wording would
need to be changed to "exceeds the range", in order to support my argument?
Which current phrase would need to be replaced, and why?
I don't remember exactly, but I think that was 7.12p4 to make it
consistent with its footnote (which refers to 6.4.4).
If infinities are not supported (which is therefore necessarily not an IEEE format), then INFINITY is required to expand to a constant that
will overflow. This does violate that constraint, which means that a
diagnostic message is required.
This point is not clear and does not match what implementations
consider as overflow.
Which implementations did you test on, which don't support infinities,
in order to justify that conclusion?
Note that the notion of overflow as defined by 7.12.1p5 (which is
consistent with the particular case of IEEE 754) exists whether
infinities are supported or not.
And for implementations without infinities, see the GCC code: gcc/c-family/c-lex.c
if (REAL_VALUE_ISINF (real)
|| (const_type != type && REAL_VALUE_ISINF (real_trunc)))
{
*overflow = OT_OVERFLOW;
if (!(flags & CPP_N_USERDEF))
{
if (!MODE_HAS_INFINITIES (TYPE_MODE (type)))
pedwarn (input_location, 0,
"floating constant exceeds range of %qT", type);
else
warning (OPT_Woverflow,
"floating constant exceeds range of %qT", type);
}
}
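For illustration, a one-line input (a hedged sketch, not taken from any test suite) that reaches the code above on a typical IEEE target: 1e1000 rounds to +infinity in double, so REAL_VALUE_ISINF (real) is true, and since binary64 has infinities the OPT_Woverflow warning branch is taken rather than the pedwarn branch.

/* overflow.c (hypothetical name): triggers the warning branch above
   on a target whose double format has infinities. */
double d = 1e1000;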
However, I'm confused about how this connects to the standard's
definition of normalized floating-point numbers: "f_1 > 0"
(5.2.4.2.2p4). It seems to me that, even for the pair-of-doubles format, LDBL_MAX is represented by a value with f_1 = 1, and therefore is a
normalized floating point number that is larger than LDBL_NORM_MAX,
which strikes me as a contradiction.
Note that there is a requirement on the exponent: e ≤ e_max.
Yes, and DBL_MAX has e==e_max.
No, not necessarily. DBL_NORM_MAX has e == e_max. But DBL_MAX may
have a larger exponent. The C2x draft says:
maximum representable finite floating-point number; if that number
is normalized, its value is (1 − b^(−p)) b^(e_max).
On 11/8/21 9:48 PM, Vincent Lefevre wrote:
In article <smbrgo$g4b$1@dont-email.me>,
James Kuyper <jameskuyper@alumni.caltech.edu> wrote:
...On 11/8/21 5:56 AM, Vincent Lefevre wrote:
In article <smah3q$a9f$1@dont-email.me>,
James Kuyper <jameskuyper@alumni.caltech.edu> wrote:
The only definition for overflow that I discussed is not mine, it
belongs to the C standard: "A floating result overflows if the magnitude (absolute value) of the mathematical result is finite but so large that
the mathematical result cannot be represented without extraordinary
roundoff error in an object of the specified type." (7.12.1p5).
That's in the C standard. But in <sl9bqb$hf5$2@dont-email.me>, you
said: "Overflow occurs when a floating constant is created whose
value is greater than DBL_MAX or less than -DBL_MAX."
So... I don't understand what you consider as an overflow.
At that time, I was unaware of the existence of any floating point
format where DBL_NORM_MAX < DBL_MAX. I've since acknowledged that such
things can occur - but only in such obscure formats.
I wasn't talking about overflow for its own sake, but only in the
context of what the standard says about the value of floating point
constants. What value does that constant have? Is it one of the three
values permitted by 6.4.4.2p4? Is it, in particular, the value required
by IEEE 754? If the answers to both questions are yes, it's consistent
with everything I said.
The second answer is not "yes", in case nextdown(DBL_MAX) would be returned.
I'm asking what value you observed - was it nextdown(DBL_MAX), DBL_MAX, +infinity, or something else? The first three are permitted by the C standard, the second one is mandated by IEEE 754, so I would expect an implementation that claimed conformance to both standards to choose
DBL_MAX, and NOT nextdown(DBL_MAX). So - which value did you see?
What renders the behavior undefined? On an implementation that doesn't support infinities, it's a constraint violation - but constraint
violations don't necessarily have undefined behavior. They usually have undefined behavior due to "omission of any explicit definition of the behavior", but there is in fact an explicit definition of the behavior
that continues to apply even when that constraint is violated.
And on an implementation that does support infinities, it isn't even a constraint violation.
...
For an implementation that supports infinities (in other words, an
implementation where infinities are representable), how do infinities
fail to qualify as being within the range of representable values? Where is that exclusion specified?
5.2.4.2.2p5. Note that it seems that it is intended to exclude
some representable values from the range. Otherwise such a long specification of the range would not be needed.
That clause correctly states that infinities do NOT qualify as
floating point numbers.
However, it also correctly refers to them as values. The relevant
clauses refer to the range of representable values, not the range of representable floating point numbers. On such an implementation,
infinities are representable and they are values.
What are you referring to when you say "such a long specification"?
That said, either this specification seems incorrect or there are
several meanings of "range". For instance, 5.2.4.2.2p9 says "Except
for assignment and cast (which remove all extra range and precision)",
and here, the intent is to limit the range to the emax exponent of
the considered type.
I had to go back to n1570.pdf to find that wording. It was removed from n2310.pdf (2018-11-06). In n2596.pdf (2020-12-11), wording about the
extra range was placed in footnote 22, referred to by 5.2.4.2.2p4, and
is still there in the latest draft I have, n2731.pdf (2021-10-18).
I believe that "extra range" refers to extra representable values that
are supported by the evaluation format, but not by the format of the
type itself. The extra range consists entirely of finite values, even if
the full range is infinite for both formats.
I don't remember exactly, but I think that was 7.12p4 to make it
consistent with its footnote (which refers to 6.4.4).
In the latest draft standard that I have, that wording is now in 7.12p7.
I've already conceded that "overflows" is not necessarily the same as "exceeds the range". However, the only known exception is for a long
double type, which can't apply to INFINITY, which is what 7.12p7 describes.
Note that the notion of overflow as defined by 7.12.1p5 (which is consistent with the particular case of IEEE 754) exists whether
infinities are supported or not.
Yes, but INFINITY is only required to overflow, which is what you were talking about, on implementations that don't support infinities. So, in
order to justify saying that it "does not match what implementations
consider as overflow", you must necessarily be referring to
implementations that don't support infinities.
And for implementations without infinities, see the GCC code: gcc/c-family/c-lex.c
if (REAL_VALUE_ISINF (real)
|| (const_type != type && REAL_VALUE_ISINF (real_trunc)))
{
*overflow = OT_OVERFLOW;
if (!(flags & CPP_N_USERDEF))
{
if (!MODE_HAS_INFINITIES (TYPE_MODE (type)))
pedwarn (input_location, 0,
"floating constant exceeds range of %qT", type);
else
warning (OPT_Woverflow,
"floating constant exceeds range of %qT", type);
}
}
It's actually the behavior of that implementation in modes that do
support infinities that is most relevant to this discussion - it labels
an infinite value as exceeding the type's range, even if it does
support infinities. Apparently they are using "range" to refer to the
range of finite values - but I would consider the wording to be
misleading without the qualifier "finite".
However, I'm confused about how this connects to the standard's
definition of normalized floating-point numbers: "f_1 > 0"
(5.2.4.2.2p4). It seems to me that, even for the pair-of-doubles format, LDBL_MAX is represented by a value with f_1 = 1, and therefore is a
normalized floating point number that is larger than LDBL_NORM_MAX,
which strikes me as a contradiction.
Note that there is a requirement on the exponent: e ≤ e_max.
Yes, and DBL_MAX has e==e_max.
No, not necessarily. DBL_NORM_MAX has e == e_max. But DBL_MAX may
have a larger exponent. The C2x draft says:
maximum representable finite floating-point number; if that number
is normalized, its value is (1 − b^(−p)) b^(e_max).
So, what is the value of e for LDBL_MAX in the pair-of-doubles format?
What is the value of e_max?
If LDBL_MAX does not have e==e_max,
what is the largest representable value in that format that does
have e==e_max?
In article <86wnmoov7c.fsf@linuxsc.com>,
Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:
What occurs is defined behavior and (for implementations that do
not have the needed value for infinity) violates a constraint.
A diagnostic must be produced.
If this is defined behavior, where is the result of an overflow
defined by the standard? (I can see only 7.12.1p5, but this is
for math functions; here, this is a constant that overflows.)
In article <smd27p$28v$1@dont-email.me>,...
James Kuyper <jameskuyper@alumni.caltech.edu> wrote:
On 11/8/21 9:48 PM, Vincent Lefevre wrote:
In article <smbrgo$g4b$1@dont-email.me>,
James Kuyper <jameskuyper@alumni.caltech.edu> wrote:
I wasn't talking about overflow for it's own sake, but only in the
context of what the standard says about the value of floating point
constants. What value does that constant have? Is it one of the three
values permitted by 6.4.4.2p4? Is it, in particular, the value required by IEEE 754? If the answers to both questions are yes, it's consistent with everything I said.
The second answer is not "yes", in case nextdown(DBL_MAX) would be
returned.
I'm asking what value you observed - was it nextdown(DBL_MAX), DBL_MAX,
+infinity, or something else? The first three are permitted by the C
standard, the second one is mandated by IEEE 754, so I would expect an
implementation that claimed conformance to both standards to choose
DBL_MAX, and NOT nextdown(DBL_MAX). So - which value did you see?
This issue is not what one can observe on a subset of implementations,
but what is possible.
... The value nextdown(DBL_MAX) does not make much
sense when the implementation *knows* that the value is larger than
DBL_MAX because it exceeds the range (there is a diagnostic to tell
that to the user because of 6.4.4p2).
Actually it is when the mathematical result exceeds the range. 6.5p5
says: "If an /exceptional condition/ occurs during the evaluation of
an expression (that is, if the result is not mathematically defined or
not in the range of representable values for its type), the behavior
is undefined." So this appears to be an issue when infinity is not
supported.
I suppose that when the standard defines something, it assumes the
case where such an exceptional condition does not occur, unless
explicitly said otherwise (that's the whole point of 6.5p5). And in
the definitions concerning floating-point expressions, the standard
never distinguishes between an exceptional condition or not. For
instance, for addition, the standard just says "The result of the
binary + operator is the sum of the operands." (on the real numbers,
this operation is always mathematically well-defined, so the only
issue is results that exceed the range, introduced by 6.5p5).
For an implementation that supports infinities (in other words, an
implementation where infinities are representable), how do infinities
fail to qualify as being within the range of representable values? Where is that exclusion specified?
5.2.4.2.2p5. Note that it seems that it is intended to exclude
some representable values from the range. Otherwise such a long
specification of the range would not be needed.
That clause correctly states that infinities do NOT qualify as
floating point numbers.
Note that there are inconsistencies in the standard about what
it means by "floating-point numbers". It is sometimes used to
mean the value of a floating type. For instance, the standard
says for fabs: "The fabs functions compute the absolute value
of a floating-point number x." But I really don't think that
this function is undefined on infinities.
However, it also correctly refers to them as values. The relevant
clauses refer to the range of representable values, not the range of
representable floating point numbers. On such an implementation,
infinities are representable and they are values.
My point is that it says *real* numbers. And infinities are not
real numbers.
However, I'm confused about how this connects to the standard's
definition of normalized floating-point numbers: "f_1 > 0"
(5.2.4.2.2p4). It seems to me that, even for the pair-of-doubles format, LDBL_MAX is represented by a value with f_1 = 1, and therefore is a normalized floating point number that is larger than LDBL_NORM_MAX, which strikes me as a contradiction.
Note that there is a requirement on the exponent: e ≤ e_max.
Yes, and DBL_MAX has e==e_max.
No, not necessarily. DBL_NORM_MAX has e == e_max. But DBL_MAX may
have a larger exponent. The C2x draft says:
maximum representable finite floating-point number; if that number
is normalized, its value is (1 − b^(−p)) b^(e_max).
So, what is the value of e for LDBL_MAX in the pair-of-doubles format?
It should be DBL_MAX_EXP. What happens with double-double is that
for the maximum exponent of double, not all precision-p numbers
are representable (here, p = 106 = 2 * 53 historically, though
107 could actually be used thanks to the constraint below and the
limitation on the exponent discussed here).
The reason is that there is a constraint on the format in order
to make the double-double algorithms fast enough: if (x1,x2) is
a valid double-double number, then x1 must be equal to x1 + x2
rounded to nearest. So LDBL_MAX has the form:
.111...1110111...111 * 2^(DBL_MAX_EXP)
where both sequences 111...111 have 53 bits. Values above this
number would increase the exponent of x1 to DBL_MAX_EXP + 1,
which is above the maximum exponent for double; thus such values
are not representable.
The consequence is that e_max < DBL_MAX_EXP.
What is the value of e_max?
DBL_MAX_EXP - 1
If LDBL_MAX does not have e==e_max,
(LDBL_MAX has exponent e = e_max + 1.)
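A minimal sketch of that construction, assuming an IEEE 754 binary64 double and FLT_EVAL_METHOD == 0: the pair (x1, x2) below corresponds to the LDBL_MAX pattern described above, with x2 the largest double whose addition to x1 still rounds (to nearest) back to x1.

#include <float.h>
#include <stdio.h>

int main(void)
{
    double x1 = DBL_MAX;            /* (1 - 2^-53) * 2^1024 */
    double x2 = DBL_MAX * 0x1p-54;  /* (1 - 2^-53) * 2^970, just under ulp(x1)/2 */

    printf("x1 = %a\nx2 = %a\n", x1, x2);
    /* The double-double validity constraint: x1 + x2 rounds to x1. */
    printf("x1 == fl(x1 + x2): %d\n", x1 + x2 == x1);  /* expected: 1 */
    return 0;
}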
Why does it matter to you that such implementations are possible?
No such implementation can qualify as conforming to IEEE 754 - so
what? The C standard very deliberately does NOT require conformance
to IEEE 754,
... The value nextdown(DBL_MAX) does not make much
sense when the implementation *knows* that the value is larger than
DBL_MAX because it exceeds the range (there is a diagnostic to tell
that to the user because of 6.4.4p2).
You misunderstand the purpose of the specification in 6.4.4.2p4. It was
not intended that a floating point implementation would generate the
nearest representable value, and that the implementation of C would then arbitrarily choose to pick one of the other two adjacent representable
values. The reason was to accommodate floating point implementations
that couldn't meet the accuracy requirements of IEC 60559.
...
Actually it is when the mathematical result exceeds the range. 6.5p5
says: "If an /exceptional condition/ occurs during the evaluation of
an expression (that is, if the result is not mathematically defined or
not in the range of representable values for its type), the behavior
is undefined." So this appears to be an issue when infinity is not supported.
Conversion of a floating point constant into a floating point value is
not "evaluation of an expression", and therefore is not covered by
6.5p5. Such conversions are required to occur "as-if at translation
time", and exceptional conditions are explicitly prohibited.
Note that there are inconsistencies in the standard about what
it means by "floating-point numbers". It is sometimes used to
mean the value of a floating type. For instance, the standard
says for fabs: "The fabs functions compute the absolute value
of a floating-point number x." But I really don't think that
this function is undefined on infinities.
If __STDC_IEC_60559_BFP__ is pre#defined by the implementation, F10.4.3
not only allows fabs (±∞), it explicitly mandates that it return +∞.
However, it also correctly refers to them as values. The relevant
clauses refer to the range of representable values, not the range of
representable floating point numbers. On such an implementation,
infinities are representable and they are values.
My point is that it says *real* numbers. And infinities are not
real numbers.
In n2731.pdf, 5.2.4.2.2p5 says "An implementation may give zero and
values that are not floating-point numbers (such as infinities
and NaNs) a sign or may leave them unsigned. Wherever such values are unsigned, any requirement in this document to retrieve the sign shall
produce an unspecified sign, and any requirement to set the sign shall
be ignored."
Nowhere in that clause does it use the term "real".
Are you perhaps referring to 5.2.4.2.2p7?
...
However, I'm confused about how this connects to the standard's
definition of normalized floating-point numbers: "f_1 > 0"
(5.2.4.2.2p4). It seems to me that, even for the pair-of-doubles format,
LDBL_MAX is represented by a value with f_1 = 1, and therefore is a normalized floating point number that is larger than LDBL_NORM_MAX, which strikes me as a contradiction.
Note that there is a requirement on the exponent: e ≤ e_max.
Yes, and DBL_MAX has e==e_max.
No, not necessarily. DBL_NORM_MAX has e == e_max. But DBL_MAX may
have a larger exponent. The C2x draft says:
maximum representable finite floating-point number; if that number
is normalized, its value is (1 − b^(−p)) b^(e_max).
So, what is the value of e for LDBL_MAX in the pair-of-doubles format?
It should be DBL_MAX_EXP. What happens with double-double is that
for the maximum exponent of double, not all precision-p numbers
are representable (here, p = 106 = 2 * 53 historically, though
107 could actually be used thanks to the constraint below and the limitation on the exponent discussed here).
The reason is that there is a constraint on the format in order
to make the double-double algorithms fast enough: if (x1,x2) is
a valid double-double number, then x1 must be equal to x1 + x2
rounded to nearest. So LDBL_MAX has the form:
.111...1110111...111 * 2^(DBL_MAX_EXP)
where both sequences 111...111 have 53 bits. Values above this
number would increase the exponent of x1 to DBL_MAX_EXP + 1,
which is above the maximum exponent for double; thus such values
are not representable.
The consequence is that e_max < DBL_MAX_EXP.
What is the value of e_max?
DBL_MAX_EXP - 1
If LDBL_MAX does not have e==e_max,
(LDBL_MAX has exponent e = e_max + 1.)
That doesn't work. 5.2.4.2.2p2 and p3 both specify that floating point numbers must have e_min <= e && e <= e_max.
LDBL_MAX is defined as the "maximum finite floating point number".
Vincent Lefevre <vincent-news@vinc17.net> writes:
In article <86wnmoov7c.fsf@linuxsc.com>,
Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:
What occurs is defined behavior and (for implementations that do
not have the needed value for infinity) violates a constraint.
A diagnostic must be produced.
If this is defined behavior, where is the result of an overflow
defined by the standard? (I can see only 7.12.1p5, but this is
for math functions; here, this is a constant that overflows.)
I'm wondering if you have resolved your original uncertainty
about the behavior of INFINITY in an implementation that does
not support infinities?
In article <861r3pbbwh.fsf@linuxsc.com>,
Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:
Vincent Lefevre <vincent-news@vinc17.net> writes:
In article <86wnmoov7c.fsf@linuxsc.com>,
Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:
What occurs is defined behavior and (for implementations that do
not have the needed value for infinity) violates a constraint.
A diagnostic must be produced.
If this is defined behavior, where is the result of an overflow
defined by the standard? (I can see only 7.12.1p5, but this is
for math functions; here, this is a constant that overflows.)
I'm wondering if you have resolved your original uncertainty
about the behavior of INFINITY in an implementation that does
not support infinities?
I suspect that by saying "overflow", the standard actually meant that
the result is not in the range of representable values. This is the
only way the footnote "In this case, using INFINITY will violate the constraint in 6.4.4 and thus require a diagnostic." can make sense
(the constraint in 6.4.4 is about the range, not overflow). But IMHO,
the failing constraint makes the behavior undefined, actually makes
the program erroneous.
Similarly, on
static int i = 1 / 0;
int main (void)
{
return 0;
}
GCC fails to translate the program due to the failing constraint:
tst.c:1:16: error: initializer element is not constant
1 | static int i = 1 / 0;
| ^
(this is not just a diagnostic, GCC does not generate an
executable).
Ditto with Clang:
tst.c:1:18: error: initializer element is not a compile-time constant
static int i = 1 / 0;
~~^~~
In article <smecfc$jai$1@dont-email.me>,
James Kuyper <jameskuyper@alumni.caltech.edu> wrote:
Why does it matter to you that such implementations are possible?
When writing a portable program, one wants it to behave correctly
even on untested implementations (which can be known implementations
but without a machine available to test the program, implementations
unknown to the developer, and possible future implementations).
... The value nextdown(DBL_MAX) does not make much
sense when the implementation *knows* that the value is larger than
DBL_MAX because it exceeds the range (there is a diagnostic to tell
that to the user because of 6.4.4p2).
You misunderstand the purpose of the specification in 6.4.4.2p4. It was
not intended that a floating point implementation would generate the
nearest representable value, and that the implementation of C would then
arbitrarily choose to pick one of the other two adjacent representable
values. The reason was to accommodate floating point implementations
that couldn't meet the accuracy requirements of IEC 60559.
You didn't understand. I repeat. The implementation *knows* that the
value is larger than DBL_MAX. This knowledge is *required* by the C
standard so that the required diagnostic can be emitted (due to the constraint in 6.4.4p2). So there is no reason that the implementation
would assume that the value can be less than DBL_MAX.
This is not an accuracy issue, or if there is one, it occurs at the
level of the 6.4.4p2 constraint.
Actually it is when the mathematical result exceeds the range. 6.5p5
says: "If an /exceptional condition/ occurs during the evaluation of
an expression (that is, if the result is not mathematically defined or
not in the range of representable values for its type), the behavior
is undefined." So this appears to be an issue when infinity is not
supported.
Conversion of a floating point constant into a floating point value is
not "evaluation of an expression", and therefore is not covered by
6.5p5. Such conversions are required to occur "as-if at translation
time", and exceptional conditions are explicitly prohibited.
But what about constant expressions?
For instance, assuming no IEEE 754 support, what is the behavior of
the following code?
static double x = DBL_MAX + DBL_MAX;
If one ignores 6.5p5 because this is a translation-time computation,
I find the standard rather ambiguous on what is required.
Note that there is a constraint 6.6p4 "Each constant expression shall evaluate to a constant that is in the range of representable values
for its type." but this is of the same kind as 6.4.4p2 for constants.
And what about the following?
static int i = 2 || 1 / 0;
Note that there are inconsistencies in the standard about what
it means by "floating-point numbers". It is sometimes used to
mean the value of a floating type. For instance, the standard
says for fabs: "The fabs functions compute the absolute value
of a floating-point number x." But I really don't think that
this function is undefined on infinities.
If __STDC_IEC_60559_BFP__ is pre#defined by the implementation, F10.4.3
not only allows fabs (±∞), it explicitly mandates that it return +∞.
The issue is when __STDC_IEC_60559_BFP__ is not defined but infinities
are supported (as allowed by the standard).
(LDBL_MAX has exponent e = e_max + 1.)
That doesn't work. 5.2.4.2.2p2 and p3 both specify that floating point
numbers must have e_min <= e && e <= e_max.
Yes, *floating-point numbers*.
LDBL_MAX is defined as the "maximum finite floating point number".
I'd see this as a defect in N2731. As I was saying earlier, the
standard does not use "floating-point number" in a consistent way.
This was discussed, but it seems that not everything was fixed.
As an attempt to clarify this point, "normalized" was added, but
this may not have been the right thing.
The purpose of LDBL_MAX is to be able to be a finite value larger
than LDBL_NORM_MAX,
... which is the maximum floating-point number
following the 5.2.4.2.2p3 definition. LDBL_NORM_MAX was introduced precisely because LDBL_MAX does not necessarily follow the model
of 5.2.4.2.2p3 (i.e. LDBL_MAX isn't necessarily a floating-point
number).
Vincent Lefevre <vincent-news@vinc17.net> writes:
In article <861r3pbbwh.fsf@linuxsc.com>,
Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:
Vincent Lefevre <vincent-news@vinc17.net> writes:
In article <86wnmoov7c.fsf@linuxsc.com>,
Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:
What occurs is defined behavior and (for implementations that do
not have the needed value for infinity) violates a constraint.
A diagnostic must be produced.
If this is defined behavior, where is the result of an overflow
defined by the standard? (I can see only 7.12.1p5, but this is
for math functions; here, this is a constant that overflows.)
I'm wondering if you have resolved your original uncertainty
about the behavior of INFINITY in an implementation that does
not support infinities?
I suspect that by saying "overflow", the standard actually meant that
the result is not in the range of representable values. This is the
only way the footnote "In this case, using INFINITY will violate the
constraint in 6.4.4 and thus require a diagnostic." can make sense
(the constraint in 6.4.4 is about the range, not overflow). But IMHO,
the failing constraint makes the behavior undefined, actually makes
the program erroneous.
Suppose we have an implementation that does not support
infinities, a range of double and long double up to about ten to
the 99999, and ask it to translate the following .c file
double way_too_big = 1.e1000000;
This constant value violates the constraint in 6.4.4. Do you
think this .c file (and any program it is part of) has undefined
behavior? If so, do you think any constraint violation implies
undefined behavior, or just some of them?
On 11/10/21 7:48 AM, Vincent Lefevre wrote:
In article <smecfc$jai$1@dont-email.me>,...
... The value nextdown(DBL_MAX) does not make much
sense when the implementation *knows* that the value is larger than
DBL_MAX because it exceeds the range (there is a diagnostic to tell
that to the user because of 6.4.4p2).
You misunderstand the purpose of the specification in 6.4.4.2p4. It was
not intended that a floating point implementation would generate the
nearest representable value, and that the implementation of C would then arbitrarily choose to pick one of the other two adjacent representable
values. The reason was to accommodate floating point implementations
that couldn't meet the accuracy requirements of IEC 60559.
You didn't understand. I repeat. The implementation *knows* that the
value is larger than DBL_MAX. This knowledge is *required* by the C standard so that the required diagnostic can be emitted (due to the constraint in 6.4.4p2). So there is no reason that the implementation
would assume that the value can be less than DBL_MAX.
This is not an accuracy issue, or if there is one, it occurs at the
level of the 6.4.4p2 constraint.
I assume we've been talking about implementations that conform to the C standard, right? Otherwise there's nothing meaningful that can be said.
6.4.4.2p4 describes accuracy requirements that allow the result you find objectionable. I've been talking about the fact that those requirements
are a little bit more lenient than those imposed by IEEE 754, because
those looser requirements allow a slightly simpler implementation, one
which might use up less code space or execute somewhat faster, at the
cost of lower accuracy.
As you should see, the maximum error allowed by the C standard is not enormously larger than the maximum error allowed by IEEE 754.
You're worried about the possibility of an implementation conforming to
the C standard by returning nextdown(DBL_MAX), despite the fact that, in order to conform, the implementation would also have to generate that diagnostic message?
This means that there must be a block of code in the compiler
somewhere, which issues that diagnostic, and which only gets
executed when that constraint is violated, but for some reason the implementor chose not to add code to that block to set the value to
DBL_MAX. If you're worried about that possibility, that implies that
you can imagine a reason why someone might do that. What might that
reason be?
For the sake of argument, let's postulate that a given implementor does
in fact have some reason to do that. If that's the case, there's
something I can guarantee to you: that implementor considers such an
error to be acceptably small, and believes that a sufficiently large
fraction of the users of his implementation will agree. If the
implementor is wrong about that second point, people will eventually
stop using his implementation. If he's right about that point - if both
he and the users of his implementation consider such inaccuracy
acceptable - why should he change his implementation just because you consider it unacceptable? You wouldn't be a user of such an
implementation anyway, right?
For instance, assuming no IEEE 754 support, what is the behavior of
the following code?
static double x = DBL_MAX + DBL_MAX;
That involves addition, and is therefore covered by 5.2.4.2.2p8, which I quoted in my previous message.
If one ignores 6.5p5 because this is a translation-time computation,
I find the standard rather ambiguous on what is required.
Floating point constants are required to be evaluated as-if at translation-time.
Constant expressions are permitted to be evaluated at translation-time,
but it is not required.
And what about the following?
static int i = 2 || 1 / 0;
Integer division is far more tightly constrained by the C standard than floating point division (it would be really difficult, bordering on impossible, for something to constrain floating point division more
loosely than the C standard does).
...
(LDBL_MAX has exponent e = e_max + 1.)
That doesn't work. 5.2.4.2.2p2 and p3 both specify that floating point
numbers must have e_min <= e && e <= e_max.
Yes, *floating-point numbers*.
LDBL_MAX is defined as the "maximum finite floating point number".
I'd see this as a defect in N2731. As I was saying earlier, the
standard does not use "floating-point number" in a consistent way.
This was discussed, but it seems that not everything was fixed.
As an attempt to clarify this point, "normalized" was added, but
this may not have been the right thing.
The purpose of LDBL_MAX is to be able to be a finite value larger
than LDBL_NORM_MAX,
No, LDBL_MAX is allowed to be larger than LDBL_NORM_MAX,
but the committee made it clear that they expected LDBL_MAX and
LDBL_NORM_MAX to have the same value on virtually all real-world implementations.
... which is the maximum floating-point number
following the 5.2.4.2.2p3 definition. LDBL_NORM_MAX was introduced precisely because LDBL_MAX does not necessarily follow the model
of 5.2.4.2.2p3 (i.e. LDBL_MAX isn't necessarily a floating-point
number).
I don't believe that was the intent.
Suppose we have an implementation that does not support
infinities, a range of double and long double up to about ten to
the 99999, and ask it to translate the following .c file
double way_too_big = 1.e1000000;
This constant value violates the constraint in 6.4.4. Do you
think this .c file (and any program it is part of) has undefined
behavior? If so, do you think any constraint violation implies
undefined behavior, or just some of them?
Similarly, on
static int i = 1 / 0;
int main (void)
{
return 0;
}
GCC fails to translate the program due to the failing constraint:
tst.c:1:16: error: initializer element is not constant
1 | static int i = 1 / 0;
| ^
(this is not just a diagnostic, GCC does not generate an
executable).
Note that the C standard does not distinguish between "errors"
and "warnings" in diagnostic messages. Either is allowed
regardless of whether undefined behavior is present.
Ditto with Clang:
tst.c:1:18: error: initializer element is not a compile-time constant
static int i = 1 / 0;
~~^~~
The messages indicate that the failure is not about exceeding the
range of a type, but rather about satisfying the constraints for
constant expressions, in particular 6.6 p4, which says in part
Each constant expression shall evaluate to a constant [...]
The problem here is that 1/0 doesn't evaluate to anything,
because division by 0 is not defined. Any question of range of
representable values doesn't enter into it.
I think it's a tricky question. I think the language would be cleaner
if the standard explicitly stated that violating a constraint always
results in undefined behavior -- or if it explicitly stated that it
doesn't. (The former is my personal preference.)
The semantics of floating constants specify that the value is "either
the nearest representable value, or the larger or smaller representable
value immediately adjacent to the nearest representable value, chosen in
an implementation-defined manner". Given that infinities are not
supported, that would be DBL_MAX or its predecessor. Based on that, I'd
say that:
- A diagnostic is required.
- A compiler may reject the program.
- If the compiler doesn't reject the program, the value of way_too_big
must be DBL_MAX or its predecessor. (Making it the predecessor of
DBL_MAX would be weird but conforming.)
*Except* that the definition of "constraint" is "restriction, either syntactic or semantic, by which the exposition of language elements is
to be interpreted". I find that rather vague, but it could be
interpreted to mean that if a constraint is violated, there is no valid interpretation of language elements.
In article <smgu08$3r1$1@dont-email.me>,...
James Kuyper <jameskuyper@alumni.caltech.edu> wrote:
I assume we've been talking about implementations that conform to the C
standard, right? Otherwise there's nothing meaningful that can be said.
The issue is more related to (strictly) conforming programs.
On 11/12/21 6:17 PM, Vincent Lefevre wrote:
In article <smgu08$3r1$1@dont-email.me>,...
James Kuyper <jameskuyper@alumni.caltech.edu> wrote:
I assume we've been talking about implementations that conform to the C
standard, right? Otherwise there's nothing meaningful that can be said.
The issue is more related to (strictly) conforming programs.
It can't be. Strictly conforming programs are prohibited from having
output that depends upon behavior that the standard leaves unspecified.
6.4.4.2p4 identifies what is usually three different possible values for
each floating point constant (four if the constant describes a value
exactly half-way between two consecutive representable values, but only
two if it describes a value larger than DBL_MAX or smaller than -DBL_MAX
on a platform that doesn't support infinities), and leaves it
unspecified which one of those values is chosen.
In article <86wnlg9ey7.fsf@linuxsc.com>,
Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:
Suppose we have an implementation that does not support
infinities, a range of double and long double up to about ten to
the 99999, and ask it to translate the following .c file
double way_too_big = 1.e1000000;
This constant value violates the constraint in 6.4.4. Do you
think this .c file (and any program it is part of) has undefined
behavior? If so, do you think any constraint violation implies
undefined behavior, or just some of them?
I think that constraints are there to define conditions under which specifications make sense. Thus, if a constraint is not satisfied,
behavior is undefined [...]
In article <smn6d8$c92$1@dont-email.me>,...
James Kuyper <jameskuyper@alumni.caltech.edu> wrote:
On 11/12/21 6:17 PM, Vincent Lefevre wrote:
The issue is more related to (strictly) conforming programs.
It can't be. Strictly conforming programs are prohibited from having
output that depends upon behavior that the standard leaves unspecified.
This is not how I interpret the standard.
Otherwise there would be
an obvious contradiction with note 3, which uses
#ifdef __STDC_IEC_559__
while the value of __STDC_IEC_559__ is not specified in the standard.
What matters is that the program needs to take every possibility into
account and make sure that the (visible) behavior is the same in each
case. So...
6.4.4.2p4 identifies what is usually three different possible values for
each floating point constant (four if the constant describes a value
exactly half-way between two consecutive representable values, but only
two if it describes a value larger than DBL_MAX or smaller than -DBL_MAX
on a platform that doesn't support infinities), and leaves it
unspecified which one of those values is chosen.
The program can deal with that in order to get the same behavior in
each case, so that it could be strictly conforming.
... However, if the
behavior is undefined (assumed as a consequence of the failed
constraint), there is *nothing* that one can do.
That said, since the floating-point accuracy is not specified, it can be extremely low and is not even checkable by the program (so that there
is no possible fallback in case of low accuracy), there is not much
one can do with floating point.
Suppose again we have an implementation that does not support
infinities and has a range of double and long double up to about
ten to the 99999. Question one: as far as the C standard is
concerned, is the treatment of this .c file
double way_too_big = 1.e1000000;
and of this .c file
#include <math.h>
double way_too_big = INFINITY;
the same in the two cases?
Question two: does the C standard require that at least one
diagnostic be issued for each of the above .c files?
In article <86v90t8l6j.fsf@linuxsc.com>,
Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:
Suppose again we have an implementation that does not support
infinities and has a range of double and long double up to about
ten to the 99999. Question one: as far as the C standard is
concerned, is the treatment of this .c file
double way_too_big = 1.e1000000;
and of this .c file
#include <math.h>
double way_too_big = INFINITY;
the same in the two cases?
IMHO, this is undefined behavior in both cases, due to the
unsatisfied constraint. So, yes.
On 11/15/21 4:18 AM, Vincent Lefevre wrote:
In article <smn6d8$c92$1@dont-email.me>,
James Kuyper <jameskuyper@alumni.caltech.edu> wrote:
...On 11/12/21 6:17 PM, Vincent Lefevre wrote:
The issue is more related to (strictly) conforming programs.
It can't be. Strictly conforming programs are prohibited from having
output that depends upon behavior that the standard leaves unspecified.
This is not how I interpret the standard.
I don't see how there's room for interpretation: "A strictly conforming program ... shall not produce output dependent on any unspecified ... behavior, ..." (4p6).
... However, if the
behavior is undefined (assumed as a consequence of the failed
constraint), there is *nothing* that one can do.
Yes, but nowhere does the standard specify that violating a constraint
does, in itself, render the behavior undefined. Most constraint
violations do render the behavior undefined "by omission of any explicit definition of the behavior", but not this one. You might not like the definition that 6.4.4.2p4 provides, but it does provide one.
That said, since the floating-point accuracy is not specified, it can be extremely low and is not even checkable by the program (so that there
is no possible fallback in case of low accuracy), there is not much
one can do with floating point.
??? You can check for __STDC_IEC_60559_BFP__; if it's defined, then
pretty much the highest possible accuracy is required.
On 11/15/21 6:39 PM, Vincent Lefevre wrote:
In article <86v90t8l6j.fsf@linuxsc.com>,
Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:
Suppose again we have an implementation that does not support
infinities and has a range of double and long double up to about
ten to the 99999. Question one: as far as the C standard is
concerned, is the treatment of this .c file
double way_too_big = 1.e1000000;
and of this .c file
#include <math.h>
double way_too_big = INFINITY;
the same in the two cases?
IMHO, this is undefined behavior in both cases, due to the
unsatisfied constraint. So, yes.
So, of the three ways used by the standard to indicate that the behavior
is undefined, which one was used in this case?
"If a "shall" or "shall not" requirement that appears outside of a
constraint or runtime-constraint is violated, the behavior is undefined.
Undefined behavior is otherwise indicated in this document by the words
"undefined behavior" or by the omission of any explicit definition of
behavior. There is no difference in emphasis among these three; they all
describe "behavior that is undefined"." (4p2).
I would expect the implementation to reject the code, or accept it
in a way unspecified by the standard (but the implementation could
document what happens, as an extension).
I don't see how there's room for interpretation: "A strictly conforming program ... shall not produce output dependent on any unspecified ... behavior, ..." (4p6).
#ifdef __STDC_IEC_559__
while the value of __STDC_IEC_559__ is not specified in the standard.
In article <smuvs5$6qh$1@dont-email.me>,
James Kuyper <jameskuyper@alumni.caltech.edu> wrote:
On 11/15/21 6:39 PM, Vincent Lefevre wrote:
In article <86v90t8l6j.fsf@linuxsc.com>,
Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:
Suppose again we have an implementation that does not support
infinities and has a range of double and long double up to about
ten to the 99999. Question one: as far as the C standard is
concerned, is the treatment of this .c file
double way_too_big = 1.e1000000;
and of this .c file
#include <math.h>
double way_too_big = INFINITY;
the same in the two cases?
IMHO, this is undefined behavior in both cases, due to the
unsatisfied constraint. So, yes.
So, of the three ways used by the standard to indicate that the behavior
is undefined, which one was used in this case?
"If a "shall" or "shall not" requirement that appears outside of a
constraint or runtime-constraint is violated, the behavior is undefined.
Undefined behavior is otherwise indicated in this document by the words
"undefined behavior" or by the omission of any explicit definition of
behavior. There is no difference in emphasis among these three; they all
describe "behavior that is undefined"." (4p2).
Omission of any explicit definition of behavior.
... There is a constraint
(restriction) that is not satisfied.
... Thus the code becomes invalid and
nothing gets defined as a consequence.
I would expect the implementation to reject the code, or accept it
in a way unspecified by the standard (but the implementation could
document what happens, as an extension).
In article <smuc7i$6hq$1@dont-email.me>,
James Kuyper <jameskuyper@alumni.caltech.edu> wrote:
On 11/15/21 4:18 AM, Vincent Lefevre wrote:
In article <smn6d8$c92$1@dont-email.me>,...
James Kuyper <jameskuyper@alumni.caltech.edu> wrote:
On 11/12/21 6:17 PM, Vincent Lefevre wrote:
The issue is more related to (strictly) conforming programs.
It can't be. Strictly conforming programs are prohibited from having
output that depends upon behavior that the standard leaves unspecified.
This is not how I interpret the standard.
I don't see how there's room for interpretation: "A strictly conforming
program ... shall not produce output dependent on any unspecified ...
behavior, ..." (4p6).
I'm not sure what you intended to mean, but IMHO, the "It can't be."
is wrong based on the unsatisfied constraint and definition 3.8 of "constraint" (but this should really be clarified).
That said, since the floating-point accuracy is not specified, it can be
extremely low and is not even checkable by the program (so that there
is no possible fallback in case of low accuracy), there is not much
one can do with floating point.
??? You can check for __STDC_IEC_60559_BFP__; if it's defined, then
pretty much the highest possible accuracy is required.
Indeed, well, almost I think. One should also check that
FLT_EVAL_METHOD is either 0 or 1. Otherwise the accuracy
becomes unknown.
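A minimal sketch of such a configuration check (macro names as used elsewhere in this thread; __STDC_IEC_559__ is the C99/C17 spelling, __STDC_IEC_60559_BFP__ the C2x one):

#include <float.h>

#if !defined (__STDC_IEC_559__) && !defined (__STDC_IEC_60559_BFP__)
#error "IEC 60559 (IEEE 754) floating-point support required"
#endif

#if FLT_EVAL_METHOD != 0 && FLT_EVAL_METHOD != 1
#error "unsupported FLT_EVAL_METHOD"
#endif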
All,
I don't see how there's room for interpretation: "A strictly conforming
program ... shall not produce output dependent on any unspecified ...
behavior, ..." (4p6).
Indeed.
Now the order of evaluation of binary operators is
unspecified. But this does not mean that all programs containing
at least one binary operator are not strictly conforming.
For instance, the order of evaluation of the two
operands in the following expression-statement is unspecified.
But unless they are volatile qualified the output does
not depend on the unspecified behavior:
x+y;
But in:
a[printf("Hello")]+a[printf(" World")];
the output does depend on the order of evaluation,
and a program containing this code is not strictly conforming.
#ifdef __STDC_IEC_559__
while the value of __STDC_IEC_559__ is not specified in the standard.
The output of a strictly conforming program does not depend on the implementation used.
Since the value of __STDC_IEC_559__ depends on the implementation,
its use can produce a program that is not strictly conforming.
On 11/15/21 8:28 PM, Vincent Lefevre wrote: [...]
In article <smuvs5$6qh$1@dont-email.me>,
James Kuyper <jameskuyper@alumni.caltech.edu> wrote:
"If a "shall" or "shall not" requirement that appears outside of a
constraint or runtime-constraint is violated, the behavior is undefined. Undefined behavior is otherwise indicated in this document by the words
"undefined behavior" or by the omission of any explicit definition of
behavior. There is no difference in emphasis among these three; they all describe "behavior that is undefined"." (4p2).
Omission of any explicit definition of behavior.
The fact that a constraint is violated does not erase the definition
provided by 6.4.4.2p4, or render it any less applicable.
... There is a constraint
(restriction) that is not satisfied.
Agreed.
... Thus the code becomes invalid and
nothing gets defined as a consequence.
This is, I presume, what makes you think that 6.4.4.2p4 is effectively erased?
The standard says nothing to that effect. The only meaningful thing it
says is that a diagnostic is required (5.1.1.3p1). I do not consider the standard's definition of "constraint" to be meaningful: "restriction,
either syntactic or semantic, by which the exposition of language
elements is to be interpreted" (3.8). What that sentence means, if
anything, is not at all clear, but one thing is clear - it says nothing about what should happen if the restriction is violated. 5.1.1.3p1 is
the only clause that says anything about that issue.
Note: the requirement specified in 5.1.1.3p1 would also be erased, if a constraint violation is considered to effectively erase unspecified
parts of the rest of the standard. Surely you don't claim that 3.8
specifies which parts get erased?
I think it's a tricky question. I think the language would be cleaner
if the standard explicitly stated that violating a constraint always
results in undefined behavior -- or if it explicitly stated that it
doesn't. (The former is my personal preference.)
??? You can check for __STDC_IEC_60559_BFP__; if it's defined, then
pretty much the highest possible accuracy is required.
Indeed, well, almost I think. One should also check that
FLT_EVAL_METHOD is either 0 or 1. Otherwise the accuracy
becomes unknown.
A value of 2 tells you that the implementation will evaluate "all
operations and constants to the range and precision of the long double
type", which is pretty specific about what the accuracy is. It has
precisely the same accuracy that it would have had on an otherwise
identical implementation where FLT_EVAL_METHOD == 0, if you explicitly converted all double operands to long double, and then converted the
final result back to double. Would you consider the accuracy of such
code to be unknown?
A value of -1 leaves some uncertainty about the accuracy. However, the evaluation format is allowed to have range or precision that is greater
than that of the expression's type. The accuracy of such a type might be greater than that of the expression's type, but it's not allowed to be
worse.
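To illustrate the point about FLT_EVAL_METHOD == 2, a hedged sketch (the function name is made up): under that evaluation method the plain double expression behaves as if every operand were first converted to long double, with a single conversion back to double on return.

#include <float.h>

double scaled_sum(double a, double b, double c)
{
#if FLT_EVAL_METHOD == 2
    return a * b + c;                           /* evaluated in long double */
#else
    return (double) ((long double) a * b + c); /* the explicit equivalent */
#endif
}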
The standard's definition of "constraint" is uncomfortably vague -- but
that doesn't mean I'm comfortable ignoring it.
Given the definition of a "constraint" as a "restriction, either
syntactic or semantic, by which the exposition of language elements is
to be interpreted", it seems to me to be at least plausible that when a constraint is violated, the "exposition of language elements" cannot be interpreted.
Tim Rentsch <tr.17687@z991.linuxsc.com> writes:
Vincent Lefevre <vincent-news@vinc17.net> writes:
[...]
Shouldn't the standard by changed to make INFINITY conditionally
defined (if not required to expand to a true infinity)? [...]
To me it seems better for INFINITY to be defined as it is rather
than being conditionally defined. If what is needed is really an
infinite value, just write INFINITY and the code either works or
compiling it gives a diagnostic. If what is needed is just a very
large value, write HUGE_VAL (or HUGE_VALF or HUGE_VALL, depending)
and the code works whether infinite floating-point values are
supported or not. If it's important that infinite values be
supported but we don't want to risk a compilation failure, use
HUGE_VAL combined with an assertion
assert( HUGE_VAL == HUGE_VAL/2 );
Alternatively, use INFINITY only in one small .c file, and give
other sources a make dependency for a successful compilation
(with of course a -pedantic-errors option) of that .c file. I
don't see that having INFINITY be conditionally defined buys
anything, except to more or less force use of #if/#else/#endif
blocks in the preprocessor. I don't mind using the preprocessor
when there is a good reason to do so, but here I don't see one.
I don't see how that's better than conditionally defining INFINITY.
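For reference, a minimal sketch of the HUGE_VAL-plus-assertion pattern described above (only an infinity is equal to half of itself):

#include <assert.h>
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Fails at run time on an implementation without infinities,
       where HUGE_VAL is merely a very large finite value. */
    assert(HUGE_VAL == HUGE_VAL / 2);
    printf("infinities are supported\n");
    return 0;
}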
In article <86sfxbpm9d.fsf@linuxsc.com>,
Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:
To me it seems better for INFINITY to be defined as it is rather
than being conditionally defined. If what is needed is really an
infinite value, just write INFINITY and the code either works or
compiling it gives a diagnostic.
diagnostic and undefined behavior. [...]
James Kuyper <jameskuyper@alumni.caltech.edu> writes:
On 10/9/21 4:17 PM, Vincent Lefevre wrote:
In article <86wnmoov7c.fsf@linuxsc.com>,
Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:
What occurs is defined behavior and (for implementations that do
not have the needed value for infinity) violates a constraint. A
diagnostic must be produced.
If this is defined behavior, where is the result of an overflow
defined by the standard? (I can see only 7.12.1p5, but this is
for math functions; here, this is a constant that overflows.)
"For decimal floating constants, and also for hexadecimal floating
constants when FLT_RADIX is not a power of 2, the result is either
the nearest representable value, or the larger or smaller
representable value immediately adjacent to the nearest
representable value, chosen in an implementation-defined manner.
For hexadecimal floating constants when FLT_RADIX is a power of 2,
the result is correctly rounded." (6.4.4.2p3)
In the case of overflow, for a type that cannot represent infinity,
there is only one "nearest representable value", which is DBL_MAX.
But does that apply when a constraint is violated?
6.4.4p2, a constraint, says:
Each constant shall have a type and the value of a constant shall be
in the range of representable values for its type.
A "constraint", aside from triggering a required diagnostic, is a "restriction, either syntactic or semantic, by which the exposition
of language elements is to be interpreted",
which is IMHO a bit vague.
My mental model is that if a program violates a constraint and the implementation still accepts it (i.e., the required diagnostic is a
non-fatal warning) the program's behavior is undefined -- but the
standard doesn't say that. Of course if the implementation rejects
the program, it has no behavior.
For what it's worth, given this:
double too_big = 1e1000;
gcc, clang, and tcc all print a warning and set too_big to infinity.
That's obviously valid if the behavior is undefined. I think it's
also valid if the behavior is defined; the nearest representable
value is DBL_MAX, and the larger representable value immediately
adjacent to DBL_MAX is infinity.
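A small test along those lines (assuming C99's isinf from <math.h>):

  #include <math.h>
  #include <stdio.h>

  int main(void)
  {
      double too_big = 1e1000;  /* overflows double; gcc, clang, and tcc
                                   warn and yield infinity, as noted above */
      printf("%s\n", isinf(too_big) ? "infinity" : "finite");
      return 0;
  }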
It doesn't seem to me to be particularly useful to say that a
program can be rejected, but its behavior is defined if the
implementation chooses not to reject it.
[...] one possible interpretation of the phrase "a restriction
... by which the exposition of language elements is to be
interpreted" could be that if the constraint is violated, there
is no meaningful interpretation. Or to put it another way,
that the semantic description applies only if all constraints
are satisfied.
I've searched for the word "constraint" in the C89 and C99
Rationale documents. They were not helpful.
I am admittedly trying to read into the standard what I think
it *should* say. A rule that constraint violations cause
undefined behavior would, if nothing else, make the standard a
bit simpler.
Tim Rentsch <tr.17687@z991.linuxsc.com> writes:
Vincent Lefevre <vincent-news@vinc17.net> writes:
In article <861r3pbbwh.fsf@linuxsc.com>,
Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:
Vincent Lefevre <vincent-news@vinc17.net> writes:
In article <86wnmoov7c.fsf@linuxsc.com>,
Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:
What occurs is defined behavior and (for implementations that do
not have the needed value for infinity) violates a constraint.
A diagnostic must be produced.
If this is defined behavior, where is the result of an overflow
defined by the standard? (I can see only 7.12.1p5, but this is
for math functions; here, this is a constant that overflows.)
I'm wondering if you have resolved your original uncertainty
about the behavior of INFINITY in an implementation that does
not support infinities?
I suspect that by saying "overflow", the standard actually meant that
the result is not in the range of representable values. This is the
only way the footnote "In this case, using INFINITY will violate the
constraint in 6.4.4 and thus require a diagnostic." can make sense
(the constraint in 6.4.4 is about the range, not overflow). But IMHO,
the violated constraint makes the behavior undefined and, in fact,
makes the program erroneous.
Suppose we have an implementation that does not support
infinities, a range of double and long double up to about ten to
the 99999, and ask it to translate the following .c file
double way_too_big = 1.e1000000;
This constant value violates the constraint in 6.4.4. Do you
think this .c file (and any program it is part of) has undefined
behavior? If so, do you think any constraint violation implies
undefined behavior, or just some of them?
(Jumping in, though the question was addressed to someone else.)
I think it's a tricky question. I think the language would be
cleaner if the standard explicitly stated that violating a
constraint always results in undefined behavior -- or if it
explicitly stated that it doesn't. (The former is my personal
preference.)
Clearly a compiler is allowed (but not required) to reject a program
that violates a constraint. If it does so, there is no behavior.
So the question is whether the behavior is undefined if the
implementation chooses not to reject it. (I personally don't see a
whole lot of value in defining the behavior of code that could have
been rejected outright.
I'm also not a big fan of the fact that
required diagnostics don't have to be fatal, but that's not likely
to change.)
[analysis of possible interpretations of the above code fragment]
James Kuyper <jameskuyper@alumni.caltech.edu> writes:
On 11/15/21 8:28 PM, Vincent Lefevre wrote:
In article <smuvs5$6qh$1@dont-email.me>,
James Kuyper <jameskuyper@alumni.caltech.edu> wrote:
[...]
"If a "shall" or "shall not" requirement that appears outside of
a constraint or runtime-constraint is violated, the behavior is
undefined. Undefined behavior is otherwise indicated in this
document by the words "undefined behavior" or by the omission of
any explicit definition of behavior. There is no difference in
emphasis among these three; they all describe "behavior that is
undefined"." (4p2).
Omission of any explicit definition of behavior.
The fact that a constraint is violated does not erase the
definition provided by 6.4.4.2p4, or render it any less applicable.
I suggest that that may be an open question.
... There is a constraint
(restriction) that is not satisfied.
Agreed.
... Thus the code becomes invalid and
nothing gets defined as a consequence.
This is, I presume, what makes you think that 6.4.4.2p4 is
effectively erased?
The standard says nothing to that effect. The only meaningful
thing it says is that a diagnostic is required (5.1.1.3p1). I do
not consider the standard's definition of "constraint" to be
meaningful: "restriction, either syntactic or semantic, by which
the exposition of language elements is to be interpreted" (3.8).
What that sentence means, if anything, is not at all clear, but one
thing is clear - it says nothing about what should happen if the
restriction is violated. 5.1.1.3p1 is the only clause that says
anything about that issue. Note: the requirement specified in
5.1.1.3p1 would also be erased, if a constraint violation is
considered to effectively erase unspecified parts of the rest of
the standard. Surely you don't claim that 3.8 specifies which
parts get erased?
The standard's definition of "constraint" is uncomfortably vague
-- but that doesn't mean I'm comfortable ignoring it.
Given the definition of a "constraint" as a "restriction, either
syntactic or semantic, by which the exposition of language
elements is to be interpreted", it seems to me to be at least
plausible that when a constraint is violated, the "exposition of
language elements" cannot be interpreted.
The implication would be that any program that violates a constraint
has undefined behavior (assuming it survives translation). And yes,
I'm proposing that violating any single constraint makes most of the
rest of the standard moot.
I'm not saying that this is the only way to interpret that wording.
It's vague enough to permit a number of reasonable readings. [...]
Keith Thompson <Keith.S.Thompson+u@gmail.com> writes:
[ is a constraint violation always undefined behavior? ]
[...] one possible interpretation of the phrase "a restriction
... by which the exposition of language elements is to be
interpreted" could be that if the constraint is violated, there
is no meaningful interpretation. Or to put it another way,
that the semantic description applies only if all constraints
are satisfied.
I've searched for the word "constraint" in the C89 and C99
Rationale documents. They were not helpful.
I am admittedly trying to read into the standard what I think
it *should* say. A rule that constraint violations cause
undefined behavior would, if nothing else, make the standard a
bit simpler.
Note that constraint violations are not undefined behavior in a
strict literal reading of the definition. Undefined behavior
means there are no restrictions as to what an implementation may
do, but constraint violations require the implementation to
issue at least one diagnostic, which is not the same as "no
restrictions".
On 1/3/22 3:03 PM, Tim Rentsch wrote:
Keith Thompson <Keith.S.Thompson+u@gmail.com> writes:
[ is a constraint violation always undefined behavior? ]
[...] one possible interpretation of the phrase "a restriction
... by which the exposition of language elements is to be
interpreted" could be that if the constraint is violated, there
is no meaningful interpretation. Or to put it another way,
that the semantic description applies only if all constraints
are satisfied.
I've searched for the word "constraint" in the C89 and C99
Rationale documents. They were not helpful.
I am admittedly trying to read into the standard what I think
it *should* say. A rule that constraint violations cause
undefined behavior would, if nothing else, make the standard a
bit simpler.
Note that constraint violations are not undefined behavior in a
strict literal reading of the definition. Undefined behavior
means there are no restrictions as to what an implementation may
do, but constraint violations require the implementation to
issue at least one diagnostic, which is not the same as "no
restrictions".
Although, after issuing that one diagnostic, if the implementation
continues and generates an output program, and that program is run,
then its behavior is explicitly defined to be undefined behavior.
Keith Thompson <Keith.S.T...@gmail.com> writes:...
James Kuyper <james...@alumni.caltech.edu> writes:
...The standard says nothing to that effect. The only meaningful
thing it says is that a diagnostic is required (5.1.1.3p1). I do
not consider the standard's definition of "constraint" to be
meaningful: "restriction, either syntactic or semantic, by which
the exposition of language elements is to be interpreted" (3.8).
...The standard's definition of "constraint" is uncomfortably vague
-- but that doesn't mean I'm comfortable ignoring it.
Given the definition of a "constraint" as a "restriction, either
syntactic or semantic, by which the exposition of language
elements is to be interpreted", it seems to me to be at least
plausible that when a constraint is violated, the "exposition of
language elements" cannot be interpreted.
I'm not saying that this is the only way to interpret that wording.
It's vague enough to permit a number of reasonable readings. [...]
So you're saying that the meaning of the definition of constraint
is to some extent subjective, i.e., reader dependent?
On Monday, January 3, 2022 at 3:56:22 PM UTC-5, Tim Rentsch wrote:
Keith Thompson <Keith.S.T...@gmail.com> writes:
James Kuyper <james...@alumni.caltech.edu> writes:
...
The standard says nothing to that effect. The only meaningful
thing it says is that a diagnostic is required (5.1.1.3p1). I do
not consider the standard's definition of "constraint" to be
meaningful: "restriction, either syntactic or semantic, by which
the exposition of language elements is to be interpreted" (3.8).
...
The standard's definition of "constraint" is uncomfortably vague
-- but that doesn't mean I'm comfortable ignoring it.
Given the definition of a "constraint" as a "restriction, either
syntactic or semantic, by which the exposition of language
elements is to be interpreted", it seems to me to be at least
plausible that when a constraint is violated, the "exposition of
language elements" cannot be interpreted.
...
I'm not saying that this is the only way to interpret that wording.
It's vague enough to permit a number of reasonable readings. [...]
So you're saying that the meaning of the definition of constraint
is to some extent subjective, i.e., reader dependent?
As I said above, it seems to me to be so poorly worded that it's not
clear to me that it has any meaning, much less one that is subjective.
Since other people disagree with me on that point, there would
appear to be some subjectivity at play in that judgment, but after
reading their arguments, I remain at a loss as to how they can
interpret that phrase as being meaningful.