(INTERNAL) Default floating point precision produced by IDL STRING function with FLOAT input
Anonymous
Internal Note:
For now, this tech tip should remain internal, since the external reference has not been verified as a source. It should nevertheless be good enough to send directly to a customer asking for technical support assistance.
Topic:
A customer was concerned that STRING was inappropriately truncating the precision of a single-precision (FLOAT) value from 7 significant digits to 6 significant digits.
The discussion section of this article is based on a 10/11/11 email exchange between myself (Jim) and Berangere, discussing the expected number of significant digits when the IDL STRING function converts a floating-point number to a string with no FORMAT keyword applied.
Discussion:
The default number of significant digits preserved when the IDL STRING() function converts a 32-bit float to a string is the same default used by the underlying C routine that performs the actual conversion (C's "%g" conversion, for comparison, defaults to 6 significant digits). For example:
IDL> test = [662528.00000000, 5054488.0000000]
IDL> HELP,STRING(test[1])
STRING = ' 5.05449e+006'
such that STRING converts the float value to exponential form with only 6 significant digits. Subsequently printing that converted string as a float with 7 or more significant digits simply prints the 6-significant-digit rounded value padded with zeros. For example:
IDL> print,STRING(test[1]),format='(g21.14)'
5054490.0000000
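By contrast, applying the same format directly to the original float (rather than to the converted string) shows that the single-precision value itself still carries the 7th digit (a sketch; column padding may differ):
IDL> print, test[1], format='(g21.14)'
5054488.0000000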
As you mentioned, one way to preserve the precision when going from float to string is to start with a double-precision value; another is to use an explicit FORMAT setting in the STRING function call to preserve the full number of significant digits, as shown below.
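For example, both workarounds might look like this (a sketch; the exact padding of the output, and the e+06 versus e+006 exponent style, can vary by IDL version and platform):
IDL> HELP, STRING(DOUBLE(test[1]))   ; workaround 1: convert to double first
STRING = '       5054488.0'
IDL> HELP, STRING(test[1], FORMAT='(f15.1)')   ; workaround 2: explicit FORMAT
STRING = '      5054488.0'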
As for why the IDL STRING function (or the underlying C routine) rounds a floating-point value to 6 significant digits by default: the decimal representation of an IEEE 32-bit floating-point value is accurate to only 6 or 7 significant digits (sometimes described as "approximately 7 significant digits").
If it might be helpful, here are excerpts from a page on the University of North Carolina Applied Mathematics department web site that discuss this concept:
http://www.amath.unc.edu/sysadmin/DOC4.0/common-tools/numerical_comp_guide/ncg_math.doc.html
"Ranges and Precisions in Decimal Representation
This section covers the notions of range and precision for a given storage format. It includes the ranges and precisions corresponding to the IEEE single and double formats, and to the implementations of IEEE double-extended format to SPARC, PowerPC, and Intel architectures. In explaining the notions of range and precision, reference is made to the IEEE single format.
The IEEE standard specifies that 32 bits be used to represent a floating point number in single format. Because there are only finitely many combinations of 32 zeroes and ones, only finitely many numbers can be represented by 32 bits.
"
"...The accuracy of representing y in IEEE single format is about 6 to 7 significant digits, or that y has about six significant digits if it is to be represented in IEEE single format.
"
Consequently, the default behavior of STRING() as applied to floating-point values is to round to the maximum number of significant digits whose accuracy can be guaranteed for all 32-bit float values. If the 7th digit is needed, the user must apply one of the techniques mentioned above to preserve it explicitly.