Analysis of precision loss in Java's floating-point types

Posted Jun 15, 2020 · 3 min read

Java floating-point values can lose precision during arithmetic. In scenarios with strict correctness requirements, such as financial transactions, BigDecimal is generally used to avoid this loss. Recently a colleague still ran into a loss of accuracy while using BigDecimal, so here is a brief write-up.

Test case

The code is as follows:

@Test
public void fd() {
    double abc = 0.56D;
    System.out.println("abc:" + abc);
    System.out.println("new BigDecimal(abc):" + new BigDecimal(abc));
    System.out.println("BigDecimal.valueOf(abc):" + BigDecimal.valueOf(abc));
}

Output

abc:0.56
new BigDecimal(abc):0.560000000000000053290705182007513940334320068359375
BigDecimal.valueOf(abc):0.56

As the output shows, converting the double with the BigDecimal constructor still exposes the precision loss, while the valueOf method does not.
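The comparison can be reproduced in a small standalone sketch (the class name `BigDecimalDemo` is my own; the string constructor is included as a third option the article touches on later):

```java
import java.math.BigDecimal;

public class BigDecimalDemo {
    public static void main(String[] args) {
        double abc = 0.56D;
        // The double constructor captures the exact binary value of abc.
        BigDecimal fromDouble = new BigDecimal(abc);
        // valueOf goes through Double.toString, keeping the short decimal form.
        BigDecimal fromValueOf = BigDecimal.valueOf(abc);
        // A string literal avoids the binary double representation entirely.
        BigDecimal fromString = new BigDecimal("0.56");

        System.out.println(fromDouble);  // 0.560000000000000053290705182007513940334320068359375
        System.out.println(fromValueOf); // 0.56
        System.out.println(fromString);  // 0.56
        System.out.println(fromValueOf.equals(fromString)); // true
    }
}
```

Note that `equals` compares both the unscaled value and the scale, so `fromValueOf` and `fromString` are equal while `fromDouble` is not.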

Into the source code

The core of the BigDecimal double constructor (reached via BigDecimal(double val)) is as follows:

public BigDecimal(double val, MathContext mc) {
    // ...
    long valBits = Double.doubleToLongBits(val);
    int sign = ((valBits >> 63) == 0 ? 1 : -1);
    int exponent = (int) ((valBits >> 52) & 0x7ffL);
    long significand = (exponent == 0
                        ? (valBits & ((1L << 52) - 1)) << 1
                        : (valBits & ((1L << 52) - 1)) | (1L << 52));
    exponent -= 1075;
    // ...
}

The key point: Double.doubleToLongBits returns the bit layout of the value according to the IEEE 754 double-precision format, i.e. the exact binary representation of the specified floating-point value.
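The decomposition the constructor performs can be sketched on its own (class name `DoubleBits` is my own). After the `exponent -= 1075` step, the double's exact value is `sign * significand * 2^exponent`, which we can verify by reconstructing it with Math.scalb:

```java
public class DoubleBits {
    public static void main(String[] args) {
        long valBits = Double.doubleToLongBits(0.56D);
        // Bit 63 is the sign, bits 62-52 the biased exponent,
        // bits 51-0 the fraction (with an implicit leading 1 for normal values).
        int sign = ((valBits >> 63) == 0 ? 1 : -1);
        int exponent = (int) ((valBits >> 52) & 0x7ffL);
        long significand = (exponent == 0
                            ? (valBits & ((1L << 52) - 1)) << 1
                            : (valBits & ((1L << 52) - 1)) | (1L << 52));
        exponent -= 1075;

        System.out.println("sign=" + sign
                + " significand=" + significand
                + " exponent=" + exponent);
        // Reconstructing sign * significand * 2^exponent gives back 0.56 exactly.
        System.out.println(sign * Math.scalb((double) significand, exponent)); // 0.56
    }
}
```

This is exactly the integer the constructor hands to BigInteger: the full 53-bit significand, whose decimal expansion is the long 0.56000...9375 value seen above.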

The core of BigDecimal.valueOf:

public static BigDecimal valueOf(double val) {
    return new BigDecimal(Double.toString(val));
}

public BigDecimal(char[] in, int offset, int len, MathContext mc) {
    // ...
}

As you can see, valueOf first converts the double to a String and then calls the string-based constructor.
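The reason this path avoids the long expansion is that Double.toString does not print the exact binary value; it produces the shortest decimal string that uniquely round-trips back to the same double. A quick sketch:

```java
public class DoubleToStringDemo {
    public static void main(String[] args) {
        // Shortest decimal string that uniquely identifies the double 0.56,
        // not its exact binary expansion.
        System.out.println(Double.toString(0.56D));       // 0.56
        // When the nearest double genuinely differs from the intended decimal,
        // the extra digits do show up.
        System.out.println(Double.toString(0.1D + 0.2D)); // 0.30000000000000004
    }
}
```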

So why does going through Double.doubleToLongBits lose precision while the string constructor does not? The main reason is that BigDecimal represents a decimal number as an unscaled integer (a BigInteger) plus a decimal-point position (the scale), rather than as a binary fraction; for example, 101.001 is stored as the unscaled value 101001 with scale 3 (i.e. 101001 × 10⁻³). Operations are then split into two parts, the BigInteger arithmetic and the update of the scale, which we will not expand on here.
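The unscaled-value-plus-scale representation can be inspected directly via the public accessors (class name `ScaleDemo` is my own):

```java
import java.math.BigDecimal;

public class ScaleDemo {
    public static void main(String[] args) {
        BigDecimal d = new BigDecimal("0.56");
        // 0.56 is stored as unscaled value 56 with scale 2, i.e. 56 * 10^-2.
        System.out.println(d.unscaledValue()); // 56
        System.out.println(d.scale());         // 2
    }
}
```

Because the scale is a power of ten, any finite decimal string maps onto this representation exactly, with no binary conversion involved.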

Principle Analysis

Why does Double.doubleToLongBits reflect a loss of precision? The main reason is that many decimal fractions cannot be expressed exactly in binary, just as 1/3 cannot be expressed exactly as a finite decimal.

The algorithm for converting a decimal fraction to binary is to repeatedly multiply by 2 and take the integer part until no fractional part remains. For example, expressing 0.8 in binary:

0.8 * 2 = 1.6, take integer part 1

0.6 * 2 = 1.2, take integer part 1

0.2 * 2 = 0.4, take integer part 0

0.4 * 2 = 0.8, take integer part 0

The calculation has entered a cycle, so the process never terminates; this is why some floating-point values cannot be converted to binary exactly.
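The steps above can be automated with a short sketch (class name `BinaryExpansion` is my own). Using BigDecimal keeps the intermediate arithmetic exact, so the repeating cycle 1100 is clearly visible:

```java
import java.math.BigDecimal;

public class BinaryExpansion {
    public static void main(String[] args) {
        // Multiply the fractional part by 2 repeatedly; each integer part
        // is the next binary digit of 0.8.
        BigDecimal frac = new BigDecimal("0.8");
        BigDecimal two = BigDecimal.valueOf(2);
        StringBuilder bits = new StringBuilder("0.");
        for (int i = 0; i < 12; i++) {
            frac = frac.multiply(two);
            int bit = frac.intValue();                    // integer part: 0 or 1
            bits.append(bit);
            frac = frac.subtract(BigDecimal.valueOf(bit)); // keep the fraction
        }
        System.out.println(bits); // 0.110011001100
    }
}
```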

Conclusion

When converting a floating-point value to BigDecimal, prefer the valueOf method over the double constructor.

This article was created by QITU Lao Nong and is licensed under the CC BY 4.0 CN agreement. It may be reproduced and quoted freely, provided the author is credited and the source of the article is indicated. If reprinting to a WeChat public account, please add the author's public account QR code at the end of the article.