Python vs C++ Precision
Question:
I am trying to reproduce a C++ high-precision calculation entirely in Python, but I get a slight difference and I do not understand why.
Python:
from decimal import *
getcontext().prec = 18
r = 0 + (((Decimal(0.95)-Decimal(1.0))**2)+(Decimal(0.00403)-Decimal(0.00063))**2).sqrt()
# r = Decimal('0.0501154666744709107')
C++:
#include <iostream>
#include <cmath>

int main()
{
    double zx2 = 0.95;
    double zx1 = 1.0;
    double zy2 = 0.00403;
    double zy1 = 0.00063;
    double r;
    r = 0.0 + sqrt((zx2 - zx1) * (zx2 - zx1) + (zy2 - zy1) * (zy2 - zy1));
    std::cout << "r = " << r << " ****";
    return 0;
}
// r = 0.050115466674470907 ****
There is this 1 showing up near the end in Python but not in C++. Why? Changing the precision in Python does not change anything (I already tried), because the 1 appears before the rounding.
Python: 0.0501154666744709107
C++ : 0.050115466674470907
Edit:
I thought that Decimal would convert anything passed to it into a string in order to "recut" it, but the comment of juanpa.arrivillaga made me doubt that, and after checking the source code, it is not the case! So I changed the code to pass strings instead. Now the Python result matches the WolframAlpha result shared by Random Davis: link.
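To illustrate the point from the edit, here is a minimal sketch of the difference between constructing a Decimal from a float and from a string (the exact digits printed for the float case depend on the binary representation of 0.95):

```python
from decimal import Decimal

# Decimal(float) captures the exact binary value of the double,
# which is close to but not exactly 0.95.
print(Decimal(0.95))

# Decimal(str) captures the literal decimal digits.
print(Decimal('0.95'))  # 0.95
```

This is why passing string literals reproduces the exact decimal inputs, while passing floats bakes the binary rounding error into the calculation.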
Answers:
The origin of the discrepancy is that Python's Decimal follows the more modern IBM General Decimal Arithmetic Specification.
C++, however, also offers 80-bit "extended precision" through the long double format (on most x86 toolchains).
For reference, a standard IEEE-754 floating-point double contains 53 bits of precision.
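The 53-bit figure can be checked from Python itself, since a Python float is an IEEE-754 double on all common platforms:

```python
import sys

# mantissa width of the platform's double; 53 bits on IEEE-754 systems
print(sys.float_info.mant_dig)  # 53
```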
Below is the C++ example from the question, refactored to use long double:
#include <iostream>
#include <cmath>
#include <iomanip>

int main()
{
    long double zx2 = 0.95;
    long double zx1 = 1.0;
    long double zy2 = 0.00403;
    long double zy1 = 0.00063;
    long double r;
    r = 0.0 + std::sqrt((zx2 - zx1) * (zx2 - zx1) + (zy2 - zy1) * (zy2 - zy1));
    std::cout << std::setprecision(25) << "r = " << r << " ****"; // 25 significant digits
    // prints "r = 0.05011546667447091067728042 ****"
    return 0;
}