How precise are eigenvalues in Python? And can precision be improved?
Question:
When calculating eigenvalues and eigenvectors of a symmetric matrix, the matrix of eigenvectors times its own transpose should give the identity matrix (E @ E.T = I). However, this is rarely exact, as some (small) floating-point error always occurs.
So question 1: how precise are eigenvalues / eigenvectors calculated?
And question 2: is there any way to improve precision?
Answers:
- I assume you are using a standard library for this, such as:
https://numpy.org/doc/stable/reference/generated/numpy.linalg.eigh.html
or
https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.eig.html
which I would expect to employ high-speed floating-point operations under the hood.
So the precision of 64-bit IEEE doubles (about 16 significant decimal digits) is a bound.
In the least significant digits you may even find variation in results from run to run, due to the way FPUs keep higher-precision intermediate results in registers and how those are handled across context switches. A quick search returns a number of references on this, such as:
https://indico.cern.ch/event/166141/sessions/125686/attachments/201416/282784/Corden_FP_control.pdf
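To make that bound concrete, here is a small sketch (assuming a setup like yours: numpy.linalg.eigh on a random symmetric matrix) showing that the orthogonality residual of the eigenvector matrix sits near machine epsilon rather than at zero:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
A = (A + A.T) / 2                      # symmetrize, so eigenvectors are orthonormal
w, E = np.linalg.eigh(A)               # eigenvalues w, eigenvector matrix E

# Largest entry of |E @ E.T - I|: small but nonzero, on the order of 1e-15
residual = np.abs(E @ E.T - np.eye(50)).max()
print(np.finfo(np.float64).eps)        # machine epsilon for float64, ~2.22e-16
print(residual)
```

The residual is typically a small multiple of machine epsilon; that is about the best any double-precision eigensolver can do.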
- As for improving precision, your question is well discussed here, where a slower but higher-precision library is mentioned:
Higher precision eigenvalues with numpy
You might also consider using Mathematica.
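One such higher-precision option (an assumption on my part; the linked thread covers alternatives) is mpmath, whose eigsy routine handles symmetric matrices at arbitrary decimal precision, at a substantial speed cost. A minimal sketch:

```python
from mpmath import mp, mpf, matrix, eigsy

mp.dps = 50  # work with 50 decimal digits of precision

# A 4x4 Hilbert matrix: symmetric and notoriously ill-conditioned
n = 4
A = matrix([[mpf(1) / (i + j + 1) for j in range(n)] for i in range(n)])

E, Q = eigsy(A)  # eigenvalues E (column vector), orthogonal eigenvector matrix Q

# Orthogonality residual of Q @ Q.T against the identity, now at 50-digit precision
R = Q * Q.T
err = max(abs(R[i, j] - (1 if i == j else 0)) for i in range(n) for j in range(n))
print(err)
```

The residual ends up near 10^-49 instead of 10^-16, so if double precision is genuinely the bottleneck, this is one way out, just expect it to be orders of magnitude slower than LAPACK-backed numpy.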