Difference between the built-in pow() and math.pow() for floats, in Python?

Question:

Is there a difference between the results returned by Python’s built-in pow(x, y) (with no third argument) and the values returned by math.pow(), in the case of two float arguments?

I am asking this question because the documentation for math.pow() implies that pow(x, y) (i.e. x**y) is essentially the same as math.pow(x, y):

math.pow(x, y)

Return x raised to the power y. Exceptional cases
follow Annex ‘F’ of the C99 standard as far as possible. In
particular, pow(1.0, x) and pow(x, 0.0) always return 1.0, even when x
is a zero or a NaN. If both x and y are finite, x is negative, and y
is not an integer then pow(x, y) is undefined, and raises ValueError.

Changed in version 2.6: The outcome of 1**nan and nan**0 was undefined.

Note the last line: the documentation implies that the behavior of math.pow() is that of the exponentiation operator ** (and therefore of pow(x, y)). Is this officially guaranteed?
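(As a quick sanity check, one can at least spot-check the corner cases the documentation singles out in a shell; this is a sketch, not proof of a guarantee:)

```python
import math

nan = float("nan")

# Both functions agree on the documented corner cases:
print(math.pow(1.0, nan), 1.0 ** nan)  # 1.0 1.0
print(math.pow(nan, 0.0), nan ** 0.0)  # 1.0 1.0
```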

Background: My goal is to provide an implementation of both the built-in pow() and of math.pow() for numbers with uncertainty that behaves in the same way as with regular Python floats (same numerical results, same exceptions, same results for corner cases, etc.). I have already implemented something that works quite well, but there are some corner cases that need to be handled.

Asked By: Eric O Lebigot


Answers:

Quick Check

From the signatures, we can tell that they are different:

pow(x, y[, z])

math.pow(x, y)

Also, trying it in the shell will give you a quick idea:

>>> pow is math.pow
False

Testing the differences

Another way to understand the differences in behaviour between the two functions is to test for them:

import math
import traceback

inf = float("inf")
NaN = float("nan")

vals = [inf, NaN, 0.0, 1.0, 2.2, -1.0, -0.0, -2.2, -inf, 1, 0, 2]

tests = set()

for vala in vals:
  for valb in vals:
    tests.add((vala, valb))
    tests.add((valb, vala))

for a, b in tests:
  print("math.pow(%f,%f)" % (a, b))
  try:
    print("    %f" % math.pow(a, b))
  except Exception:
    traceback.print_exc()

  # __builtins__.pow is the built-in pow; this works when run as a script
  print("__builtins__.pow(%f,%f)" % (a, b))
  try:
    print("    %f" % __builtins__.pow(a, b))
  except Exception:
    traceback.print_exc()

We can then notice some subtle differences. For example:

math.pow(0.000000,-2.200000)
    ValueError: math domain error

__builtins__.pow(0.000000,-2.200000)
    ZeroDivisionError: 0.0 cannot be raised to a negative power

There are other differences, and the test list above is not complete (no large integers, no complex numbers, etc.), but it gives a pragmatic picture of how the two functions behave differently. I would also recommend extending the test above to check the type that each function returns. You could write something similar that generates a report of the differences between the two functions.
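For instance, a quick check of the return types (a sketch, not an exhaustive comparison) shows that the built-in pow() preserves integer types where it can, while math.pow() always returns a float:

```python
import math

# The built-in pow() keeps int results as int where possible...
print(type(pow(2, 3)))       # <class 'int'>
# ...while math.pow() always coerces its result to float.
print(type(math.pow(2, 3)))  # <class 'float'>
# With a negative exponent, the built-in switches to float as well.
print(type(pow(2, -1)))      # <class 'float'>
```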

math.pow()

math.pow() handles its arguments very differently from the builtin ** or pow(). This comes at the cost of flexibility. Having a look at the source, we can see that the arguments to math.pow() are cast directly to doubles:

static PyObject *
math_pow(PyObject *self, PyObject *args)
{
    PyObject *ox, *oy;
    double r, x, y;
    int odd_y;

    if (! PyArg_UnpackTuple(args, "pow", 2, 2, &ox, &oy))
        return NULL;
    x = PyFloat_AsDouble(ox);
    y = PyFloat_AsDouble(oy);
/*...*/

The checks are then carried out against the doubles for validity, and then the result is passed to the underlying C math library.
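One practical consequence of this cast to C doubles (a quick illustrative sketch): integers too large to fit in a double make math.pow() overflow, while the built-in pow() computes them exactly:

```python
import math

# Built-in pow() uses exact integer arithmetic; the last digit of
# 2**10000 is 6 (powers of 2 end in the cycle 2, 4, 8, 6).
print(pow(2, 10000) % 10)

try:
    math.pow(2, 10000)  # 2**10000 does not fit in a C double
except OverflowError as exc:
    print(exc)          # math range error
```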

builtin pow()

The built-in pow() (same as the ** operator), on the other hand, behaves very differently: it delegates to the object’s own implementation of the ** operator, which the end user can override by defining the __pow__(), __rpow__() or __ipow__() methods.

For the built-in types, it is instructive to compare the power operation as implemented for the different numeric types, for example float, int and complex.
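As one illustration of those per-type differences (a sketch under Python 3 semantics): a negative float raised to a non-integral power yields a complex result via the ** operator, whereas math.pow() refuses it outright:

```python
import math

# float.__pow__ promotes the result to complex in Python 3...
print((-1.0) ** 0.5)      # approximately 1j (plus a tiny real rounding term)

# ...while math.pow() follows C-style float semantics instead.
try:
    math.pow(-1.0, 0.5)
except ValueError as exc:
    print(exc)            # math domain error
```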

Overriding the default behaviour

Emulating numeric types is described here. Essentially, if you are creating a new type for numbers with uncertainty, you will have to provide the __pow__(), __rpow__() and possibly __ipow__() methods for your type. This will allow your numbers to be used with the ** operator:

import math

class Uncertain:
  def __init__(self, x, delta=0):
    self.delta = delta
    self.x = x
  def __pow__(self, other):
    # Promote plain numbers to Uncertain with zero uncertainty.
    if not isinstance(other, Uncertain):
      other = Uncertain(other)
    return Uncertain(
      self.x ** other.x,
      Uncertain._propagate_power(self, other)
    )
  @staticmethod
  def _propagate_power(A, B):
    # First-order error propagation for f = A**B:
    # df**2 = (B*A**(B-1)*dA)**2 + (A**B*ln(A)*dB)**2
    return math.sqrt(
      ((B.x * (A.x ** (B.x - 1))) ** 2) * A.delta * A.delta +
      (((A.x ** B.x) * math.log(A.x)) ** 2) * B.delta * B.delta
    )

In order to override math.pow() you will have to monkey-patch it to support your new type:

def new_pow(a,b):
    _a = Uncertain(a)
    _b = Uncertain(b)
    return _a ** _b

math.pow = new_pow

Note that for this to work you’ll have to adapt the Uncertain class to cope with an Uncertain instance as an input to __init__().

Answered By: brice

Python’s standard pow accepts a third argument: pow(x, y, z) computes (x ** y) % z via modular exponentiation, which is much faster than evaluating (x ** y) % z directly (of course, you’ll only notice that with large numbers).
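A quick sketch of that difference (the specific numbers here are only illustrative):

```python
# Three-argument pow() performs modular exponentiation without ever
# materialising the huge intermediate value 3**100000.
fast = pow(3, 100000, 19)
slow = (3 ** 100000) % 19   # builds a ~47,000-digit integer first
print(fast == slow)         # True
```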

Another big difference is how the two functions handle different input formats.

>>> pow(2, 1+0.5j)
(1.8810842093664877+0.679354250205337j)
>>> math.pow(2, 1+0.5j)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: can't convert complex to float

However, I have no idea why anyone would prefer math.pow over pow.

Answered By: Tom van der Woerdt

math.pow() implicitly converts its arguments to float:

>>> from decimal import Decimal
>>> from fractions import Fraction
>>> math.pow(Fraction(1, 3), 2)
0.1111111111111111
>>> math.pow(Decimal(10), -1)
0.1

but the built-in pow does not:

>>> pow(Fraction(1, 3), 2)
Fraction(1, 9)
>>> pow(Decimal(10), -1)
Decimal('0.1')

My goal is to provide an implementation of both the built-in pow() and of math.pow() for numbers with uncertainty

You can overload pow and ** by defining __pow__ and __rpow__ methods for your class.

However, you can’t overload math.pow (without hacks like math.pow = pow). You can make a class usable with math.pow by defining a __float__ conversion, but then you’ll lose the uncertainty attached to your numbers.
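A minimal sketch of that trade-off (the Uncertain class below is hypothetical):

```python
import math

class Uncertain:
    def __init__(self, x, delta=0.0):
        self.x = x
        self.delta = delta
    def __float__(self):
        # math.pow() will call this -- the uncertainty is silently dropped.
        return float(self.x)

u = Uncertain(2.0, 0.1)
print(math.pow(u, 3))  # 8.0 -- a plain float, the delta is gone
```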

Answered By: dan04

Just adding a %timeit comparison (with random and math imported):

In [1]: def pair_generator(): 
    ...:     yield (random.random()*10, random.random()*10) 
    ...:   

In [2]: %timeit [a**b for a, b in pair_generator()]                                                                    
538 ns ± 1.94 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)

In [3]: %timeit [math.pow(a, b) for a, b in pair_generator()]                                                          
632 ns ± 2.77 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
Answered By: zhukovgreen