Fastest way to compute e^x?

Question:

What is the fastest way to compute e^x, given that x can be a floating-point value?

Right now I have used Python's math library to compute this. Below is the complete code, where result = -0.490631 + 0.774275 * math.exp(0.474907 * sum) is the main logic; the rest is the file-handling code that the problem demands.

import math
import sys

def sum_digits(n):
    # Add up the decimal digits of n.
    r = 0
    while n:
        r, n = r + n % 10, n // 10
    return r

def _print(string):
    fo = open("output.txt", "w")
    fo.write(string)
    fo.close()

try:
    f = open('input.txt')
except IOError:
    _print("error")
    sys.exit()
data = f.read()
num = data.split('\n', 1)[0]
try:
    val = int(num)
except ValueError:
    _print("error")
    sys.exit()

sum = sum_digits(val)
f.close()

if sum == 2:
    _print("1")
else:
    result = -0.490631 + 0.774275 * math.exp(0.474907 * sum)
    _print(str(math.ceil(result)))

The right-hand side of result is the equation of a curve (which is the solution to a programming problem) that I derived in Wolfram Mathematica using my own data set.

But this doesn't seem to meet the passing criteria of the assessment!

I have also tried the Newton-Raphson approach, but convergence for larger x is a problem, and on top of that, computing the natural log ln(x) is a challenge in itself!

I don't have any language constraint, so any solution is acceptable. Also, if Python's math library is the fastest, as some of the comments say, can anyone give some insight into the time complexity and execution time of this program, in short, its efficiency?

Asked By: CMouse


Answers:

I don’t know if the exponential curve math is accurate in this code, but it certainly isn’t the slow point.

First, you read the input data with a single read call. The data does have to be read, but that loads the entire file at once. Since the next step takes only the first line, it would seem more appropriate to use readline. The split itself is at least O(n) in the file size, and that size may include data you then ignore, since you only process one line.
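For example, something along these lines reads just the first line (a minimal sketch, assuming the same input.txt as in the question):

with open('input.txt') as f:
    # readline stops at the first newline instead of loading the whole file
    first_line = f.readline()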

Second, you convert that line into an int. This probably goes through Python's long-integer support, and the operation could be O(n) or even O(n^2). A single-pass algorithm would multiply the accumulated number by 10 for each digit, allocating one or two new (longer) longs each time.
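Roughly, that single-pass conversion looks like this (an illustrative sketch only, with no sign or whitespace handling; parse_decimal is a made-up name, not how int() is literally implemented):

def parse_decimal(s):
    # Each step multiplies an ever-growing big integer by 10,
    # which is why the total cost grows faster than linearly.
    value = 0
    for c in s:
        value = value * 10 + (ord(c) - ord('0'))
    return value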

Third, sum_digits breaks that long int back down into digits. It does so using division, which is expensive, and it uses two separate operations rather than divmod. That's O(n^2), because each division has to process every higher digit for each digit extracted. And it's only needed because of the conversion you just did.
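A divmod-based variant would at least halve the division work (still quadratic on a huge integer, though):

def sum_digits(n):
    # One divmod call per digit instead of a separate % and //.
    r = 0
    while n:
        n, d = divmod(n, 10)
        r += d
    return r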

Summing the digits found in a string is more easily done with something like sum(int(c) for c in l if c.isdigit()), where l is the input line. It's not particularly fast, since there's quite a bit of overhead in the per-character conversions and the running sum can itself grow large, but it makes a single pass with a fairly tight loop; it's somewhere between O(n) and O(n log n), depending on the length of the data, because the sum itself might grow large.
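Put together with readline, the whole preprocessing step could shrink to something like this (a sketch, again assuming the number sits on the first line of input.txt):

with open('input.txt') as f:
    line = f.readline()
# Sum the digits directly from the text; no big-int conversion needed.
digit_sum = sum(int(c) for c in line if c.isdigit())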

As for the unknown exponential curve, the existence of an exception for one low value is concerning. There's likely some other option that's both faster and more accurate if the answer is an integer anyway.

Lastly, you have at least four distinct output formats: error, 1, 3.0, 3e+20. Do you know which of these is expected? Perhaps you should be using formatted output rather than str to convert your numbers.
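For instance, if a plain integer is what's expected, formatted output pins that down (a sketch only; format_result is a hypothetical helper wrapping the curve from the question):

import math

def format_result(digit_sum):
    result = -0.490631 + 0.774275 * math.exp(0.474907 * digit_sum)
    # Always produce a plain decimal integer, never "3.0" or "3e+20".
    return "{:d}".format(int(math.ceil(result)))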

One extra note: if the data is really large, processing it in chunks will definitely speed things up (instead of running out of memory, needing to swap, etc.). Since all you need is a digit sum, the memory requirement can be reduced from O(n) for the whole file to O(log n) for the running total.
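A chunked version of the digit sum could look like this (just a sketch; the chunk size is arbitrary, and it again assumes the number is on the first line of input.txt):

digit_sum = 0
with open('input.txt') as f:
    while True:
        chunk = f.read(65536)                  # read a bounded amount at a time
        if not chunk:
            break
        first_part, newline, _ = chunk.partition('\n')
        digit_sum += sum(int(c) for c in first_part if c.isdigit())
        if newline:                            # the first line has ended
            break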

Answered By: Yann Vernier