timeit is useful for this and is included in the standard Python distribution.
```python
import timeit
timeit.Timer('for i in range(10): oct(i)').timeit()
```
For small snippets you can use the timeit module.

From the Python documentation:
```python
def test():
    """Stupid test function"""
    L = []
    for i in range(100):
        L.append(i)

if __name__ == '__main__':
    from timeit import Timer
    t = Timer("test()", "from __main__ import test")
    print(t.timeit())
```
Less accurate, but still valid: you can use the time module like this:
```python
from time import time

t0 = time()
call_my_function_vers_1()
t1 = time()
call_my_function_vers_2()
t2 = time()
print('function vers1 takes %f' % (t1 - t0))
print('function vers2 takes %f' % (t2 - t1))
```
The programming language doesn’t matter; measuring the runtime complexity of an algorithm works the same way regardless of the language. Analysis of Algorithms by Stanford on Google Code University is a very good resource for teaching yourself how to analyze the runtime complexity of algorithms and code.
Using a decorator for measuring execution time for functions can be handy. There is an example at http://www.zopyx.com/blog/a-python-decorator-for-measuring-the-execution-time-of-methods.
Below I’ve shamelessly pasted the code from the site mentioned above so that the example exists at SO in case the site is wiped off the net.
```python
import time

def timeit(method):
    def timed(*args, **kw):
        ts = time.time()
        result = method(*args, **kw)
        te = time.time()
        print('%r (%r, %r) %2.2f sec' % (method.__name__, args, kw, te - ts))
        return result
    return timed

class Foo(object):
    @timeit
    def foo(self, a=2, b=3):
        time.sleep(0.2)

@timeit
def f1():
    time.sleep(1)
    print('f1')

@timeit
def f2(a):
    time.sleep(2)
    print('f2', a)

@timeit
def f3(a, *args, **kw):
    time.sleep(0.3)
    print('f3', args, kw)

f1()
f2(42)
f3(42, 43, foo=2)
Foo().foo()
```
I am not 100% sure what is meant by “running times of my algorithms written in python”, so I thought I might try to offer a broader look at some of the potential answers.
Algorithms don’t have running times; implementations can be timed, but an algorithm is an abstract approach to doing something. The most common and often the most valuable part of optimizing a program is analyzing the algorithm, usually using asymptotic analysis and computing the big O complexity in time, space, disk use and so forth.
A computer cannot really do this step for you; it requires doing the math to figure out how something scales. Optimizing this side of things is the main component of achieving scalable performance.
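As a concrete illustration (my own example, not from the answer above): membership testing is O(n) on a list but O(1) on average for a set, and no amount of micro-tuning the list version will close that asymptotic gap.

```python
import timeit

# Membership testing is O(n) on a list but O(1) on average for a set.
setup = "data_list = list(range(10000)); data_set = set(data_list)"
t_list = timeit.timeit("9999 in data_list", setup=setup, number=1000)
t_set = timeit.timeit("9999 in data_set", setup=setup, number=1000)
print('list: %f  set: %f' % (t_list, t_set))
```

On any realistic run the set lookup wins by orders of magnitude, and the gap widens as the data grows.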
You can time your specific implementation. The nicest way to do this in Python is to use timeit. The most convenient way to use it is to put the code you want to time in a function in a module, then invoke it from the command line with
```
python -m timeit ....
```
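For example, with a self-contained statement (with your own module you would add a setup option such as `-s "from mymodule import test"`, where `mymodule` is a placeholder name):

```shell
python -m timeit "sum(range(100))"
```

This runs the statement in a loop, picks a sensible number of repetitions automatically, and reports the best time per loop.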
timeit is good for comparing multiple snippets when doing micro-optimization, but it often isn't the right tool for comparing two different algorithms. What you usually want there is asymptotic analysis, though it's possible you need more complicated types of analysis.
You have to know what to time. Most snippets aren't worth improving. You need to make changes where they actually count, especially when you're doing micro-optimization rather than improving the asymptotic complexity of your algorithm.
If you quadruple the speed of a function in which your code spends 1% of the time, that’s not a real speedup. If you make a 20% speed increase on a function in which your program spends 50% of the time, you have a real gain.
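This is just Amdahl's law; a quick sketch of the arithmetic behind those two numbers:

```python
# Amdahl's law: overall speedup when a fraction p of the runtime
# is accelerated by a factor s.
def overall_speedup(p, s):
    return 1 / ((1 - p) + p / s)

print(overall_speedup(0.01, 4.0))  # quadruple a 1% hotspot: ~1.008x overall
print(overall_speedup(0.50, 1.2))  # 20% faster on half the runtime: ~1.09x overall
```

A 300% improvement to a 1% hotspot buys you well under 1% overall, while a modest 20% gain on a function eating half your runtime yields roughly a 9% overall speedup.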
To determine the time spent by a real Python program, use the stdlib profiling utilities. This will tell you where in an example program your code is spending its time.