F# runs my algorithm slower than Python

Question:

Years ago, I solved a problem via dynamic programming:

https://www.thanassis.space/fillupDVD.html

The solution was coded in Python.

As part of expanding my horizons, I recently started learning OCaml/F#. What better way to test the waters than by doing a direct port of the imperative code I wrote in Python to F#, and then moving in steps from there towards a functional programming solution.

The results of this first, direct port… are disconcerting:

Under Python:

  bash$ time python fitToSize.py
  ....
  real    0m1.482s
  user    0m1.413s
  sys     0m0.067s

Under FSharp:

  bash$ time mono ./fitToSize.exe
  ....
  real    0m2.235s
  user    0m2.427s
  sys     0m0.063s

(in case you noticed the “mono” above: I tested under Windows as well, with Visual Studio – same speed).

I am… puzzled, to say the least. Python runs code faster than F#? A compiled binary, using the .NET runtime, runs SLOWER than Python's interpreted code?!?!

I know about the startup costs of VMs (mono in this case) and about JIT warm-up, but still… I expected a speedup, not a slowdown!

Have I done something wrong, perhaps?

I have uploaded the code here:

https://www.thanassis.space/fsharp.slower.than.python.tar.gz

Note that the F# code is more or less a direct, line-by-line translation of the Python code.

P.S. There are of course other gains, e.g. the static type safety offered by F# – but if the resulting speed of an imperative algorithm is worse under F# … I am disappointed, to say the least.

EDIT: Direct access, as requested in the comments:

the Python code: https://gist.github.com/950697

the FSharp code: https://gist.github.com/950699

Asked By: ttsiodras


Answers:

Edit: I was wrong; it's not a question of value type vs. reference type. The performance problem was related to the hash function, as explained in other comments. I keep my answer here because there's an interesting discussion. My code partially fixed the performance issue, but it is not the clean, recommended solution.

On my computer, I made your sample run twice as fast by replacing the tuple with a struct. This means the equivalent F# code should run faster than your Python code. I don't agree with the comments saying that .NET hashtables are slow; I believe there's no significant difference from Python's or other languages' implementations. Also, I don't agree with the "you can't translate code 1-to-1 and expect it to be faster" argument: F# code will generally be faster than Python for most tasks (static typing is very helpful to the compiler). In your sample, most of the time is spent doing hashtable lookups, so it's fair to expect that both languages should be almost equally fast.

I think the performance issue is related to garbage collection (but I haven't checked with a profiler). The reason why using tuples can be slower here than structures has been discussed in an SO question (Why is the new Tuple type in .NET 4.0 a reference type (class) and not a value type (struct)?) and an MSDN page (Building tuples):

If they are reference types, this means there can be lots of garbage generated if you are changing elements in a tuple in a tight loop. […] F# tuples were reference types, but there was a feeling from the team that they could realize a performance improvement if two, and perhaps three, element tuples were value types instead. Some teams that had created internal tuples had used value instead of reference types, because their scenarios were very sensitive to creating lots of managed objects.

Of course, as Jon said in another comment, the obvious optimization in your example is to replace the hashtables with arrays. Arrays are obviously much faster (integer index, no hashing, no collision handling, no reallocation, more compact), but this is very specific to your problem, and it doesn't explain the performance difference with Python (as far as I know, the Python code is using hashtables, not arrays).

To reproduce my 50% speedup, here is the full code: http://pastebin.com/nbYrEi5d

In short, I replaced the tuple with this record type (note that an F# record is still a reference type; as the edit above explains, the win comes from its compiler-generated structural equality and hashing rather than from value-type semantics):

type Tup = {x: int; y: int}

Also, it seems like a detail, but you should move the List.mapi (fun i x -> (i, x)) fileSizes out of the enclosing loop. I believe Python's enumerate does not actually allocate a list (so it's fair to allocate the list only once in F#, or to use the Seq module, or to use a mutable counter).
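For illustration, a minimal sketch of that hoisting (the data and the loop body are hypothetical stand-ins for the original program's):

  // Hypothetical data standing in for the original fileSizes list.
  let fileSizes = [ 700; 340; 120; 900 ]

  // Build the indexed list once, outside the loop...
  let indexed = List.mapi (fun i x -> (i, x)) fileSizes

  for capacity in 1 .. 4700 do
      // ...instead of calling List.mapi here on every iteration,
      // which would allocate a fresh intermediate list each time.
      for (i, size) in indexed do
          ignore (i, size)   // stand-in for the real loop body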

Answered By: Laurent

Dr Jon Harrop, whom I contacted over e-mail, explained what is going on:

The problem is simply that the program has been optimized for Python. This is common when the programmer is more familiar with one language than the other, of course. You just have to learn a different set of rules that dictate how F# programs should be optimized…

Several things jumped out at me, such as the use of a "for i in 1..n do" loop rather than a "for i=1 to n do" loop (which is faster in general, but not significant here), repeatedly doing List.mapi on a list to mimic an array index (which allocates intermediate lists unnecessarily), and your use of the F# TryGetValue for Dictionary, which allocates unnecessarily (the .NET TryGetValue that accepts a ref is faster in general, but not by much here)…

… but the real killer problem turned out to be your use of a hash table to implement a dense 2D matrix. Using a hash table is ideal in Python because its hash table implementation has been extremely well optimized (as evidenced by the fact that your Python code is running as fast as F# compiled to native code!) but arrays are a much better way to represent dense matrices, particularly when you want a default value of zero.
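To make Jon's TryGetValue point concrete, here is a minimal sketch (my own illustration, with a hypothetical dictionary) contrasting the two call styles:

  open System.Collections.Generic

  let d = Dictionary<int, int>()
  d.[42] <- 1

  // F#-style call: convenient, but allocates a result tuple per lookup.
  let found, value = d.TryGetValue 42

  // .NET-style call with a byref out-parameter: no per-lookup allocation.
  let mutable v = 0
  let found2 = d.TryGetValue(42, &v)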

The funny part is that when I first coded this algorithm, I DID use a table — I changed the implementation to a dictionary for reasons of clarity (avoiding the array boundary checks made the code simpler – and much easier to reason about).

Jon transformed my code (back :-)) into its array version, and it runs 100 times faster.
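In essence, the change amounts to something like the following sketch (my own illustration; the dimensions are hypothetical stand-ins, not the original program's):

  // Dictionary version: tuple keys, hashing on every access.
  let table = System.Collections.Generic.Dictionary<int * int, int>()
  table.[(3, 5)] <- 42

  // Array version: a dense 2D matrix, zero-initialized by default,
  // indexed directly -- no hashing, no collision handling.
  let maxSize, n = 4700, 100
  let matrix = Array2D.zeroCreate<int> (maxSize + 1) (n + 1)
  matrix.[3, 5] <- 42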

Moral of the story:

  • F# Dictionary needs work… when using tuples as keys, compiled F# is slower than interpreted Python’s hash tables!
  • Obvious, but no harm in repeating: Cleaner code sometimes means… much slower code.

Thank you, Jon — much appreciated.

EDIT: the fact that replacing Dictionary with Array makes F# finally run at the speeds a compiled language is expected to run at doesn't negate the need for a fix in Dictionary's speed (I hope the F# people at MS are reading this). Other algorithms depend on dictionaries/hashes and can't easily be switched to arrays; making programs suffer "interpreter speeds" whenever one uses a Dictionary is, arguably, a bug. If, as some have said in the comments, the problem is not with F# but with the .NET Dictionary, then I'd argue that this… is a bug in .NET!

EDIT2: The clearest solution, that doesn’t require the algorithm to switch to arrays (some algorithms simply won’t be amenable to that) is to change this:

let optimalResults = new Dictionary<_,_>()

into this:

let optimalResults = new Dictionary<_,_>(HashIdentity.Structural)

This change makes the F# code run 2.7x faster, thus finally beating Python (now 1.6x faster than it). The weird thing is that tuples use structural comparison by default, so in principle the comparisons done by the Dictionary on the keys are the same (with or without Structural). Dr Harrop theorizes that the speed difference may be attributed to virtual dispatch: "AFAIK, .NET does little to optimize virtual dispatch away and the cost of virtual dispatch is extremely high on modern hardware because it is a "computed goto" that jumps the program counter to an unpredictable location and, consequently, undermines branch prediction logic and will almost certainly cause the entire CPU pipeline to be flushed and reloaded".
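For anyone who wants to reproduce the effect in isolation, here is a minimal benchmark sketch (the loop counts are arbitrary and not from the original program):

  open System.Collections.Generic
  open System.Diagnostics

  let time label (d: Dictionary<int * int, int>) =
      let sw = Stopwatch.StartNew()
      for i in 0 .. 999 do
          for j in 0 .. 999 do
              d.[(i, j)] <- i + j            // insert with a tuple key
              ignore (d.ContainsKey (i, j))  // and look it up again
      printfn "%s: %d ms" label sw.ElapsedMilliseconds

  time "default comparer       " (Dictionary<_,_>())
  time "HashIdentity.Structural" (Dictionary<_,_>(HashIdentity.Structural))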

In plain words, and as suggested by Don Syme (see the bottom three answers): "be explicit about the use of structural hashing when using reference-typed keys in conjunction with the .NET collections". (Dr. Harrop, in the comments below, also says that we should always use structural comparisons when using .NET collections.)

Dear F# team in MS, if there is a way to automatically fix this, please do.

Answered By: ttsiodras

As Jon Harrop has pointed out, simply constructing the dictionaries using Dictionary(HashIdentity.Structural) gives a major performance improvement (a factor of 3 on my computer). This is almost certainly the minimally invasive change you need to make to get better performance than Python, and keeps your code idiomatic (as opposed to replacing tuples with structs, etc.) and parallel to the Python implementation.

Answered By: kvb

Hmm… if the hashtable is the major bottleneck, then it is probably the hash function itself. I haven't looked at the specific hash function, but one of the most common hash functions is

((a * x + b) % p) % q

The modulus operation % is painfully slow; if p and q are of the form 2^k - 1, we can do the modulus with an AND, an add, and a shift operation, as sketched below.
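A minimal sketch (my own illustration, not from the original answer) of reducing x modulo a Mersenne number m = 2^k - 1 without using %:

  // Reduce x modulo m = 2^k - 1 using only shift, AND and add.
  // Works because 2^k ≡ 1 (mod m), so the high bits of x can be
  // folded onto the low bits without changing the value mod m.
  let modMersenne (k: int) (x: uint64) : uint64 =
      let m = (1UL <<< k) - 1UL
      let mutable r = x
      while r > m do
          r <- (r &&& m) + (r >>> k)
      if r = m then 0UL else r

  // Example: m = 2^3 - 1 = 7; modMersenne 3 16UL = 2UL, same as 16 % 7.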

Dietzfelbinger's universal hash function h_a : [2^w] -> [2^l] is

  h_a(x) = floor(((a * x) mod 2^w) / 2^(w-l))

where a is a random odd w-bit seed.

It can be computed as (a * x) >> (w - l), since the multiplication of w-bit words already wraps modulo 2^w; this is orders of magnitude faster than the first hash function. I once had to implement a hash table with linked lists for collision handling. It took 10 minutes to implement and test, and we tested it with both functions and analysed the difference in speed. As I remember, the second hash function gave around a 4-10x speed gain, depending on the size of the table.

But the thing to learn here is: if your program's bottleneck is hashtable lookups, the hash function has to be fast too.
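An F# sketch of that multiply-shift hash (my own illustration; w = 64 here, and the seed constant is an arbitrary odd value):

  // Dietzfelbinger multiply-shift: hash a 64-bit x down to l bits.
  // 'a' must be a random odd 64-bit seed; the multiplication wraps
  // modulo 2^64 for free, so no explicit modulus is needed.
  let inline multiplyShift (a: uint64) (l: int) (x: uint64) : uint64 =
      (a * x) >>> (64 - l)

  // Example: hash into a table of 2^10 = 1024 buckets.
  let bucket = multiplyShift 0x9E3779B97F4A7C15UL 10 42UL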

Answered By: kam