I was doing a little tidy-up of some of the books under my desk and started to flick through Foundations of F# by Robert Pickering. It's a great book. I spotted the Fibonacci sequence and it got me thinking: how could I waste some of my afternoon?
I did the algorithm in F# and then began to wonder whether something in C# would be faster. That got me thinking about how to accurately measure performance. In the end I went down a bit of a rabbit hole, but I found some interesting results.
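For context, this is a sketch of the classic recursive Fibonacci in F#, along the lines of the version in the book; the exact code I benchmarked isn't shown here, so treat it as an illustration rather than the measured implementation.

```fsharp
// Classic recursive Fibonacci: fib 0 = 0, fib 1 = 1,
// fib n = fib (n - 1) + fib (n - 2) for n >= 2.
let rec fib n =
    match n with
    | 0 -> 0
    | 1 -> 1
    | n -> fib (n - 1) + fib (n - 2)

printfn "%d" (fib 10) // 55
```

The pattern match doubles as both the base cases and the recursive step, which is part of why the F# version reads so cleanly.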
The results above correspond to the names of the tests below. It's very difficult to create a truly scientific test on your own computer, but the tests were run on an i5 Sandy Bridge with 8 GB of RAM, with no programs other than Visual Studio and Windows 7 running. I ran the whole suite 5 times; each run executed every test 100,000 times, grouped on the test name, took the average, and saved it to sheet 2 above. I then used a pivot table to aggregate and average again.
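The measurement loop can be sketched roughly like this; the names here (`timeTest`, the dummy test body) are placeholders for illustration, not the harness I actually ran.

```fsharp
open System.Diagnostics

// Run a test function a fixed number of times and report the
// average wall-clock time per call in milliseconds.
let timeTest name (test: unit -> int) iterations =
    let sw = Stopwatch.StartNew ()
    for _ in 1 .. iterations do
        test () |> ignore
    sw.Stop ()
    name, float sw.ElapsedMilliseconds / float iterations

// Example usage with a stand-in workload.
let name, avgMs = timeTest "fibFSharp" (fun () -> 21 + 21) 100000
printfn "%s: %f ms per call" name avgMs
```

Averaging over a large iteration count like this smooths out timer resolution and JIT warm-up noise, which is why each test was run 100,000 times before aggregating.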
I was surprised and pleased that the F# methods seemed to run the fastest. Even more surprising was that some of the VB entries were next.
I wasn't surprised that the shorthand VB method (`If(cond, a, b)`) was slower, even though in theory it should be just as fast as the C# ternary equivalent, `cond ? a : b`. Obviously they don't boil down to the same IL (I might just check that next).
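It's worth noting that F# doesn't need a separate ternary operator at all, because `if`/`else` is already an expression; a small sketch of the shape the VB `If(...)` and C# `? :` shorthands take in F#:

```fsharp
// F#'s if/else yields a value directly, playing the role of
// C#'s cond ? a : b and VB's If(cond, a, b).
let maxOf a b = if a > b then a else b

printfn "%d" (maxOf 3 7) // 7
```

That may go some way to explaining why the F# versions read so naturally: the conditional-as-expression form is the default, not a special shorthand.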
It's worth noting that many of the results are very close together. They are in the correct order but are rounded to 4 decimal places, which I think is accurate enough for most situations.
The original algorithm was the fastest, but as they were all so close, it's worth considering other factors, such as which syntax is most expressive. As it turns out, I think the original F# algorithm reads the nicest, by a long way.