The table below shows the benchmark results for reading 10 times 10000 values and 10000 times 10 values; the source code in each programming language follows below:
Language | 10 times 10000 values | 10000 times 10 values |
---|---|---|
F# | 6 seconds | 20 seconds |
Python | 26 seconds | 40 seconds |
Julia | 45 seconds | 72 seconds |
Go | 8 seconds | 25 seconds |
OCaml | 2.5 seconds | 48 seconds |
R | 110 seconds | NA |
The overall fastest version is the one written in F#, but note that it's also the version I have tweaked the most. As I'm not very experienced in most of these languages, any performance tips are welcome. Note that I tried using memory-mapped files in .NET and Python: this improved performance when querying lots of values from the same file, but made it worse in other cases.
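To illustrate the memory-mapped approach, here is a minimal sketch in Python. This is not the exact code I benchmarked: the function name `read_values_mmap` is just for this example, and I assume the files store little-endian 32-bit integers, as the OCaml and R readers below do.

```python
import mmap
import struct

def read_values_mmap(filename, indices):
    # Sketch only: same sentinel handling as read_values below, but reading
    # through a memory-mapped view instead of a seek/read call per value.
    values = []
    with open(filename, 'rb') as f:
        mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
        try:
            for index in indices:
                # "<i" assumes little-endian int32 storage
                v = struct.unpack_from("<i", mm, index * 4)[0]
                values.append(None if v == -2147483648 else v)
        finally:
            mm.close()
    return values
```

The appeal of the memory-mapped variant is that repeated lookups in the same file avoid a seek and read system call per value, which fits with it only paying off when many values are read from the same file.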
The implementation of the functionality is in most cases rather similar across the languages. Some notable differences were:
- Julia apparently doesn't have a null value, so I refrained from checking whether the read integer value was equal to the Int32 minimum value (-2147483648).
- In Go, converting the bytes to integers was faster with a custom function.
- I didn't find a function in the OCaml Core library to convert bytes to a 32-bit integer, but luckily I found one on Stack Overflow.
F#:
```fsharp
open System
open System.IO

let readValue (reader:BinaryReader) cellIndex =
    reader.BaseStream.Seek(int64 (cellIndex*4), SeekOrigin.Begin) |> ignore
    match reader.ReadInt32() with
    | Int32.MinValue -> None
    | v -> Some(v)

let readValues indices fileName =
    use reader = new BinaryReader(File.Open(fileName, FileMode.Open, FileAccess.Read, FileShare.Read))
    let values = Array.map (readValue reader) indices
    values
```
Python:
```python
import os
import struct

def read_values(filename, indices):
    # indices are sorted and unique
    values = []
    with open(filename, 'rb') as f:
        for index in indices:
            f.seek(index*4L, os.SEEK_SET)
            b = f.read(4)
            v = struct.unpack("@i", b)[0]
            if v == -2147483648:
                v = None
            values.append(v)
    return values
```
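For context, here is a rough sketch of how a driver could call `read_values` for these benchmarks. This is a simplified, hypothetical version: the function name `benchmark`, its parameters, and the index generation via `random.sample` are assumptions, and as noted in the comments below, the real tests loop over a set of about 40 files.

```python
import random
import time

def benchmark(filenames, runs, values_per_run, cells_per_file):
    # Hypothetical driver: runs=10, values_per_run=10000 would correspond
    # to the "10 times 10000 values" column in the table above.
    start = time.time()
    for _ in range(runs):
        # sorted, unique random indices, matching the assumption in read_values
        indices = sorted(random.sample(range(cells_per_file), values_per_run))
        for filename in filenames:
            read_values(filename, indices)
    return time.time() - start
```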
Julia:
```julia
function readvalue(stream, position)
    seek(stream, position)
    return read(stream, Int32)
end

function readvalues(filename::String, indices)
    stream = open(filename, "r")
    try
        return Int32[readvalue(stream, index*4) for index in indices]
    finally
        close(stream)
    end
end
```
Go:
import ("os") func bytes2int(b []byte) int32 { v := int32(0) for i := 0; i < 4; i++ { v = v | (int32(b[i]) << (uint(8*i))) } return v } func readValues(indices []int, filename string) []int32 { results := make([]int32, len(indices)) b := make([]byte, 4) f,_ := os.Open(filename) for i, cellIndex := range indices { f.Seek(int64(cellIndex*4), os.SEEK_SET) f.Read(b) value := bytes2int(b) // around 10-20% faster then binary.Read if value != -2147483648 { results[i] = value } else { results[i] = 99999 } } return results }
OCaml:
```ocaml
let input_le_int32 inchannel =
  (* http://stackoverflow.com/a/6031286/477367 *)
  let res = ref 0l in
  for i = 0 to 3 do
    let byte = input_byte inchannel in
    res := Int32.logor !res (Int32.shift_left (Int32.of_int byte) (8*i))
  done;
  match !res with
  | -2147483648l -> None
  | v -> Some(v)

let readvalue inchannel index =
  seek_in inchannel (index*4);
  input_le_int32 inchannel

let readvalues (indices:int array) filename =
  let inchannel = open_in_bin filename in
  try
    let result = Array.map (readvalue inchannel) indices in
    close_in inchannel;
    result
  with e ->
    close_in_noerr inchannel;
    raise e
```
R:
read.values <- function(filename, indices) { conn <- file(filename, "rb") read.value <- function(index) { seek(conn, where=index*4) readBin(conn, integer(), size = 4, n = 1, endian = "little") } r <- sapply(indices,read.value) close(conn) r[r==-2147483648] <- NA r }
Any suggestions for improving the implementation in one of the above programming languages? Which language would you like to compare my results with? Which other language(s) do you expect to be faster than my benchmark results? Can you help me out with a version in your favorite language, or in C, Fortran, Common Lisp, Scheme, Clojure, Java or J?
4 comments:
Hi, I just saw your benchmarks and noticed the bad timings for Julia. There isn't a date of when you published this, but I tried to improve the Julia implementation: http://git.io/v4xgA
10,000 times 10 values:
0.794152 seconds (487.72 k allocations: 26.247 MB, 0.24% gc time)
10 times 10,000 values:
1.480457 seconds (667.21 k allocations: 34.061 MB, 0.43% gc time)
Using a 4-core Intel(R) Core(TM) i5-3210M CPU @ 2.50GHz, Windows 7, Julia 0.4.
Which makes more sense!
Thanks for the effort. I've tested your code on my benchmark and the performance is the same as my original version. Initially I used Julia 0.3, and I've now tested on Julia 0.4.1, but that doesn't seem to change anything. Note that in my tests I read multiple files (nearly 40), which corresponds to the roughly 40-fold difference between your measurements and mine.
Sorry, not clear! So you read 40 files in Julia and R but only one file with the others? Is that what leads to the time difference or an unfair comparison? Thanks
All my tests run on the same set of about 40 files, but the Julia code from Ismael VC was only benchmarked on one file. So, for example, when reading 10,000 times 10 values, a total of 40 * 10000 * 10 = 4,000,000 numbers are read in my tests. All my code can be found here: https://github.com/samuelbosch/blogbits/blob/master/geosrc/binreaders/