using Printf  # on Julia >= 0.7, @printf lives in the Printf stdlib

function foo(n::Integer)
    x::Int = 0
    t1::Float64 = @elapsed for i in 1:n
        x = x + 1
    end
    @printf "Int: %f\n" t1
    y::BigInt = BigInt(0)
    t2::Float64 = @elapsed for i in 1:n
        y = y + BigInt(1)
    end
    @printf "BigInt: %f\n" t2
end

foo(100_000_000)
The comparable R script,
b <- Sys.time()
x <- 0
for (i in 1:100000000)
{
    x <- x + 1
}
e <- Sys.time()
e - b
takes 57.10305 seconds.
The analogous Python script,
from time import time

b = time()
x = 0
for i in range(100000000):
    x = x + 1
e = time()
e - b
takes 22.614732027053833 seconds.
In the Python code, it's better to use xrange instead of range, since xrange takes less memory.
I'm curious how you could make the Python code run faster.
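For instance, a minimal Python 2 sketch of that comparison (xrange was removed in Python 3, where range is already lazy; the function wrapper and the smaller n are my additions, since range(100000000) materializes a multi-gigabyte list in Python 2):

from time import time

def count(n, seq):
    # Increment a counter once per element of seq(n).
    x = 0
    for i in seq(n):
        x = x + 1
    return x

n = 10000000  # 10x smaller than the benchmark above, to keep range()'s list manageable

b = time()
count(n, range)   # Python 2: builds the full n-element list up front
print "range:  %f" % (time() - b)

b = time()
count(n, xrange)  # lazy sequence: constant memory
print "xrange: %f" % (time() - b)

On top of the memory difference, wrapping the loop in a function is itself a common CPython speedup, because local-variable lookups are faster than global ones.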
Without knowing much about Julia, I would guess that the BigInt constructor is taking a lot of the time; if you moved the constructor out of the loop, as follows, it should be much faster (I realize this is only toy code, but it should indicate where the milliseconds are going).
using Printf  # on Julia >= 0.7, @printf lives in the Printf stdlib

function foo(n::Integer)
    x::Int = 0
    t1::Float64 = @elapsed for i in 1:n
        x = x + 1
    end
    @printf "Int: %f\n" t1

    y::BigInt = BigInt(0)
    t2::Float64 = @elapsed for i in 1:n
        y = y + BigInt(1)  # allocates a fresh BigInt on every iteration
    end
    @printf "BigInt: %f\n" t2

    y = BigInt(0)
    z::BigInt = BigInt(1)  # hoist the constant out of the loop
    t2 = @elapsed for i in 1:n
        y = y + z
    end
    @printf "BigInt(2): %f\n" t2
end

foo(10_000_000)
My Julia build is broken at the moment, but the Forio web REPL indicates roughly half the time going to the constructor.
@erasmospunk, Mark Roddy showed that xrange and sum instead of a loop give you a big speedup, but still not the full Julia speed. (In fairness, beating Julia here is close to impossible: the machine code for the loop is about as tight as you can get.)
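For reference, a sketch of that kind of rewrite (my reconstruction, not Mark Roddy's exact code): handing the iteration to sum moves the per-element work out of the Python bytecode loop and into the interpreter's C runtime.

from time import time

n = 100000000

# Explicit Python-level loop: one bytecode dispatch per iteration.
b = time()
x = 0
for i in xrange(n):
    x = x + 1
print "loop:        %f" % (time() - b)

# sum over a lazy xrange: the n additions all happen in C.
b = time()
total = sum(xrange(n))
print "sum(xrange): %f" % (time() - b)

Note that sum(xrange(n)) computes 0 + 1 + ... + (n-1) rather than counting to n, so it benchmarks the iteration cost rather than reproducing the original loop exactly.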
@shabbychef, you're right that the BigInt constructor is slowing this loop down a lot. I wrote things that way for simplicity, but this example shows that the Julia compiler is still not smart enough to hoist constants like this out of loops during optimization. At the same time, I don't understand very well how the R and Python interpreters actually process these lines, so I wanted the three examples to be as close to parallel as I knew how to make them.
On my machine, this gives the following timings: