@satra
Created November 5, 2011 19:22
matlab python linalg benchmark
>> disp('Eig');tic;data=rand(500,500);eig(data);toc;
disp('Svd');tic;data=rand(1000,1000);[u,s,v]=svd(data);s=svd(data);toc;
disp('Inv');tic;data=rand(1000,1000);result=inv(data);toc;
disp('Det');tic;data=rand(1000,1000);result=det(data);toc;
disp('Dot');tic;a=rand(1000,1000);b=inv(a);result=a*b-eye(1000);toc;
disp('Done');
Eig
Elapsed time is 0.840714 seconds.
Svd
Elapsed time is 1.931674 seconds.
Inv
Elapsed time is 0.478826 seconds.
Det
Elapsed time is 0.082375 seconds.
Dot
Elapsed time is 0.298401 seconds.
Done
>> disp('Eig');tic;data=rand(500,500);[a,b]=eig(data);toc;
disp('Svd');tic;data=rand(1000,1000);[u,s,v]=svd(data);s=svd(data);toc;
disp('Inv');tic;data=rand(1000,1000);result=inv(data);toc;
disp('Det');tic;data=rand(1000,1000);result=det(data);toc;
disp('Dot');tic;a=rand(1000,1000);b=inv(a);result=a*b-eye(1000);toc;
disp('Done');
Eig
Elapsed time is 0.511574 seconds.
Svd
Elapsed time is 1.913088 seconds.
Inv
Elapsed time is 0.181648 seconds.
Det
Elapsed time is 0.074878 seconds.
Dot
Elapsed time is 0.287743 seconds.
Done
>> disp('Eig');tic;data=rand(500,500);[a,b]=eig(data);toc;
disp('Svd');tic;data=rand(1000,1000);[u,s,v]=svd(data);s=svd(data);toc;
disp('Inv');tic;data=rand(1000,1000);result=inv(data);toc;
disp('Det');tic;data=rand(1000,1000);result=det(data);toc;
disp('Dot');tic;a=rand(1000,1000);b=inv(a);result=a*b-eye(1000);toc;
disp('Done');
Eig
Elapsed time is 0.548438 seconds.
Svd
Elapsed time is 2.069913 seconds.
Inv
Elapsed time is 0.176397 seconds.
Det
Elapsed time is 0.076114 seconds.
Dot
Elapsed time is 0.307366 seconds.
Done
In [3]: %timeit data=np.random.rand(500,500);a,b = np.linalg.eig(data)
1 loops, best of 3: 404 ms per loop
In [4]: %timeit data=np.random.rand(1000,1000);u,s,v = np.linalg.svd(data); snew=np.linalg.svd(data, full_matrices=False)
1 loops, best of 3: 2.11 s per loop
In [5]: %timeit data=np.random.rand(1000,1000);idata = np.linalg.inv(data)
10 loops, best of 3: 198 ms per loop
In [6]: %timeit data=np.random.rand(1000,1000);ddata = np.linalg.det(data);
/Library/Frameworks/EPD64.framework/Versions/7.1/lib/python2.7/site-packages/numpy/linalg/linalg.py:1676: RuntimeWarning: overflow encountered in exp
return sign * exp(logdet)
10 loops, best of 3: 68.6 ms per loop
In [7]: %timeit data=np.random.rand(1000,1000);ddata = np.linalg.det(data);
10 loops, best of 3: 68.5 ms per loop
In [8]: %timeit data=np.random.rand(1000,1000);idata = np.linalg.inv(data); result = np.dot(data,idata)-np.eye(1000)
1 loops, best of 3: 319 ms per loop
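The `%timeit` lines above include the random-number generation inside the measured statement. A minimal sketch of the same measurement with the data generated once, outside the timed call (using only the standard-library `timeit` module and NumPy; the repeat/number counts are arbitrary choices for illustration):

```python
import timeit
import numpy as np

# Pre-generate the matrix once so RNG cost is excluded from the timing.
data = np.random.rand(1000, 1000)

# Time only the inversion itself: 3 runs of 3 calls each, keep the best.
best = min(timeit.repeat(lambda: np.linalg.inv(data), repeat=3, number=3)) / 3
print("inv: %.1f ms per loop" % (best * 1e3))
```

The same separation can be done in MATLAB by moving `data=rand(...)` before the `tic`.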
Model Name: MacBook Pro
Model Identifier: MacBookPro5,2
Processor Name: Intel Core 2 Duo
Processor Speed: 2.93 GHz
Number of Processors: 1
Total Number of Cores: 2
L2 Cache: 6 MB
Memory: 8 GB
Bus Speed: 1.07 GHz
Boot ROM Version: MBP52.008E.B05
SMC Version (system): 1.42f4
System Version: Mac OS X 10.7 (11A511)
Kernel Version: Darwin 11.0.0
Boot Volume: OSX
Boot Mode: Normal
Secure Virtual Memory: Enabled
64-bit Kernel and Extensions: Yes
@scotthirsch

What is the goal of this comparison? A few things puzzle me. Why include random number generation in the timing? I'm assuming your interest is in the execution time of the linear algebra routines themselves. The other puzzle is why you used random numbers; I'm guessing that the performance of many of these algorithms could depend heavily on the structure of the data.


satra commented Nov 7, 2011

@scott: the goal was essentially to show that the performance of the two is similar. As soon as I get to a power supply, I will update the benchmarks to take the random number generator out of the timed loop and to use the same data for both MATLAB and Python. I simply don't have the time for really specific comparisons; if you have data I can use, I'm happy to use the same for both. One additional purpose is to compare against an equivalent Boost routine.
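One way to run both environments on identical data, as suggested above, is to generate the matrix once and exchange it through a file format both can read. A sketch using a plain CSV file (the file name is arbitrary; MATLAB can read it with `csvread` or `readmatrix`):

```python
import numpy as np

# Generate the test matrix once and write it to a text file
# that both MATLAB and Python can load.
data = np.random.rand(1000, 1000)
np.savetxt("bench_data.csv", data, delimiter=",")

# In MATLAB: data = csvread('bench_data.csv'); tic; inv(data); toc
# In Python: reload and benchmark only the linear algebra routine.
loaded = np.loadtxt("bench_data.csv", delimiter=",")
```

A `.mat` file via `scipy.io.savemat` would also work and loads faster in MATLAB; CSV is used here only to keep the sketch dependency-free beyond NumPy.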
