@jjhelmus
Created November 22, 2013 22:35
#! /usr/bin/env python
import pyart
import numpy as np
from pyart.testing.sample_objects import make_empty_ppi_radar, \
    _EXAMPLE_RAYS_FILE


def pproc(LP_solver, proc=1):
    """ Phase process using LP_solver and the given number of processors. """
    # make an example radar to phase process
    radar = make_empty_ppi_radar(983, 80, 1)
    radar.range['data'] = 117.8784 + np.arange(983) * 119.91698
    f = np.load(_EXAMPLE_RAYS_FILE)
    for field_name in f:
        fdata = f[field_name]
        fdata = np.tile(fdata, (80, 1))  # repeat the sample ray for all 80 rays
        radar.fields[field_name] = {'data': fdata}
    f.close()
    # phase processing; results are discarded, this is only a timing harness
    phidp, kdp = pyart.correct.phase_proc_lp(radar, 0.0, LP_solver=LP_solver,
                                             proc=proc)


if __name__ == '__main__':
    import pstats
    import cProfile
    cProfile.runctx("pproc('cvxopt')", globals(), locals(), "Profile.prof")
    s = pstats.Stats("Profile.prof")
    s.strip_dirs().sort_stats("time").print_stats(10)
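
Running the script directly (python lp_speed_test.py) writes Profile.prof and prints the ten functions with the most internal time. For later analysis, a minimal sketch of re-loading that saved profile with the standard-library pstats module, here sorted by cumulative time instead:

# Re-examine the profile written by lp_speed_test.py without re-running it.
import pstats

p = pstats.Stats("Profile.prof")
p.strip_dirs()              # drop long directory prefixes from file names
p.sort_stats("cumulative")  # sort by cumulative rather than internal time
p.print_stats(10)           # print the ten largest entries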
@jjhelmus (Author)

Results from timing using IPython's %timeit command

%timeit lp_speed_test.pproc(args)

args            timing
'pyglpk'        33.4 s per loop
'cvxopt'        34.3 s per loop
'cylp'          570 ms per loop
'cylp_mp', 1    605 ms per loop
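
For concreteness, a sketch of the IPython session behind these rows; the script above is imported as the module lp_speed_test, and for 'cylp_mp' the second value is the proc argument:

In [1]: import lp_speed_test
In [2]: %timeit lp_speed_test.pproc('pyglpk')
In [3]: %timeit lp_speed_test.pproc('cvxopt')
In [4]: %timeit lp_speed_test.pproc('cylp')
In [5]: %timeit lp_speed_test.pproc('cylp_mp', 1)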

The speed does not improve when moving to more processors because the problem is too small.

Increasing the number of rays to 8000 (and commenting out the print statement in the cylp_mp processing):

args            timing
'cylp'          23.3 s per loop
'cylp_mp', 1    23.3 s per loop
'cylp_mp', 2    19.3 s per loop
'cylp_mp', 4    17.3 s per loop
'cylp_mp', 8    17.4 s per loop
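
A sketch of the larger test case as a drop-in variant next to pproc in lp_speed_test.py, reusing its imports. It assumes (as in the 80-ray call above) that the second argument of make_empty_ppi_radar is the number of rays and that the tiled sample rays must match it; the name pproc_big is hypothetical:

# Variant of pproc with 8000 rays instead of 80; assumes
# make_empty_ppi_radar(ngates, nrays, nsweeps), matching the original call.
def pproc_big(LP_solver, proc=1, nrays=8000):
    radar = make_empty_ppi_radar(983, nrays, 1)
    radar.range['data'] = 117.8784 + np.arange(983) * 119.91698
    f = np.load(_EXAMPLE_RAYS_FILE)
    for field_name in f:
        radar.fields[field_name] = {'data': np.tile(f[field_name], (nrays, 1))}
    f.close()
    # results discarded; this is only a timing harness
    pyart.correct.phase_proc_lp(radar, 0.0, LP_solver=LP_solver, proc=proc)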

@kmuehlbauer

Please also test from a plain command line (a shell), with no IPython or IDLE involved, because this can improve things dramatically. IDLE, at least, apparently handles the separate processes through another pipe or something similar, which added a lot of overhead. I'm very interested in the outcome, because the speedup here was significant.
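
A minimal sketch of checking this from a plain shell using only the standard-library timeit module, with no interactive front end in the way; the file name time_from_shell.py is hypothetical and lp_speed_test.py from above is assumed to be importable:

#!/usr/bin/env python
# time_from_shell.py (hypothetical) -- time pproc outside any interactive
# front end, so nothing extra sits between the parent and worker processes.
import timeit

for args in ["'cylp'", "'cylp_mp', 1", "'cylp_mp', 4"]:
    t = timeit.timeit("lp_speed_test.pproc(%s)" % args,
                      setup="import lp_speed_test", number=1)
    print("%-15s %.2f s" % (args, t))

Run with python time_from_shell.py and compare against the IPython numbers above.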
