```python
# Alt 1: very explicit, but requires knowledge of what types of workers exist
from rq.workers.gevent import GeventWorker
w = GeventWorker(4)

# Alt 2: a bit like an "Abstract Factory"; no need to know the implementing
# class names, backward compatible, and kind='forking' by default
from rq import Worker
w = Worker(4, kind='gevent')

# Alt 3: I don't think (2) works without really nasty tricks. This one is as
# simple as (2), but uses a convenient class method, so we can return an
# instance of the implementing subclass instead. Switching worker impls is as
# easy as changing a single string.
from rq import Worker
w = Worker.create('gevent', 4)

# Alt 4: a helper function that wraps (1)
from rq import make_worker
w = make_worker('gevent', 4)  # will return a rq.workers.gevent.GeventWorker
                              # with max 4 processes
```
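For Alt 3, the `Worker.create` class method could be backed by a small registry mapping kind strings to subclasses. The sketch below is only an illustration of that pattern; the registry mechanics, the `register` decorator, and the class layout are my assumptions, not RQ's actual API:

```python
# Hypothetical sketch of Alt 3: a class-method factory that maps a
# kind string to the implementing subclass. The registry and the
# register() decorator are assumptions for illustration only.

class Worker:
    #: maps kind strings to implementing subclasses
    _registry = {}

    def __init__(self, num_processes=1):
        self.num_processes = num_processes

    @classmethod
    def register(cls, kind):
        """Class decorator that records a subclass under a kind string."""
        def decorator(subclass):
            cls._registry[kind] = subclass
            return subclass
        return decorator

    @classmethod
    def create(cls, kind, num_processes=1):
        """Return an instance of the subclass registered for `kind`."""
        try:
            impl = cls._registry[kind]
        except KeyError:
            raise ValueError('unknown worker kind: %r' % kind)
        return impl(num_processes)


@Worker.register('gevent')
class GeventWorker(Worker):
    pass


w = Worker.create('gevent', 4)
```

Switching implementations then really is a single-string change at the call site, while the subclass names stay out of user code.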
@nvie I knew that 4 is the number of concurrent processes, but just looking at the code it's not really clear.
Maybe you could force passing num_processes as a keyword argument to keep it always readable.
Otherwise I would keep the standard Worker separate from something like a ConcurrentWorker (which might then default to num_processes=2, for example), unless it really makes sense for the default implementation to be concurrent.
Otherwise I would vote for option #2.
And anyway, it's probably better to leave the old alternatives in place instead of replacing them, or old answers won't make much sense.
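Forcing the keyword could be done with Python 3's keyword-only argument syntax. A minimal sketch of that idea (the class here is a stand-in, not RQ's real `Worker`):

```python
# Sketch of forcing num_processes to be keyword-only (Python 3 syntax).
# Call sites become self-documenting: Worker(num_processes=4) works,
# while the opaque Worker(4) raises a TypeError.

class Worker:
    def __init__(self, *, num_processes=1):
        self.num_processes = num_processes


w = Worker(num_processes=4)   # explicit and readable
```

Passing `Worker(4)` positionally would raise a `TypeError`, which is exactly the readability guarantee suggested above.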
1 is the most explicit. You call a factory on a class and it returns an instance of that class. What's the use case for needing to choose the type of worker on the fly, instead of importing it?
@saulshanabrook: The main use case is that implementing custom worker scripts would be easier. The default `rqworker` script would look something like this:

```python
import sys
from rq import make_worker

...

if __name__ == '__main__':
    args = parse_args()
    worker = make_worker(args.concurrency_backend, args.num_processes)
    worker.work()
```

Where the `concurrency_backend` and `num_processes` arguments could be command line flags, for example:

```
$ rqworker -c gevent -n 4
```

Implementing custom worker scripts would be just as easy and could use the default `rqworker` script as a reference implementation.
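The flag parsing for such a script could be a few lines of `argparse`. This is only a sketch of what `parse_args` might look like: the flag names follow the `-c`/`-n` example above, but the long option names and defaults are my assumptions:

```python
# Hypothetical sketch of the parse_args() used by the rqworker script
# above. Flag names mirror the example invocation (-c gevent -n 4);
# long names and defaults are assumptions for illustration.
import argparse


def parse_args(argv=None):
    parser = argparse.ArgumentParser(prog='rqworker')
    parser.add_argument('-c', '--concurrency-backend', default='forking',
                        help="worker kind, e.g. 'forking' or 'gevent'")
    parser.add_argument('-n', '--num-processes', type=int, default=1,
                        help='max number of concurrent processes')
    return parser.parse_args(argv)


args = parse_args(['-c', 'gevent', '-n', '4'])
```

A custom worker script would only need to swap in its own defaults or extra flags while keeping the same `make_worker(...)` call.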
I'm really interested in concurrency. Did any of this get implemented?
I'm curious what the final decision was on this one.
I've added another alternative. What about (4)?