- Can use either `threading` or `multiprocessing` for concurrent or parallel processing, respectively, of the data generator.
- In the `threading` approach (`model.fit_generator(..., pickle_safe=False)`
), the generator can be run concurrently (but not in parallel) in multiple threads, with each thread pulling the next available batch based on the shared state of the generator and placing it in a shared queue. However, the generator must be thread-safe (i.e., use locks at synchronization points).
- Due to the Python global interpreter lock (GIL), the threading option generally does not benefit from >1 worker (i.e., `model.fit_generator(..., nb_worker=1)` is best). One possible use case in which >1 threads could be beneficial is the presence of exceptionally long IO times, during which the GIL will be released to enable concurrency. Note also that TensorFlow's `session.run(...)`
method also releases the GIL, thus allowing a thread to run in parallel with a training iteration. To achieve the best performance with this approach, the wall-clock time for generating a single batch with the data generator should be less than that of a single training iteration; otherwise, data generation will become a bottleneck.
- In the `multiprocessing` approach (`model.fit_generator(..., pickle_safe=True)`
), the generator can be copied and run in parallel by multiple processes. The key thing to realize here is that the multiprocessing approach will (by default) fork the current process for each worker process, and thus each process will effectively start with a copy of the generator. Subsequently, each process will run its own "copy" of the generator in parallel, with no synchronization. Thus, while any generator will run without error, this approach can, if one is not careful, result in the processes generating exactly the same batches at (basically) the same time, since a deterministic generator will be evaluated in the same manner in each process. The issue here is that with `n` processes, the model may see the same batch for `n` consecutive steps, and an "epoch" may actually consist of `total_batches/n` unique batches, rather than `total_batches`. To fix this, the generator can be reformulated in a manner that relies on NumPy random numbers for generating batches, as the `GeneratorEnqueuer` class will set a random seed with `np.random.seed()`
for each process.
- Due to the overhead of (de)serialization, the multiprocessing option generally only benefits from >1 worker (e.g., `model.fit_generator(..., nb_worker=8)`), and will generally result in much better performance than the threading option.
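To illustrate the fix described above, here is a minimal sketch of a generator reformulated to draw batch indices from NumPy's per-process random state, so that forked workers (each seeded differently via `np.random.seed()`) produce different batches. The function name and signature are illustrative, not part of Keras:

```python
import numpy as np

def random_batch_generator(X, y, batch_size=32):
    """Yield batches sampled via NumPy's global RNG.

    Because the GeneratorEnqueuer seeds each worker process with
    np.random.seed(), forked copies of this generator draw different
    random indices, so the processes produce different batches
    instead of identical deterministic ones.
    """
    n = len(X)
    while True:
        # np.random.* uses the per-process global RNG state
        idx = np.random.randint(0, n, size=batch_size)
        yield X[idx], y[idx]
```

A deterministic generator that simply steps through `X` in order would not benefit from this seeding, which is exactly the pitfall described above.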
- In light of the above information, the `ImageDataGenerator` is thread-safe, so it can be used with the threading approach above. However, it is not completely appropriate for the multiprocessing approach, due to the issue described above, despite not throwing any errors. If an `ImageDataGenerator` generator is used with the multiprocessing approach, the behavior is such that the first epoch will suffer from the problem of the same deterministic generator running in each process, and thus the same batches will be produced at (basically) the same time by each process. If shuffling is used, then the generators will diverge in subsequent epochs, because the `Iterator` superclass randomly permutes the indices with `index_array = np.random.permutation(n)` at the start of each epoch, making use of the random seed set for each process.
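The thread-safety requirement mentioned for the threading approach can be sketched with a lock-protected wrapper around an arbitrary generator. The class and decorator names below are illustrative; this is one common pattern, not a Keras API:

```python
import threading

class ThreadSafeIterator:
    """Wrap a generator so multiple threads can pull batches safely.

    A lock serializes access to the underlying generator's shared
    state, so each thread gets the next available batch without two
    threads advancing the generator at the same time.
    """
    def __init__(self, it):
        self.it = it
        self.lock = threading.Lock()

    def __iter__(self):
        return self

    def __next__(self):
        with self.lock:  # synchronization point
            return next(self.it)

def threadsafe(gen_func):
    """Decorator that makes the batches from gen_func thread-safe."""
    def wrapper(*args, **kwargs):
        return ThreadSafeIterator(gen_func(*args, **kwargs))
    return wrapper
```

A generator decorated with `@threadsafe` can then be passed to `model.fit_generator(..., pickle_safe=False)` with multiple threads pulling from it.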
Hi, I have a question about `ImageDataGenerator`. In your post, `ImageDataGenerator` is thread-safe and can be used with the threading approach, and in the threading approach (`model.fit_generator(..., pickle_safe=False)`) the generator can be run concurrently (but not in parallel) in multiple threads. I did some experiments and found that using the `ImageDataGenerator` interface does not improve the performance of data generation with the threading approach, and is even worse with the multiprocessing approach. So how can I run `ImageDataGenerator` in parallel? Thanks.