Related to Netflix/metaflow#907 and the related docs draft.
To use the feature, the user must learn a brand new way of doing `foreach`. This adds a high degree of cognitive load: the user has to remember that for this particular use case, and this use case only, they need to write `self.next(..., num_parallel=...)`.
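For concreteness, here is a minimal sketch contrasting the two idioms. The flow, step, and attribute names are illustrative, and the `num_parallel` line reflects the API proposed in #907 rather than anything in the current docs:

```python
from metaflow import FlowSpec, step

class ForeachExampleFlow(FlowSpec):
    """Illustrative only: the familiar foreach idiom vs. the proposed transition."""

    @step
    def start(self):
        # The foreach idiom every Metaflow user already knows:
        self.shards = ["a", "b", "c"]
        self.next(self.process, foreach="shards")
        # With the new feature, this one transition would instead read
        #   self.next(self.process, num_parallel=3)
        # a separate idiom to remember for this use case only.

    @step
    def process(self):
        self.result = self.input  # the current foreach item
        self.next(self.join)

    @step
    def join(self, inputs):
        self.results = [i.result for i in inputs]
        self.next(self.end)

    @step
    def end(self):
        pass

if __name__ == "__main__":
    ForeachExampleFlow()
```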
The API also makes it very unclear where the parallelization is happening. For example, `pytorch_lightning.Trainer` has a `gpus=-1` argument, which means it will use all available GPUs. In that case, what does `num_parallel` add? Having to reason about where, and what kind of, parallelization is happening puts a lot of cognitive overhead on the user.
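A hedged sketch of that ambiguity, assuming the `num_parallel` transition from #907 and the older `Trainer(gpus=...)` signature from PyTorch Lightning; the flow and step names are illustrative, and any extra decorator the feature may require on the parallel step is omitted:

```python
from metaflow import FlowSpec, step

class LightningTrainFlow(FlowSpec):
    """Illustrative only: both the orchestrator and the training library claim the parallelism."""

    @step
    def start(self):
        # Metaflow is asked to fan the training step out into 4 parallel tasks...
        self.next(self.train, num_parallel=4)

    @step
    def train(self):
        import pytorch_lightning as pl

        # ...while Lightning is told to use every available GPU. Which layer
        # splits the work, which one owns the GPUs, and do the two compose or
        # collide? The API gives the user no obvious way to tell.
        trainer = pl.Trainer(gpus=-1)
        # trainer.fit(model)  # model definition omitted for brevity
        self.next(self.join)

    @step
    def join(self, inputs):
        self.next(self.end)

    @step
    def end(self):
        pass

if __name__ == "__main__":
    LightningTrainFlow()
```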