(See example query below)
First way, fetching each article's comments independently (sketched in code below):

- Fetch a page of article IDs
  /article-ids/?first=10
- Batch fetch the articles represented by each ID
  /articles/batch/{id1,id2,...id10}/
- For each article, fetch a page of comment IDs (repeats 10 times)
  /articles/{n}/comment-ids/?first=3
- For each page of comment IDs, fetch the comments (repeats 10 times; these can't be combined into one batch because the previous requests won't all complete at the same time)
  /comments/batch/{id1,id2,id3}/

Total number of requests: 22
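To make the request count concrete, here is a minimal sketch of the first way in TypeScript. The fetchJson helper, the Article and Comment types, and the exact URL shapes are assumptions for illustration, not a real API:

```typescript
// Hypothetical helper and types, for illustration only.
interface Article { id: string; title: string; }
interface Comment { id: string; body: string; }

async function fetchJson<T>(path: string): Promise<T> {
  const res = await fetch(`https://api.example.com${path}`);
  return res.json() as Promise<T>;
}

async function loadFirstWay() {
  // 1 request: a page of 10 article IDs
  const articleIds = await fetchJson<string[]>("/article-ids/?first=10");

  // 1 request: batch fetch the 10 articles
  const articles = await fetchJson<Article[]>(
    `/articles/batch/${articleIds.join(",")}/`
  );

  // 10 + 10 requests: each article's chain runs independently, so each
  // comment batch fires as soon as its own ID page arrives; the ten
  // /comments/batch/ calls never line up into a single request.
  const commentPages = await Promise.all(
    articleIds.map(async (id) => {
      const commentIds = await fetchJson<string[]>(
        `/articles/${id}/comment-ids/?first=3`
      );
      return fetchJson<Comment[]>(`/comments/batch/${commentIds.join(",")}/`);
    })
  );

  return { articles, commentPages }; // 22 requests total
}
```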
Second way, batching the edge requests as well (sketched in code below):

- Fetch a page of article IDs
  /article-ids/?first=10
- Batch fetch the articles represented by each ID
  /articles/batch/{id1,id2,...id10}/
- Batch fetch a page of 3 comment IDs for each article
  /articles/{id1,id2,...id10}/comment-ids/batch/?first=3
- Fetch all the comments for all the pages in one batch
  /comments/batch/{id1,id2,...id30}/

Total number of requests: 4
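And a sketch of the second way, reusing the same hypothetical helper and types from the sketch above (the response shape of the batched comment-ids endpoint is also an assumption):

```typescript
// fetchJson, Article, and Comment as in the previous sketch.
declare function fetchJson<T>(path: string): Promise<T>;
interface Article { id: string; title: string; }
interface Comment { id: string; body: string; }

async function loadSecondWay() {
  // 1 request: a page of 10 article IDs
  const articleIds = await fetchJson<string[]>("/article-ids/?first=10");

  // 1 request: batch fetch the 10 articles
  const articles = await fetchJson<Article[]>(
    `/articles/batch/${articleIds.join(",")}/`
  );

  // 1 request: a page of 3 comment IDs for every article at once
  // (assumed response shape: one array of comment IDs per article)
  const commentIdPages = await fetchJson<string[][]>(
    `/articles/${articleIds.join(",")}/comment-ids/batch/?first=3`
  );

  // 1 request: all 30 comments in a single batch
  const comments = await fetchJson<Comment[]>(
    `/comments/batch/${commentIdPages.flat().join(",")}/`
  );

  return { articles, comments }; // 4 requests total
}
```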
Why is the second way harder? DataLoader assumes that the only information needed to construct a batch request is a list of IDs. When dealing with graph nodes this is correct, but when dealing with edges we need more information, specifically the page size.
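One way to recover batching here, sketched below, is to make the DataLoader key a composite of the node ID and the page size rather than a bare ID; DataLoader accepts object keys as long as you supply a cacheKeyFn. The batched edge endpoint and its response shape are assumptions carried over from the example above:

```typescript
import DataLoader from "dataloader";

declare function fetchJson<T>(path: string): Promise<T>; // as in the sketches above

// The key carries the extra information a bare node ID can't: the page size.
interface CommentPageKey {
  articleId: string;
  first: number;
}

const commentIdsLoader = new DataLoader<CommentPageKey, string[], string>(
  async (keys) => {
    // For simplicity this assumes every key in the batch shares one page
    // size; mixed sizes would need one request per distinct `first`.
    const first = keys[0].first;
    const ids = keys.map((k) => k.articleId).join(",");
    // One request fetches a page of comment IDs for every article in the batch.
    const pages = await fetchJson<string[][]>(
      `/articles/${ids}/comment-ids/batch/?first=${first}`
    );
    return pages; // one string[] per key, in key order
  },
  // Object keys need a scalar cache key to deduplicate correctly.
  { cacheKeyFn: (key) => `${key.articleId}:${key.first}` }
);

// In a resolver: every article asks for its own page, and DataLoader
// coalesces all loads made in the same tick into one batched request.
// const commentIds = await commentIdsLoader.load({ articleId, first: 3 });
```

The trade-off is that the cache key must encode every pagination argument, otherwise two loads that differ only in page size would collide in the cache.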