Started by asking it to write Python code, using an example from the Reservoir docs:
can you turn this into a python3 request
```bash
curl --request GET \
  --url 'https://api.reservoir.tools/tokens/v6?collection=0x3Fe1a4c1481c8351E91B64D5c398b159dE07cbc5&sortBy=tokenId&sortDirection=asc&limit=10&includeAttributes=true' \
  --header 'accept: */*' \
  --header 'x-api-key: demo-api-key'
```
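For reference, the equivalent `requests` call looks roughly like this (a sketch; ChatGPT's actual output may have differed slightly):

```python
import requests

url = "https://api.reservoir.tools/tokens/v6"
params = {
    "collection": "0x3Fe1a4c1481c8351E91B64D5c398b159dE07cbc5",
    "sortBy": "tokenId",
    "sortDirection": "asc",
    "limit": 10,
    "includeAttributes": "true",
}
headers = {
    "accept": "*/*",
    "x-api-key": "demo-api-key",
}

response = requests.get(url, params=params, headers=headers)
print(response.json())
```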
Then I asked it to iterate over the 20k-token NFT collection:
can you make it iterate 100 at a time, from 0-20,000
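The obvious first pass, and the one implied by my next correction, pages through with an `offset` parameter. Roughly:

```python
import requests

url = "https://api.reservoir.tools/tokens/v6"
headers = {"accept": "*/*", "x-api-key": "demo-api-key"}

# Page through the collection 100 tokens at a time. This version assumes an
# "offset" parameter -- corrected below, since the API actually paginates
# with a continuation key.
for offset in range(0, 20000, 100):
    params = {
        "collection": "0x3Fe1a4c1481c8351E91B64D5c398b159dE07cbc5",
        "sortBy": "tokenId",
        "sortDirection": "asc",
        "limit": 100,
        "offset": offset,
    }
    tokens = requests.get(url, params=params, headers=headers).json()["tokens"]
```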
Told it a bit about how the response was formatted and how it should interface with the API:
so actually instead of providing "offset", each response will provide a continuation key in the form of

```json
{
  ...rest of response
  "continuation": "abcdef123"
}
```

which you must use in the subsequent request params as "continuation": "abcdef123"
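With that correction, the loop pattern looks roughly like this (my sketch, not necessarily ChatGPT's exact code):

```python
import requests

url = "https://api.reservoir.tools/tokens/v6"
headers = {"accept": "*/*", "x-api-key": "demo-api-key"}

continuation = None
while True:
    params = {
        "collection": "0x3Fe1a4c1481c8351E91B64D5c398b159dE07cbc5",
        "sortBy": "tokenId",
        "sortDirection": "asc",
        "limit": 100,
    }
    if continuation:
        # Feed the previous response's continuation key into the next request
        params["continuation"] = continuation
    data = requests.get(url, params=params, headers=headers).json()
    # ... process data["tokens"] here ...
    continuation = data.get("continuation")
    if not continuation:
        break
```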
Now that it had a loop set up properly, I told it about the business logic I wanted it to include:
great. let's exclude the "includeAttributes" param. the response will have the following shape

```json
{
  "tokens": [
    {
      "token": {
        "tokenId": "0",
        "image": "https://i.seadn.io/gcs/files/9acb975358b9caf654f9103f309c1b3e.png?w=500&auto=format"
      }
    }
  ],
  "continuation": "MHhkNjY4YTJlMDAxZjMzODViOGJiYzVhODY4MmFjM2MwZDgzYzE5MTIyXzk5"
}
```
there will be many `tokens`. you will need to go through each token on each request to get the `token.image`, download the image, and output it to a folder called "./images/". name the image by the `token.tokenId`, so the resulting file should be at `./images/0.`
some of the `token.image`s will be .gif and some will be .avif, so you will need to first determine which one it is, and then convert it to a webp image when you save the file.
also, you should resize the images so they are 100x100 pixels.
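A minimal sketch of that per-token step, assuming Pillow for the conversion (note that AVIF decoding in Pillow typically requires the separate pillow-avif-plugin package):

```python
import os
from io import BytesIO

import requests
from PIL import Image  # AVIF decoding assumes pillow-avif-plugin is installed

os.makedirs("./images", exist_ok=True)

def save_token_image(token):
    content = requests.get(token["image"]).content
    # Pillow sniffs the real format (GIF, AVIF, PNG, ...) from the bytes,
    # so we don't have to trust the URL's extension.
    img = Image.open(BytesIO(content))
    img = img.convert("RGB")  # flattens GIF palettes/alpha; takes frame 0 of animations
    img = img.resize((100, 100))
    img.save(f"./images/{token['tokenId']}.webp", "WEBP")
```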
I noticed a bug, so I asked it to fix it:
i found a bug. to fix it, you have to check if `token.image` is None, and if it is, use `token.collection.image` instead
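The fix itself is a small fallback before downloading (sketch; the field access assumes each token object also carries its collection metadata):

```python
image_url = token["image"]
if image_url is None:
    # Some tokens have no image of their own; use the collection image instead
    image_url = token["collection"]["image"]
```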
It was running very slowly, so I asked it to run the downloads in parallel. This is usually a relatively big engineering task:
this works great. can you please make it async so it can download images in parallel?
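A sketch of the async version, using aiohttp for the downloads and a semaphore to cap concurrency (the exact library and limits here are assumptions; the generated code may have differed):

```python
import asyncio
from io import BytesIO

import aiohttp
from PIL import Image

async def download_token(session, token, semaphore):
    image_url = token["image"] or token["collection"]["image"]
    async with semaphore:  # cap concurrent downloads
        async with session.get(image_url) as resp:
            content = await resp.read()
    # Decode/resize/save is still synchronous, which is fine at 100x100
    img = Image.open(BytesIO(content)).convert("RGB").resize((100, 100))
    img.save(f"./images/{token['tokenId']}.webp", "WEBP")

async def download_all(tokens):
    semaphore = asyncio.Semaphore(20)  # arbitrary concurrency limit
    async with aiohttp.ClientSession() as session:
        await asyncio.gather(
            *(download_token(session, t["token"], semaphore) for t in tokens)
        )

# asyncio.run(download_all(tokens))
```

The downloads now overlap on the network instead of running one at a time, which is where nearly all of the waiting was.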
Once it did this, I just had it fix a few smaller bugs and it was set to go. In total, I spent 20 minutes prompting ChatGPT before it had completely built this scraper. I probably would've spent 1-2 hours writing this myself, and a junior dev would probably have spent a day or more.