@antonydevanchi
Last active March 6, 2022 04:56
Download all images from undraw.co/illustrations
# npm i -g svgexport
# recolor the default #6c63ff fill to #ff6347 while exporting to PNG
svgexport hiking_d24r.svg hiking_d24r.png "svg *[fill=\"#6c63ff\"]{fill: #ff6347}"
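If svgexport is not an option, the same fill swap can be approximated with a plain string replacement, assuming the color appears as a literal fill value in the SVG markup (which undraw's inline-styled SVGs generally do). `recolor_svg` below is a hypothetical helper, not part of any library:

```python
# Sketch: swap undraw's default fill (#6c63ff) for a new color in raw SVG text.
# Assumes the old color appears verbatim in the markup (fill attributes or CSS).
OLD_FILL = "#6c63ff"
NEW_FILL = "#ff6347"

def recolor_svg(svg_text, old=OLD_FILL, new=NEW_FILL):
    # A blunt but effective replacement for single-color undraw illustrations.
    return svg_text.replace(old, new)

sample = '<svg><path fill="#6c63ff" d="M0 0h10v10H0z"/></svg>'
print(recolor_svg(sample))  # fill becomes #ff6347
```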
# one-liner (needs pup2, jq, and aria2c)
for ((i=1;i<30;i++)); do curl -s https://undraw.co/illustrations/load/$i | pup2 'a[data-src] json{}' | jq '.[] .children[] .children[] .children[] .src' -r -M | xargs -i aria2c {} -d ./imgs; done;
@antonydevanchi
Author

Community — so sweet ^_^
Thank you!

@rseyferth

rseyferth commented Feb 1, 2022

This updated one-liner worked for me:

for ((i=1;i<66;i++)); do curl -s https://undraw.co/api/illustrations\?page\=$i | jq '.illos[] | (.title, .image)' -r -M | sed -e 's/\(.*\)/\1/; s/\ /_/g'|xargs -n2 -L2 bash -c 'curl --silent --output ./$1.svg $2 > /dev/null' bash;done;

@isaacgr

isaacgr commented Mar 6, 2022

I can't get the above script to work, so here's my Python script. The requests dependency is required.

#!/usr/bin/env python3

import os
import requests
from multiprocessing.pool import ThreadPool

def build_index():
    page = 1
    urls = []

    while True:
        res = requests.get("https://undraw.co/api/illustrations?page={}".format(page))
        json_body = res.json()

        for item in json_body['illustrations']:
            title = item['title']
            url = item['image']

            print("Title: %s => URL: %s" % (title, url))
            urls.append([title, url])

        # Check for the last page before reading 'nextPage', which may be absent there.
        if not json_body['hasMore']:
            print("Finished Gathering JSON.")
            return urls

        page = json_body['nextPage']
        print("Proceeding to Page %d" % page)

def download_from_entry(entry):
    title, url = entry
    file_name = "%s.svg" % title.lower().replace(' ', '_')
    path = "./images/%s" % file_name

    print("Downloading %s" % file_name)

    if not os.path.exists(path):
        res = requests.get(url, stream=True)

        if res.status_code == 200:
            with open(path, 'wb') as f:
                for chunk in res:
                    f.write(chunk)

            return file_name

os.makedirs("./images", exist_ok=True)
urls = build_index()

print("Downloading %d files." % len(urls))

results = ThreadPool(20).imap_unordered(download_from_entry, urls)

for path in results:
    print("Downloaded %s" % path)

print("Downloaded %d files." % len(urls))

This worked. Great stuff. Only thing is it should be
for item in json_body['illos']:

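A side note for anyone adapting the script above: `ThreadPool(20).imap_unordered(...)` returns an iterator that yields each result as soon as its worker finishes, so results arrive in completion order, not input order (and any entry the worker skips comes back as None). A minimal sketch of that behavior with a toy function:

```python
from multiprocessing.pool import ThreadPool

def square(n):
    return n * n

# imap_unordered yields results as workers finish; collect into a set
# when order does not matter.
with ThreadPool(4) as pool:
    results = set(pool.imap_unordered(square, range(5)))

print(sorted(results))  # [0, 1, 4, 9, 16]
```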