When I worked at WebDevStudios, we spoke in gifs. I guess, these days, that makes me old. Whatever. I miss them. It represented a shared language even when the source of the gif was unknown. (I remember distinctly how it felt when I found the Key and Peele sketch from which this gem was taken:)
(It kinda felt like that ☝️)
Justin Sternberg even wrote an Alfred workflow called WDS Giffy, pointed at his personal collection of gifs, which predated (in practice, if not in actuality) the gif-sharing service Giphy. I can't say for certain, but I think most of us used it, at least while I was there.
I still have Giffy in Alfred; I just don't use it. I have my own gifs site that uses the (now ancient) GifDrop WordPress plugin developed by Mark Jaquith. But I got curious and wanted to see how Justin's Alfred workflow worked. Specifically, where the images came from.
This led me to a long conversation with ChatGPT.
You see, after I found the API endpoint feeding the Alfred WDS Giffy workflow, I decided it would be cool to write a script that could just...download all those gifs. I don't know that I want all of them, but there are a lot that I do want. And hey, maybe other people want them, too?
The API endpoint that powers the Alfred workflow is hard-coded into the script. The script asks you one question: where do you want to put the files? It will create a subdirectory with that name in your `~/Pictures` folder and start downloading. The script is specifically tailored to this API endpoint -- while it might work to swap something else in, it would have to assume the same structure and architecture as Justin's (which is to say, it might work, but don't expect any miracles).
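That prompt-and-folder step is simple enough to sketch. This isn't lifted from the script itself, just a minimal illustration of the behavior described above:

```bash
#!/usr/bin/env bash
# Minimal sketch of the prompt-and-folder step -- not the actual script,
# just the same idea.
read -rp "Where do you want to put the files? " folder_name
target_dir="$HOME/Pictures/$folder_name"
mkdir -p "$target_dir"
echo "Downloading into $target_dir"
```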
I tried a bunch of different things. I wanted to be able to keep the original modified date -- because some of these are historic relics -- but I was running into issues with some files breaking. There are also inconsistencies in the API data. Some images are missing, some entries are corrupted, and it's a lot of data to parse through (5,470 entities, to be specific). Since there are so many, I decided the script should download files concurrently -- it grabs them 10 at a time with `curl`. It shows a running count of how many you've downloaded out of the total, and it writes any errors or issues to a `download_log.txt` in the folder you specified.
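For the curious, here's one common way to get 10-at-a-time downloads with `curl`. This isn't copied from the script; it assumes you've already pulled the gif URLs out of the API JSON (with `jq`, say) into a `urls.txt` file:

```bash
# Sketch only: parallel downloads with xargs + curl, 10 at a time.
# Assumes urls.txt holds one gif URL per line and target_dir already exists.
target_dir="$HOME/Pictures/wds-gifs"   # example path
cd "$target_dir" || exit 1
xargs -n 1 -P 10 curl --silent --fail --show-error --remote-name \
  < urls.txt 2>> download_log.txt
```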
When it's done, it deletes any zero-byte files and any temporary files it might have left behind, and it tells you how many files it downloaded. I haven't yet gone through the error log, so there might be some things I could clean up even more, but it downloaded 5,446/5,470, which is a 99% success rate, so I'd call it pretty good, actually.
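That cleanup pass is the kind of thing `find` handles in a couple of lines. Again, this is a sketch of the idea rather than the script's exact code, and the `*.tmp` pattern is just an assumption:

```bash
# Sketch of a cleanup pass: drop empty downloads and leftover temp files,
# then report the final count. The .tmp pattern is assumed, not taken from
# the actual script.
target_dir="$HOME/Pictures/wds-gifs"   # example path
find "$target_dir" -type f -size 0 -delete
find "$target_dir" -type f -name '*.tmp' -delete
echo "Downloaded $(find "$target_dir" -type f | wc -l | tr -d ' ') files."
```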
Here it is for your perusal and/or downloading pleasure.
The script was written with the assumption that you're on a Mac. If you're on Windows or Linux, you're kind of SOL, because Bash and some of the tools used might not be 100% the same. But ChatGPT got me here, and it could probably adjust the script to your use case.
If you want to run it yourself, download the `download-sternberg-gifs.sh` file somewhere onto the computer you want to download the gifs to. In your terminal, `cd` to that directory. Run `chmod +x ./download-sternberg-gifs.sh` to ensure it can execute.
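In terminal terms, that's just this (the `~/Downloads` path is only an example; use wherever you saved the file):

```bash
cd ~/Downloads                         # or wherever you saved the script
chmod +x ./download-sternberg-gifs.sh  # make it executable
```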
You will need `jq` and `curl` to run the script. If `which jq` or `which curl` comes up empty, you can install either of them with Homebrew via `brew install jq` or `brew install curl`.
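Or, if you'd rather let the shell decide, a check like this only installs what's missing (my shorthand, not part of the script):

```bash
# Install jq/curl only if the `which` checks come up empty.
which jq   || brew install jq
which curl || brew install curl
```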
Assuming you have all your prereqs in place, you can run the script by just typing `./download-sternberg-gifs.sh`. It will prompt you for a directory and then get to downloading.
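Putting it all together, a run looks something like this (the prompt wording here is illustrative, not the script's exact output):

```bash
./download-sternberg-gifs.sh
# -> asks where you want to put the files
# -> creates ~/Pictures/<your answer> and downloads the gifs 10 at a time
# -> writes any problems to download_log.txt in that folder
```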
If you have any issues with the script, let me know and I'll probably ask ChatGPT to fix them. 😄