My blog has ~250MB of photos in it (I archived a decade of Instagram photos here), which means that every time I want a few files from my blog's repo I've had to pull down all that data just to get at them.
This isn't a problem on my laptop where I work on my blog (as it's already cloned), but I also keep my IndieKit config in there, which I need to copy to the server it runs from. I don't want to have to download hundreds of MB (and growing!) of photo data every time I want to update it.
So I built `git-download-subpath`, which is a bash script around git's partial clone functionality.
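If you've not come across it, partial clone is the feature behind the `--filter=` option on `git clone`: git skips downloading some objects up front and fetches them from the remote only when they're actually needed. The "treeless" variant this script leans on looks roughly like this:

```bash
# Treeless partial clone: grab the commit history, but no trees or file
# contents until something actually asks for them
git clone --filter=tree:0 --no-checkout <repo>
```

Wrapped up in a script, usage looks like this: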
```
$ git download-subpath
# usage: git-download-subpath <repo> <subpath> [destination]
$ git download-subpath https://github.com/by-jp/www.byjp.me indiekit
# Successfully downloaded to ./indiekit
```
This script (which you can find & download below) completes these steps:
- Clones a "no tree" copy of the repo to a temporary directory (just references to commits & the latest files, not the data itself)
- Completes a "no cone" sparse checkout of the subpath desired
- Moves the desired subpath into your working directory (or the destination, if specified), roughly as sketched below
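Stripped of the argument parsing, error handling and cleanup that the full script below takes care of, those three steps boil down to something like this (a rough sketch, not the script itself):

```bash
#!/usr/bin/env bash
set -euo pipefail

repo="$1"       # e.g. https://github.com/by-jp/www.byjp.me
subpath="$2"    # e.g. indiekit
dest="${3:-./$(basename "$subpath")}"
tmp="$(mktemp -d)"

# 1. Treeless, no-checkout partial clone: commit history only, no file data yet
git clone --quiet --filter=tree:0 --no-checkout "$repo" "$tmp"

# 2. Non-cone sparse checkout of just the subpath, then check out, which
#    fetches only the objects that match (needs a reasonably recent git)
git -C "$tmp" sparse-checkout set --no-cone "$subpath"
git -C "$tmp" checkout --quiet

# 3. Move the subpath into place and tidy up
mv "$tmp/$subpath" "$dest"
rm -rf "$tmp"
```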
I'm no bash expert, so there may be subtle bugs here; let me know if you spot them!
Copy the following bash to `git-download-subpath` somewhere in your `$PATH` (I keep it in `/usr/local/bin`), and mark it as executable with `chmod +x git-download-subpath`: