
@lexthor
Forked from ridem/Download-Shopify-CDN-Assets.md
Last active September 23, 2024 13:41
Download all Shopify CDN assets from a store
// Collects the URL of every file on the current Files page, follows the
// pagination until the last page, then triggers downloadListFile().
function fetchPageAssets() {
  var assets = $("#assets-table .next-input--readonly")
  assets.each(function (index, input) {
    files.push(input.value)
    if (index + 1 == assets.length) {
      // The numeric id of the last row is used as the pagination cursor
      var lastItem = $(input).parents("tr[bind-class]").attr('bind-class').replace(/\D/g, '')
      $.ajax({
        url: "/admin/settings/files?direction=next&last_id=" + lastItem + "&last_value=" + lastItem + "&limit=100&order=id+desc",
      }).done(function (data) {
        // Watch the DOM while the fetched page renders: recurse if another
        // page of assets appeared, or download the list if the page is empty.
        var mutationObserver = new MutationObserver(function (mutations, observer) {
          mutations.some(function (mutation) {
            if (mutation.target.id &&
                mutation.target.id == "assets-area" &&
                mutation.addedNodes[0].nextElementSibling &&
                mutation.addedNodes[0].nextElementSibling.innerHTML.indexOf("empty") > -1
            ) {
              downloadListFile()
              observer.disconnect()
              return true;
            } else if (mutation.target.id &&
                mutation.target.id == "assets-area" &&
                mutation.previousSibling &&
                mutation.previousSibling.className == "ui-layout ui-layout--full-width"
            ) {
              fetchPageAssets()
              observer.disconnect()
              return true;
            }
          })
        });
        mutationObserver.observe(document, {
          childList: true,
          subtree: true
        });
        // Replace the current document with the fetched page
        var newDoc = document.open("text/html", "replace");
        newDoc.write(data);
        newDoc.close();
      })
    }
  })
}

// Builds a data URI from the collected URLs and clicks a temporary
// link to download them as shopify-files.html.
function downloadListFile() {
  var downloader = $("<a id='download-file' href='' download='shopify-files.html'></a>")
  $(".ui-title-bar").append(downloader)
  var data = 'data:application/octet-stream;base64,' + window.btoa(files.join("\r\n"));
  $('#download-file').attr('href', data);
  $('#download-file')[0].click();
}

var files = []
fetchPageAssets()

Instructions

  1. Go to your Shopify admin/settings/files page.
  2. Open your browser's dev tools and go to the console.
  3. Paste the contents of the console_download_list.js file above and press Enter.
  4. Your browser will automatically fetch each page and download a list with links to all of the files on the CDN.
  5. Using your preferred code editor, edit the HTML file, wrapping each link in an img tag.
  6. Open the HTML file you just edited in a browser, then right-click and choose "Save as". This downloads the HTML file again, along with all the images, to the location you specify.
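Step 5 can also be scripted instead of done by hand. A minimal sketch, assuming the downloaded shopify-files.html contains one URL per line and that sed is available:

```shell
# Wrap each CDN URL (one per line) in an <img> tag so the browser
# fetches every asset when the resulting page is saved.
# "&" in the replacement stands for the whole matched line.
sed 's|.*|<img src="&">|' shopify-files.html > shopify-gallery.html
```

Open shopify-gallery.html in a browser and continue with step 6.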
@mirzabhai

Hi @lexthor, I tried your technique on my Shopify store and it worked great up to a point. When I try step 6, all my files download but many of them are not opening. "The file "1.png" could not be opened. It may be damaged or use a file format that Preview doesn't recognize" When I went directly to the HTML page that was generated to right click and save the image, it's recognizing it as a "Google's WebP" file. I think that's the problem. What gives? Thanks!

@lexthor
Author

lexthor commented Jan 3, 2021

Hello @mirzabhai,
If you haven't yet, try using Google Chrome for this, as it converts the WebP format to PNG/JPG automatically.
Also, go into Settings → Files in your Shopify store and check whether that file actually exists.
If there aren't too many files that failed to download automatically, try downloading them manually from the Files page of your Shopify store.

@hafuuu

hafuuu commented Apr 2, 2021

This code seems to no longer work due to UI changes on the Files page, as shown in the attached picture.

Do you have any idea how to get all the file links with the new UI?

[Screenshot: Shopify console]

@lexthor
Author

lexthor commented Apr 2, 2021

Hello hafuuu,

I hadn't noticed they'd changed the UI.
I'll look into it and try to come up with a solution.

@hafuuu

hafuuu commented Apr 2, 2021

Thanks for your quick reply!

This code was very helpful, so I'll wait for the update.

@softlimit-ben

Running into the same issue as above. Thanks for working on this!

@alemens

alemens commented Feb 4, 2022

My workaround for now was:

  • download the whole link list to a file with a Chrome extension such as simple-mass-downloader
  • clean up each line of the file (in my case with Visual Studio Code): find the lines with the regex .*_60x60.*\r?\n, then search and replace _60x60 with an empty string
  • finally, download the list: wget -P ~/Downloads -i ~/list.txt
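The editor-based cleanup in the second step can also be done from the command line. A sketch using sed, assuming the link list is in list.txt:

```shell
# Strip the "_60x60" thumbnail suffix from every URL so wget
# fetches the full-size originals instead of the thumbnails.
sed 's/_60x60//g' list.txt > list-clean.txt
```

Then feed list-clean.txt to wget as above: wget -P ~/Downloads -i list-clean.txt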

@wasalwayshere

wasalwayshere commented Aug 23, 2022

  1. Go to: admin/settings/files?limit=250 in Shopify

  2. Paste the following into the console (Chrome) and press Enter to output the list of URLs:

const arrOfImgsSources = [];
for (var i = 0; i < document.images.length; i++) {
    let str1 = "_60x60";
    let img_src = document.images[i].src;
    const isSubstring = img_src.includes(str1);
    if (isSubstring) {
        var clean = img_src.replace(/_60x60/g, '');
        arrOfImgsSources.push(clean);
        var filename = clean.replace(/^.*[\\\/]/, '')
        var clean_filename = filename.split('?')[0]
        //console.log(clean_filename)
        console.log(clean)
    }
};
//console.log(arrOfImgsSources);

Copy and paste the output into a file named urls.txt, then in the macOS terminal run:
wget --content-disposition --trust-server-names -i urls.txt

Then to clean up the filenames:

find . -type f -name "*\?*" -print0 | 
while IFS= read -r -d '' file; 
do 
    mv -f "$file" "`echo $file | cut -d? -f1`"; 
done
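The same rename can be sketched without cut, using shell parameter expansion to strip everything from the first "?" onward (assuming bash or zsh; this sidesteps quoting issues around the "?" delimiter):

```shell
# Rename files like "photo.png?v=123" to "photo.png".
# ${file%%\?*} removes the longest suffix starting at the first "?".
find . -type f -name '*\?*' -print0 |
while IFS= read -r -d '' file; do
    mv -f "$file" "${file%%\?*}"
done
```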

@alemens

alemens commented Aug 27, 2022


That works! Thanks @wasalwayshere

@MarkusPayne

Thanks @wasalwayshere this worked perfectly and was huge help!

@tyhallcsu


Thanks a bunch, this worked flawlessly

@tyhallcsu

Bump; this code has saved me for 6mo+ thank you @wasalwayshere

@rahulbhanushali

rahulbhanushali commented May 6, 2024

The "clean up the filenames" part doesn't work on macOS; cut -d? produces this error:
zsh: no matches found: -d?

For macOS, use the below:

find . -type f -name "*\?*" -print0 | 
while IFS= read -r -d '' file; 
do 
    mv -f "$file" "`echo $file | cut -d '?' -f1`"; 
done
