@Neztore
Created December 3, 2020 01:39
Basic Gyazo image downloader to clone all of your images, minus any metadata.
/*
Basic Gyazo file downloader.
How-to-use:
1. Dependencies
Ensure you are using a recent version of node (tested w/ node v12)
Install node-fetch
2. Folder
Create an "imgs" folder in your current directory
3. Cookie
Open the Gyazo site, open the network tab and scroll down.
Once you see a request that goes to https://gyazo.com/api/internal/images?page=1&per=40,
Go to Request headers and copy the contents of the "Cookie" header.
Put it into the env variable "cookie".
Notes:
- You lose metadata. This isn't ideal, and is something I want to improve.
- It fails randomly - I definitely haven't worked out all of the bugs, but it works most of the time.
  On restart it begins again from page 0, but it will fetch the image list, see that those images
  have already been downloaded, and move on.
By Neztore, Edited 03/12/20
*/
const fetch = require("node-fetch");
const util = require('util');
const fs = require('fs');
const streamPipeline = util.promisify(require('stream').pipeline);
const { access } = fs.promises;
const PAGE_SIZE = 100;
async function getPage (pageNo) {
  const url = `https://gyazo.com/api/internal/images?page=${pageNo}&per=${PAGE_SIZE}`;
  const h = new fetch.Headers();
  h.append("Cookie", process.env.cookie);
  const res = await fetch(url, { headers: h });
  try {
    const r = await res.json();
    if (!Array.isArray(r)) {
      console.log("Warning: response is not an array!");
      console.log(r);
      return false;
    }
    if (r.length === 0) {
      return false;
    }
    // Remove images that have already been downloaded
    const out = [];
    for (const item of r) {
      const path = `./imgs/${item.image_id}.png`;
      try {
        await access(path, fs.constants.F_OK);
        console.log(`> ${item.image_id} has already been downloaded. Skipping!`);
      } catch (e) {
        out.push(item);
      }
    }
    return out;
  } catch (e) {
    console.log("Failed to fetch");
    console.error(e);
    // Return false explicitly so the caller stops cleanly instead of receiving undefined
    return false;
  }
}
async function downloadFile (img) {
  // Skip if already downloaded
  const path = `./imgs/${img.image_id}.png`;
  // Not every image has metadata, so guard the title lookup
  const title = (img.metadata && img.metadata.title) || "untitled";
  try {
    await access(path, fs.constants.F_OK);
    console.log(`>>> ${img.image_id} has already been downloaded. Skipping!`);
    return false;
  } catch (e) {
    // do nothing - it doesn't exist.
  }
  let response;
  try {
    response = await fetch(img.url);
  } catch (e) {
    console.error(`Failed to download ${img.image_id}: ${title}.`);
    console.error(`Reason: ${e.message}`);
    return false;
  }
  if (response.ok) {
    // Await the pipeline so we don't report success before the file is fully written
    await streamPipeline(response.body, fs.createWriteStream(path));
    return true;
  }
  if (response.status === 404) {
    console.error(`Failed to download ${img.image_id}: ${title}.`);
    console.error("404: Not found.");
    return false;
  }
  throw new Error(`Unexpected response: ${response.statusText}`);
}
async function run (pageNo, onceDone) {
  console.log(`Downloading page ${pageNo}`);
  const p = await getPage(pageNo);
  if (!p) {
    console.log(`Empty page: all pages downloaded.`);
    return onceDone(true);
  }
  let downloadCount = 0;
  let skipCount = 0;
  async function wrap (no) {
    if (!p[no]) {
      return onceDone();
    }
    const d = await downloadFile(p[no]);
    if (d) {
      if (p[no].metadata) {
        console.log(`Downloaded ${no} - ${p[no].metadata.title}: ${p[no].image_id}`);
      } else {
        console.log(`Downloaded ${no} - ${p[no].image_id}`);
      }
      downloadCount++;
    } else {
      skipCount++;
    }
    // Small delay between downloads to avoid hammering the API
    setTimeout(function () {
      if (no < p.length - 1) {
        return wrap(no + 1);
      } else {
        console.log(`Page done. Total processed: ${p.length}. ${skipCount} skipped | ${downloadCount} downloaded.`);
        onceDone();
      }
    }, 200);
  }
  return wrap(0);
}
async function top (no) {
  await run(no, function (allDone) {
    if (!allDone) {
      setTimeout(top, 1000, no + 1);
      console.log(`\n\n\n----\n\n`);
    }
  });
}
top(0).catch(console.error);
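A minimal sketch of the setup steps from the header comment, as shell commands (the filename gyazo-download.js is an assumption - use whatever name you saved the script under):

```shell
# Create the folder the script writes images into
mkdir -p imgs

# Install the one dependency; node-fetch v2 is the last major version that
# supports require() (v3 is ESM-only), so pin it:
#   npm install node-fetch@2

# Run with the session cookie copied from the browser's network tab:
#   cookie="<contents of the Cookie header>" node gyazo-download.js
```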
@sbhadr

sbhadr commented Dec 3, 2020

I got lost. How do I find my way back home?

@ReduxGB

ReduxGB commented Dec 3, 2020

> I got lost. How do I find my way back home?

You don't, you're stuck here now.

@quinnaissance

Just wanted to say thanks, because this still worked flawlessly 4 years later, for 2000 images.

@Neztore
Author

Neztore commented Dec 18, 2024

Glad to hear it helped. Surprised it still works - it barely worked at the time...

@joinemm

joinemm commented Feb 9, 2025

In case someone still stumbles upon this page: this doesn't work anymore. Gyazo no longer sends the image_id for anything other than the latest 10 images.

@Laesx

Laesx commented Apr 2, 2025

> in case someone still stumbles upon this page, just letting you know this doesn't work anymore. gyazo no longer sends the image_id for other than the latest 10 images

Still works - you can get a "pro" trial without entering your credit card or any other new information, and the script works flawlessly.

@quinnaissance

quinnaissance commented Apr 2, 2025

> Still works, you can get a "pro" trial without introducing your credit card or any new information and the script works flawlessly.

Oh yeah I totally forgot to mention this is what I did as well.
