@tobek
Last active November 12, 2024 18:43
Save images from Chrome inspector/dev tools network tab
/* 1. open up Chrome dev tools (Menu > More tools > Developer tools)
 * 2. go to the Network tab, refresh the page, and wait for images to load (on some sites you may have to scroll down to the images for them to start loading)
 * 3. right click/ctrl click on any entry in the network log, select Copy > Copy All as HAR
 * 4. open up the JS console and enter: var har = [paste]
 * 5. (pasting could take a while if there are a lot of requests)
 * 6. paste the following JS code into the console
 * 7. copy the output, paste into a text file
 * 8. open up a terminal in the same directory as the text file, then: wget -i [that file]
 */
var imageUrls = [];
har.log.entries.forEach(function (entry) {
  // This filters out everything except images. If you want only e.g. JPEGs, check mimeType against "image/jpeg" instead.
  if (entry.response.content.mimeType.indexOf("image/") !== 0) return;
  imageUrls.push(entry.request.url);
});
console.log(imageUrls.join('\n'));
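
If selecting and copying the console output freezes DevTools on image-heavy pages (an issue a few commenters below ran into), one optional variation, a sketch rather than part of the original instructions, is to use Chrome's console-only copy() utility, which puts the result straight on the clipboard:

var imageUrls = [];
har.log.entries.forEach(function (entry) {
  if (entry.response.content.mimeType.indexOf("image/") !== 0) return;
  imageUrls.push(entry.request.url);
});
// copy() is a DevTools command-line API, not standard JS; it copies the string to the clipboard.
copy(imageUrls.join('\n'));

You can then paste straight into a text file without selecting anything in the console.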
@felipebhz

Thank you very much! Saved me a lot of manual work downloading 150+ images from a website.

@Pustur

Pustur commented Dec 26, 2017

Thanks! saved me some time :)

@Norod

Norod commented Sep 17, 2018

Thank you very much for this

@nitinsunny

can you explain in detail?

@tobek

tobek commented Oct 10, 2019

can you explain in detail?

I updated the instructions somewhat, but basic familiarity with Chrome dev tools and the command line is needed; otherwise, searching for a Chrome extension that does it for you might be your best bet.

@wangyung

thanks! this gist is very useful

@rgluis

rgluis commented May 4, 2020

thanks!!

@protrolium

Fantastic demo. It did get sluggish for me, and the inspector tended to freeze depending on the number of images.

@VianaArthur

Thank you.

@cutero

cutero commented Sep 14, 2020

Thank you!!! OMG :)

@umop3plsdn

lmfaoooo this is super clever haha

@puziyi

puziyi commented Jan 7, 2021

Thank you so much! And I found it does take some time to paste, so I wrote a Python script to get the image URLs offline. See below.

import json
from haralyzer import HarParser

# Download the .har file from Developer tools (roughly the same steps as above), then parse it offline.
# Even with many image files to download, there is no long console paste to wait for.
with open('source_har.har', 'r') as f:
    har_parser = HarParser(json.loads(f.read()))

data = har_parser.har_data["entries"]
image_urls = []

for entry in data:
    if entry["response"]["content"]["mimeType"].startswith("image/"):
        image_urls.append(entry["request"]["url"])

# Save the URL list to a text file directly.
with open('target_link.txt', 'w') as f:
    for link in image_urls:
        f.write("%s\n" % link)
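
The resulting target_link.txt can then go straight to step 8 of the original instructions: wget -i target_link.txt.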

@aykun1907

@puziyi thanks for the Python script!

@robinagata

hi, I just created an account to respond to this thread. I followed the steps up until "7. copy the output, paste into a text file": when I attempt to copy the output or right click on it, DevTools freezes, and if I'm off the tab for too long, DevTools goes blank unless I refresh. I'm also having trouble downloading Python, so I can't use the offline downloader script provided by @puziyi. How do I get around the first issue?

@robinagata

I figured out a workaround a while ago by using Mozilla Firefox and following the steps from there. Now my issue is at "8. open up a terminal in the same directory as the text file, then: wget -i [that file]": when I input "wget -i [the file path]", Windows Terminal first asks me to "Supply values for the following parameters: Uri:", and typing the target website comes back with an error. Should I go somewhere else, since my problem deviates from the original topic?

@tobek

tobek commented Sep 8, 2021

when I input "wget -i [the file path]", Windows Terminal first asks me to "Supply values for the following parameters: Uri:", and typing the target website comes back with an error

The instructions I wrote are for Linux. I didn't think Windows even had wget, but it sounds like it does, with a different interface (in PowerShell, wget is actually an alias for Invoke-WebRequest, which is why it prompts for a Uri). Look up how to download files using a text file with a list of URLs in Windows.

@climardo

Cool. Thanks!

@DrMemoryFish

I don't understand what you mean by it taking a long time to paste. When I paste, it's instant, and then I get the message "undefined".

@puziyi

puziyi commented May 27, 2022

I don't understand what you mean by it taking a long time to paste. When I paste, it's instant, and then I get the message "undefined".

It occurs in situations where one needs to download a bunch of images. (The "undefined" you see is just the console echoing the return value of the var statement; that part is expected.)

@gigberg

gigberg commented Apr 3, 2023

Update: we can also use Charles or Fiddler to proxy the Chrome/Firefox HTTP traffic, then just select and save all the image files to your computer; remember to add a file extension like .jpeg or .png afterwards. It's effective when you need to download images that require cookies. However, this method won't keep the file order as it appears in the DevTools Network panel.

@gigberg

gigberg commented Apr 3, 2023

An example Python script for downloading images from a HAR with cookies, inspired by @puziyi:

import json
import requests

with open('source_har.har', 'r', encoding="utf-8") as f:
    har_json = json.loads(f.read())

for i, entry in enumerate(har_json['log']["entries"]):
    if entry["response"]["content"]["mimeType"].startswith("image/jpeg"):
        url = entry["request"]["url"]
        name = str(i) + '.jpeg'
        # HAR stores cookies as a list of {"name": ..., "value": ...} objects;
        # fold them into the name -> value dict that requests expects.
        # str() handles values the HAR exported as booleans.
        cookies = {c["name"]: str(c["value"]) for c in entry["request"]["cookies"]}
        img = requests.get(url, cookies=cookies).content
        with open(name, 'wb') as f:
            f.write(img)
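
If the site serves formats other than JPEG, one easy generalization (an assumption about your needs, not something from the original comment) is to filter on startswith("image/") and derive the file extension from the mimeType instead of hard-coding .jpeg.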

@raebdam

raebdam commented Jun 21, 2023

This worked on my Windows 10:

& 'C:\path\to\wget.exe' -r -nH --cut-dirs=<N> -P 'C:\Path\to\output' -i 'target_link.txt'
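
For reference on those flags: -i reads the URL list from the file, -P sets the output directory, -r makes wget recreate the remote directory structure, and -nH with --cut-dirs=<N> drops the host name and the first N path components from that structure.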

@selmyabed

Thanks! Saved me some time <3
If you get "'wget' is not recognized as an internal or external command", follow this ==> https://bobbyhadz.com/blog/wget-is-not-recognized-as-internal-or-external-command

@michal-gmail

On Windows, instead of
"wget -i [that file]"
use the following command from PowerShell:

Get-Content [that file] | ForEach-Object { Invoke-WebRequest -Uri $_ -OutFile (Split-Path -Leaf $_) }
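
One caveat, based on how Split-Path behaves rather than anything reported in this thread: -Leaf keeps everything after the last slash, so URLs with query strings produce file names containing "?", which Windows rejects; on such sites you may need to strip the query from $_ before using it as the output file name.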
