/*
After purchasing a humble book bundle, go to your download page for that bundle.
Open a console window for the page and paste in the below javascript
*/
$('a').each(function(i){
    if ($.trim($(this).text()) == 'MOBI') {
        $('body').append('<iframe id="dl_iframe_'+i+'" style="display:none;">');
        document.getElementById('dl_iframe_'+i).src = $(this).data('web');
    }
});
Ok, so building off what @oxguy3 wrote, I came up with the following code to save the file as the actual title of the book or whatever. It could likely be improved upon, so corrections are very much welcome.
Edit: Moved the below code to my own gist for saving/editing; see the updated versions there: https://gist.github.com/Woody2143/830d5eae396f5ddcae4f6b7668690659
var pattern = /(MOBI|EPUB|PDF( ?\(H.\))?|CBZ|Download)$/i;
var nodes = document.getElementsByTagName('a');
var downloadCmd = '';
for (i in nodes) {
    var a = nodes[i];
    if (a && a.text && pattern.test(a.text.trim()) && a.attributes['data-web']) {
        var name = a.parentNode.parentNode.parentNode.parentNode.parentNode.getAttribute("data-human-name");
        name = name.replace(/\s+/g, '_'); /* change spaces to underscores */
        name = name.replace(/'/g, '');    /* don't want single quotes */
        name = name.replace(/:/g, '_-');  /* change : to _- for looks */
        name = name.replace(/,/g, '');    /* don't need commas */
        name = name.replace(/&/g, 'and'); /* taking out the pesky & */
        /* likely the below regex will need to be corrected at some point */
        var extension = /https:\/\/dl\.humble\.com\/.*\.(.*)\?gamekey.*/.exec(a.attributes['data-web'].value);
        name += '.' + extension[1];
        downloadCmd += 'wget --output-document="' + name + '" --content-disposition "' + a.attributes['data-web'].value + "\"\n";
    }
}
downloadCmd += "\n";
var output = document.createElement("pre");
output.textContent = downloadCmd;
document.getElementById("papers-content").prepend(output);
None of the above methods will work, as Humble Bundle appears to have changed how their page renders; it now hides the download URLs.
That being said, there is a single request to their API that returns a JSON response with a list of the files and download links, which can be parsed not only to download the files but to check their hashes as well. If I get around to writing this up I'll post a link to the gist here... Please stand by.
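In the meantime, here's a rough sketch of that idea as plain JavaScript you could paste into the console while logged in (so your session cookie is sent along). Treat the endpoint path and the JSON field names (subproducts, human_name, downloads, download_struct, url.web, md5) as assumptions from my own poking around; they may need adjusting.
/* Untested sketch: fetch one order's metadata and list its files.
   Replace YOUR_GAMEKEY with the key from your download page URL. */
fetch('https://www.humblebundle.com/api/v1/order/YOUR_GAMEKEY', { credentials: 'include' })
    .then(r => r.json())
    .then(order => {
        order.subproducts.forEach(sub => {
            sub.downloads.forEach(dl => {
                dl.download_struct.forEach(file => {
                    /* file.url.web should be the download link, file.md5 the checksum */
                    console.log(sub.human_name, file.name, file.url.web, file.md5);
                });
            });
        });
    });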
Hey @Woody2143, can you share those API request details?
@nvilagos <3
tyty
Couldn't get any of the JavaScript versions to work for me, so I wrote a quick Python script (using the API that @jimmckeeth linked) that downloads all the books to a directory in parallel, with the correct names, and with the ability to select formats or just download everything available.
No luck with any of the JavaScript ones or @achidlow's Python script. The API seems to require logging in before pulling the inventory :(
A small Python script of my own - https://gist.github.com/tkan/9ed02fc0338b8d2562ae5af752384f7c
Note: it will only work for individual orders. So, rather than the whole inventory, you can download separate orders at once with your order key.
My own personal browser-based solution. This is designed to be run from the Humble Library page.
$('div.text-holder').children('h2').each(function() {
    $(this).click();
    $('h4:contains(EPUB)').click();
});
It should download every EPUB file on the page. You could replace EPUB with another format and receive similar results for that format.
Or to download a single bundle off of its gamekey page:
$('span.label:contains(EPUB)').click()
Note that either of these solutions should be run from the browser console. Only verified to work in Chrome.
This worked for me to download all books, all versions:
$('div.download-buttons div.js-start-download a').each(function() {
    $(this).click();
});
Here's my version that just creates wget commands to paste into a terminal for every link on the page that points to Humble's file domain; it worked great on the books page that I tried.
cmds = "";
for (a of document.getElementsByTagName("a")) {
    if (a.href.startsWith("https://dl.humble.com")) cmds += "wget --content-disposition \"" + a.href + "\"\n";
}
console.log(cmds);
https://gist.github.com/azdle/7317289a6f0401b6a95e2b568bc1a806
Chrome seems to truncate long URLs logged with console.log. If you see URLs ending with "…" when using @azdle's script, replace console.log with console.dir.
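Another workaround, assuming your DevTools console provides the copy() helper (Chrome's does): put the whole string on the clipboard instead of printing it.
copy(cmds); // DevTools console helper, only available when run from the console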
Before finding this gist, I came up with this – it's almost the same thing, it just uses -O filename instead of --content-disposition:
var str=""; document.querySelectorAll("a[href*='dl.humble.com']").forEach(link => str += (`wget "${link.href}" -O "${link.href.replace('https://dl.humble.com/', '').replace(/\?.*/, '')}"; `)); console.dir(str)
https://gist.github.com/helb/fc326be114a225a2c408471bef890ee8
I've been working on a Perl script to do the downloads. You will need to log in via the website first and grab the session cookie.
If you want to run it you'll need to manually tweak it a bit first: HumbleBundleDL
There is a working one (at least as I write this) from @tlc
https://gist.github.com/tlc/96292166c7253f86565f0d18e5f8ec41
I used
$('div.js-start-download a').each(function(){ $(this).trigger('click') });
for downloading all formats of all books just now.
Building on what @azdle wrote, I have modified the script to only select PDF files and changed the syntax for Windows PowerShell's wget command:
cmds = "";
function removeExtra(a2){
    a2 = a2.replace('https://dl.humble.com/','');
    a2 = a2.substring(0, a2.indexOf('.'));
    return a2;
}
for (a of document.getElementsByTagName("a")) {
    if (a.href.startsWith("https://dl.humble.com") && a.href.includes("pdf")) cmds += "wget \"" + a.href + "\" -Outfile " + removeExtra(a.href) + ".pdf \n";
}
console.log(cmds);
It's ugly but it works:
- Iterates over each anchor tag
- Only selects if the URL starts with 'https://dl.humble.com' and contains 'pdf' (change for EPUB or other file type).
- Then names the Outfile the same as the URL without 'https://dl.humble.com' or anything after the first '.', then appends .pdf at the end (again, replace with any extension you prefer). Thus the file is named after its title, without caps or spaces.
- Finally paste all console logs in a PowerShell window in the directory to save and they will automatically download.
Thanks @azdle, couldn't have done it without your code to start
FYI, for regular wget (e.g. Unix, Linux, Mac), it's just -O, not -Outfile (WHY does PowerShell have to be different?). So you need to modify the above to change -Outfile to -O.
Mac doesn't have wget installed by default, though; in that case, use curl and modify the if statement to be:
if (a.href.startsWith("https://dl.humble.com") && a.href.includes("pdf")) cmds += "curl \"" + a.href + "\" -o " + removeExtra(a.href) + ".pdf \n";
This code works if you set Firefox to save PDFs instead of previewing them (Firefox > Preferences > Applications > Adobe PDF document : Save File):
function Book(title, author, formats) {
    this.title = title;
    this.author = author;
    this.formats = formats;
};

// Change this to non-zero to download
var seconds_between_switch_book = 0; // 10;
var seconds_between_download = 0; // 3;

var books = [];
var rows = document.querySelectorAll('.subproduct-selector');
rows.forEach(function(item, item_index) {
    setTimeout(function() {
        item.click();
        var title = item.querySelectorAll('h2')[0].title;
        var author = item.querySelectorAll('p')[0].innerText;
        var formats = [...document.querySelectorAll('div.js-download-button')].map(
            download_item => download_item.querySelectorAll('h4')[0].innerText
        );
        books.push(new Book(title, author, formats));
        document.querySelectorAll('div.js-download-button').forEach(function(download_item, download_index){
            setTimeout(function() {
                var format = download_item.querySelectorAll('h4')[0].childNodes[1].data;
                console.log(item_index, download_index, title, format);
                // uncomment this to download
                //download_item.click();
            }, seconds_between_download * 1000 * download_index);
        });
    }, seconds_between_switch_book * 1000 * item_index);
});
setTimeout(function(){
    console.table(books);
    copy(books);
}, (rows.length + 1) * 1000 * seconds_between_switch_book);
So I'm currently downloading just about everything to put in a Calibre library. Since some of the bundles have some repeat content (looking at you, Make) I updated the @KurtBurgess script to test the working directory for a copy of the current file and skip it if present:
cmds = "";
function buildCommand(a, ext) {
    let filename = removeExtra(a.href);
    ext = '.' + ext;
    cmds += "If(Test-Path -Path \"" + filename + ext + "\") {Write-Warning \"" + filename + ext + " exists, skipping \"} Else { wget \"" + a.href + "\" -Outfile " + filename + ext + "}\n";
}
function removeExtra(a2){
    a2 = a2.replace('https://dl.humble.com/','');
    a2 = a2.substring(0, a2.indexOf('.'));
    return a2;
}
for (a of document.getElementsByTagName("a")) {
    if (a.href.startsWith("https://dl.humble.com") && a.href.includes("pdf")) buildCommand(a, 'pdf');
    if (a.href.startsWith("https://dl.humble.com") && a.href.includes("epub")) buildCommand(a, 'epub');
    if (a.href.startsWith("https://dl.humble.com") && a.href.includes("cbz")) buildCommand(a, 'cbz');
}
console.log(cmds);
Next steps: adding a bash variant, and seeing if I can remove the repeated if statements.
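Untested, but collapsing the three ifs into a single loop over the wanted extensions might look something like this:
cmds = "";
function removeExtra(a2) {
    a2 = a2.replace('https://dl.humble.com/', '');
    a2 = a2.substring(0, a2.indexOf('.'));
    return a2;
}
/* list the formats once instead of one if per format */
var extensions = ['pdf', 'epub', 'cbz'];
for (a of document.getElementsByTagName("a")) {
    if (!a.href.startsWith("https://dl.humble.com")) continue;
    for (ext of extensions) {
        if (a.href.includes(ext)) {
            var filename = removeExtra(a.href) + '.' + ext;
            cmds += "If(Test-Path -Path \"" + filename + "\") {Write-Warning \"" + filename + " exists, skipping \"} Else { wget \"" + a.href + "\" -Outfile " + filename + "}\n";
        }
    }
}
console.log(cmds);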
var pattern = /(MOBI|EPUB|PDF( ?\(H.\))?|CBZ|Download)$/i;
var nodes = document.getElementsByTagName('a');
var downloadCmd = '';
for (i in nodes) {
    var a = nodes[i];
    if (a && a.text && pattern.test(a.text.trim()) && a.attributes['href']) {
        downloadCmd += a.attributes['href'].value + "\n";
    }
}
var output = document.createElement("pre");
output.textContent = downloadCmd;
document.getElementById("papers-content").prepend(output);
Copy/paste the links into one txt file and run wget:
wget --no-check-certificate --content-disposition -r -H -np -nH -N --cut-dirs=1 -e robots=off -l1 -i ./linksfilename.txt -B 'https://dl.humble.com/'
A modified version of @kellerkindt's:
var nodes_a = document.querySelectorAll('.downloads a:not(.dlmd5)');
for (node of nodes_a) {
    console.log('wget --content-disposition', node.href);
}
If you're using the above, you may need to place the generated link in double quotes so your shell interprets the ampersand literally. I tried to tweak this but I hit an issue with whitespace which would be easy for someone who actually knows JavaScript to fix. Sadly this person is not me.
var nodes_a = document.querySelectorAll('.downloads a:not(.dlmd5)');
for (node of nodes_a) {
    var tmp = node.href;
    tmp = tmp.replace(/ /g, '');
    console.log('wget --content-disposition \"' + tmp + "\"");
}
Maybe this works. Apologies for the hackiness. I'm sure a better alteration is possible, but like I say, I don't know JavaScript.
I like my files to be organized, so here's my take on it.
const commands = [];
document.querySelectorAll('.row').forEach(row => {
    const bookTitle = row.dataset.humanName;
    [...row.querySelectorAll('.downloads .flexbtn a')].forEach(el => {
        const downloadLink = el.href;
        const fileName = /\.com\/([^?]+)/.exec(downloadLink)[1];
        commands.push(`curl --create-dirs -o "${bookTitle}/${fileName}" "${downloadLink}"`);
    });
});
console.log(commands.join('; '));
Instead of wget, this uses curl, because wget's -O does not create directories automatically (and while -P does, -O and -P cannot be used together).
The resulting directory tree is like this:
.
├── Advanced Penetration Testing
│   ├── advancedpenetrationtesting.epub
│   └── advancedpenetrationtesting.pdf
├── Applied Cryptography: Protocols, Algorithms and Source Code in C, 20th Anniversary Edition
│   ├── applied_cryptography_protocols_algorithms_and_source_code_in_c.epub
│   └── applied_cryptography_protocols_algorithms_and_source_code_in_c.pdf
└── Cryptography Engineering: Design Principles and Practical Applications
    ├── cryptography_engineering_design_principles_and_practical_applications.epub
    ├── cryptography_engineering_design_principles_and_practical_applications.pdf
    └── cryptography_engineering_design_principles_and_practical_applications.prc
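If you'd rather stay with wget, a variation (untested) is to emit a mkdir -p in front of each command so the target directory exists before wget's -O writes into it:
const wgetCommands = [];
document.querySelectorAll('.row').forEach(row => {
    const bookTitle = row.dataset.humanName;
    [...row.querySelectorAll('.downloads .flexbtn a')].forEach(el => {
        const downloadLink = el.href;
        const fileName = /\.com\/([^?]+)/.exec(downloadLink)[1];
        // mkdir -p creates the book's directory if missing, then wget -O writes into it
        wgetCommands.push(`mkdir -p "${bookTitle}" && wget -O "${bookTitle}/${fileName}" "${downloadLink}"`);
    });
});
console.log(wgetCommands.join('; '));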
I took @jmerle's code and changed the last line:
console.log(commands.join('; '));
to:
console.log(commands.join(' && '));
That way, it didn't try to download everything at once.
If you want to verify your downloads, here's the code to make the md5 hashes visible:
var md5_links = document.querySelectorAll(".dlmd5");
for (i in md5_links) {
    md5_links[i].click();
}
OR...
If you are like me and have way too many book bundles, you might be interested in something like the following code.
function getTitle() {
    var re = /^Humble\ Book\ Bundle\:\ (.*)\ \(/g;
    return re.exec(document.title)[1];
}

function showHashes() {
    document.querySelectorAll('.dlmd5').forEach(md5 => {
        if (md5.innerText.trim() == 'md5') {
            md5.click();
        }
    });
}

function gatherInfo() {
    const data = [];
    const bundleTitle = getTitle();
    showHashes();
    document.querySelectorAll('.row').forEach(row => {
        const bookTitle = row.dataset.humanName;
        [...row.querySelectorAll('.downloads .download')].forEach(dl => {
            const downloadLink = dl.querySelector('.flexbtn a').href;
            const filename = /\.com\/([^?]+)/.exec(downloadLink)[1];
            const md5 = dl.querySelector('a.dlmd5').innerText.trim();
            data.push({
                "bundleTitle": bundleTitle,
                "bookTitle": bookTitle,
                "filename": filename,
                "downloadLink": downloadLink,
                "md5": md5
            });
        });
    });
    return data;
}

function downloadBookBundle() {
    const commands = [];
    const md5Sums = [];
    const info = gatherInfo();
    for (var i in info) {
        bundleTitle = info[i]["bundleTitle"];
        bookTitle = info[i]["bookTitle"];
        filename = info[i]["filename"];
        downloadLink = info[i]["downloadLink"];
        md5 = info[i]["md5"];
        commands.push(`curl --create-dirs -o "${bundleTitle}/${bookTitle}/${filename}" "${downloadLink}"`);
        md5Sums.push(`${md5} ${bundleTitle}/${bookTitle}/${filename}`);
    }
    console.log(commands.join(' && '));
    console.log(md5Sums.join('\n'));
}

downloadBookBundle();
It is based upon @jmerle's approach and is also forked here: https://gist.github.com/fsteffek/bf4ac1e3d2601629a6c9cca94b5649f6.
What does it do?
- It prints the curl command line to download your Humble Book Bundle. I modified it so each bundle is saved into a separate folder:
.
├── Bundle Name
│   └── Book Name
│       └── Files
└── More Bundles
- It prints the content of an md5 file, which md5sum can read/check. Paste it into a file like hb_all_books.md5
...
5b3e6de1fc4c45be45b1299ea50a6a7d Essential Knowledge by MIT Press/Cloud Computing/cloudcomputing.epub
a14391f6971da830d064c2c0fd132019 Essential Knowledge by MIT Press/Cloud Computing/cloudcomputing.mobi
...
... and check it with md5sum -c hb_all_books.md5.
Essential Knowledge by MIT Press/Cloud Computing/cloudcomputing.epub: OK
Essential Knowledge by MIT Press/Cloud Computing/cloudcomputing.mobi: OK
...
Feel free to tell me how to make this script more readable, convenient and generally just better.
My JavaScript fork of this script is still working today: https://gist.github.com/zuazo/a91ecbb97b90ef3ef9ce8caf361199a2
@oxguy3 Very nice. The only other thing I'd love to be able to get is the correctly formatted book titles to save the books under.... Anyone take a swing at that?
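Not tested recently, but assuming the library page still puts the human-readable title in a data-human-name attribute on each .row (the same attribute the scripts above read), something like this prints title/filename pairs you could use for renaming:
document.querySelectorAll('.row').forEach(row => {
    const title = row.dataset.humanName; // human-readable book title
    row.querySelectorAll('.downloads a[href*="dl.humble.com"]').forEach(a => {
        const fileName = /\.com\/([^?]+)/.exec(a.href)[1]; // filename portion of the download URL
        console.log(title + ' -> ' + fileName);
    });
});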