var saveBlob = (function () {
  // Create a single hidden anchor element and reuse it for every save
  var a = document.createElement("a");
  document.body.appendChild(a);
  a.style = "display: none";
  return function (blob, fileName) {
    // Point the anchor at an object URL for the blob and trigger a click
    var url = window.URL.createObjectURL(blob);
    a.href = url;
    a.download = fileName;
    a.click();
    // Release the object URL once the download has been triggered
    window.URL.revokeObjectURL(url);
  };
}());

saveBlob(file, 'test.zip');
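For context, file in the last line is assumed to be a Blob created elsewhere; a minimal, hypothetical sketch:

// Hypothetical example: any Blob works, e.g. one built from a string
var file = new Blob(["hello world"], { type: "text/plain" });
saveBlob(file, "hello.txt");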
This goes through RAM first, does it not? If you have a large blob, such as one larger than the amount of RAM in the system, it can cause slowdowns. Is there a way to write the blob directly to disk without ever going through RAM?
@Wamy-Dev did you actually test that?
I have not verified this, but IIRC, the browser should ask the user how to handle the stream. If you choose to store a file, the browser should actually stream directly to disk.
Let me know if this assumption is wrong.
Yes. This is incorrect, please look here for reference. The blob is stored in memory until memory is either filled up or the blob is done downloading, so files larger than the system's memory will fill it completely, and paged memory will begin to be used, which will cause a lot of problems.
Here's a TypeScript rework for all of you Angular friendoz:
const saveBlob = (function () {
  const a = document.createElement('a');
  document.body.appendChild(a);
  a.setAttribute('style', 'display: none');
  return function (blob: Blob, fileName: string): void {
    const url = window.URL.createObjectURL(blob);
    a.href = url;
    a.download = fileName;
    a.click();
    window.URL.revokeObjectURL(url);
  };
})();

saveBlob(doc, 'fileName');
This has helped me tremendously. Thank you!!
Thanks so much.
Thanks! Here's a refactoring that also removes the anchor element:

const fileUrl = window.URL.createObjectURL(blob)
const anchorElement = document.createElement('a')
anchorElement.href = fileUrl
anchorElement.download = 'Filename.ext'
anchorElement.style.display = 'none'
document.body.appendChild(anchorElement)
anchorElement.click()
// Clean up: remove the temporary anchor and release the object URL
anchorElement.remove()
window.URL.revokeObjectURL(fileUrl)
If you only have a blob URL:
var saveBlob = (function () {
var a = document.createElement("a");
document.body.appendChild(a);
a.style = "display: none";
return function (blob, fileName) {
var url = window.URL.createObjectURL(blob);
a.href = url;
a.download = fileName;
a.click();
window.URL.revokeObjectURL(url);
};
}());
fetch('blob:https://some.blob.url')
  .then((response) => response.blob())
  .then((b) => saveBlob(b, 'file.ext'));
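Side note: if the blob URL was created in this same document, it can be assigned to the anchor's href directly, so there is no need to fetch it back into a second Blob. A sketch, reusing the placeholder URL from above:

var a = document.createElement('a');
a.href = 'blob:https://some.blob.url'; // existing blob URL
a.download = 'file.ext';
a.style = 'display: none';
document.body.appendChild(a);
a.click();
a.remove();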
Is there a data limit using this blob & URL solution?
E.g. can a blob containing a 50 MB zip, or a 5 GB video, be 'download'-saved in the client this way?
If not, is there a client-side way of saving very large blobs?
@ChrisRoald, if you need to deal with data streams that are on the order of client RAM, you should *not* be creating Blobs that store the entire data stream in the first place, as they are inherently in-RAM objects. Instead you should use showSaveFilePicker / FileSystemWritableFileStream, or for Firefox this ServiceWorker-based polyfill, pending proper support.
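For anyone who wants to try that route, here is a minimal sketch, assuming a Chromium-based browser with the File System Access API; the URL and function name are placeholders:

// Stream a download straight to a user-chosen file on disk, so the
// payload never has to be buffered in RAM as a single Blob.
// Note: showSaveFilePicker must be called during a user gesture (e.g. a click).
async function saveStreamToDisk(url, suggestedName) {
  const handle = await window.showSaveFilePicker({ suggestedName });
  const writable = await handle.createWritable();
  const response = await fetch(url);
  // pipeTo writes chunks to disk as they arrive and closes the file when done
  await response.body.pipeTo(writable);
}

// Hypothetical usage from a click handler:
// button.addEventListener('click', () => saveStreamToDisk('/big-file.zip', 'big-file.zip'));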
life saver!
thanks a bunch!
Hey, there are no limits. This snippet does not actually download the file in JavaScript; it just creates a link and clicks it. The download works through the browser via a stream, the same way it would if you opened any other binary file in the browser.