This code is a modification of quixel.py by ad044.
The key change is that it lets you download your purchased assets by category and sub-category instead of all the assets at once. It shows the number of assets it will download and asks for confirmation.
I recommend running the same script twice to make sure no asset failed to download. If it reports 0 assets on the second run, you can move on to the next category.
Please avoid downloading everything in one go to avoid stress on the servers.
If someone ends up downloading everything, please tell us in the comments the total disk space it takes.
- copy dlMegascans.py and save it on disk.
- copy (preferably download) the very long ms_asset_categories.json found here (kept separate so this page stays viewable): "https://gist.github.com/maalrron/3ebe6514f8fba184311aa63fb68f841d"
- Get your authentication token from Megascans:
- Log in at https://quixel.com
- copy the script from below (gettoken.js)
- Open devtools (F12) -> go to the "Console" tab
- Paste in the script and press Enter. (If the browser won't let you paste: on Firefox, see https://www.youtube.com/watch?v=ekN2i953Nas at 01:14; on Chrome, type "allow pasting" then press Enter first.) If it returns "undefined", disable your ad blocker.
- copy the token; it's the very long line (or multiple lines) of digits and characters.
- paste it into the token line of dlMegascans.py (between quotes).
- Set the path in the "download_path" line, e.g. "/home/Downloads".
- Set the path in the "json_file_path" line, e.g. "/home/Downloads/ms_asset_categories.json".
- Set the path where you want the cache.txt file to be created.
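Once filled in, the configuration lines in dlMegascans.py might look something like this (the token and paths are placeholders, and "cache_file_path" is an assumed name for the cache.txt setting, which may differ in the actual script):

```python
# Illustrative configuration values for dlMegascans.py (names per the steps above;
# "cache_file_path" is an assumed variable name, not confirmed from the script).
token = "PASTE_YOUR_VERY_LONG_TOKEN_HERE"  # from gettoken.js (step 3)
download_path = "/home/Downloads"  # where assets are saved
json_file_path = "/home/Downloads/ms_asset_categories.json"
cache_file_path = "/home/Downloads/cache.txt"  # where cache.txt is created
target_category = "3D plant/grass"  # category/sub-category, separated by "/"
```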
- Set target_category to the category and sub-categories you want, separated with "/", one category or sub-category at a time:
- It works by matching every word against the categories of each asset in the JSON file.
- The order doesn't matter, but the terms must be separated with "/".
- It's not case sensitive and ignores a trailing "s", so "3d plant" will also download what's in "3D Plants".
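The matching described above can be sketched roughly like this (a guess at the logic, not the script's exact code): lowercase everything, strip a trailing "s", and require every "/"-separated term to appear among an asset's categories, in any order.

```python
def normalize(term):
    """Lowercase and drop a trailing 's' so '3D Plants' matches '3d plant'."""
    term = term.strip().lower()
    return term[:-1] if term.endswith("s") else term

def matches(target_category, asset_categories):
    """True if every '/'-separated term of target_category appears
    (order-independent, case-insensitive) in the asset's category list."""
    wanted = {normalize(t) for t in target_category.split("/")}
    have = {normalize(c) for c in asset_categories}
    return wanted <= have
```

For example, `matches("3d plant/grass", ["3D Plants", "Grass", "Wild"])` is True, while an asset categorized only under "Surfaces" would be skipped.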
-
Like the original code, it creates a cache.txt file (in the same location as the script, or wherever your terminal's working directory is) to avoid downloading the same assets multiple times.
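The cache mechanism is presumably along these lines (finished asset IDs appended to cache.txt and skipped on the next run; the function names here are illustrative, not the script's own):

```python
import os

def load_cache(cache_path):
    """Return the set of asset IDs already downloaded, if the cache exists."""
    if not os.path.exists(cache_path):
        return set()
    with open(cache_path) as f:
        return {line.strip() for line in f if line.strip()}

def mark_downloaded(cache_path, asset_id):
    """Append a finished asset ID so reruns skip it."""
    with open(cache_path, "a") as f:
        f.write(asset_id + "\n")
```

This is also why rerunning the script is cheap: anything already in cache.txt is skipped, so a second pass only picks up assets that failed the first time.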
-
Find this line: `with concurrent.futures.ThreadPoolExecutor(max_workers=10) as executor:`. max_workers is the number of files downloaded at the same time. 10 is a good compromise between speed and stress on the network/server; higher and you might get failed downloads. Lower it if your network can't handle 10, but I don't recommend going higher (and definitely not higher than 20).
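For context, the surrounding code presumably looks something like this minimal sketch (the download function and asset list are placeholders standing in for the script's real logic):

```python
import concurrent.futures

def download_asset(asset_id):
    # Placeholder for the script's real per-asset download logic.
    return asset_id

assets = ["id1", "id2", "id3"]

# max_workers = number of simultaneous downloads;
# 10 balances speed against server/network load.
with concurrent.futures.ThreadPoolExecutor(max_workers=10) as executor:
    results = list(executor.map(download_asset, assets))
```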
-
If someone can figure out how to choose the download resolution (lowest, mid, highest), I would love to have that feature.
Disclaimer: I'm not a programmer, I'm a ChatGPT steerer. The code was only lightly tested but seems to work fine.
-
If you get `{'statusCode': 401, 'error': 'Unauthorized', 'message': 'Expired token', 'attributes': {'error': 'Expired token'}}`, simply do step 3 again. This shouldn't happen if you run the script shortly after setting it up. I don't know how often the token changes, but it's on the order of hours, so no rush.
-
If you get an error like `Failed to download asset @@@, status code: 502 Response: 502 Bad Gateway`, simply wait for the other assets to finish downloading and run the script again.
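Rerunning by hand works because of the cache, but a retry could also be automated. A minimal sketch, assuming the download call returns a response object with a `status_code` attribute (the function name and parameters here are hypothetical, not part of the script):

```python
import time

def retry_on_502(download_fn, asset_id, attempts=3, wait=5):
    """Call download_fn(asset_id), retrying a few times on 502 Bad Gateway.

    download_fn is assumed to return an object with a .status_code attribute.
    """
    for _ in range(attempts):
        resp = download_fn(asset_id)
        if resp.status_code != 502:
            return resp
        time.sleep(wait)  # give the gateway a moment before retrying
    return resp  # last response, still 502 after all attempts
```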
@WAUthethird hi William.
I'm having great success using your script, so a massive thanks for your effort in putting it out there.
I just want to enquire about one thing..
You mentioned that it was done primarily to obtain the data for archival purposes, but I note that the metadata PY files (large and small) are quite comprehensive..
Is there any way the contents of this PY file can be leveraged to create a usable folder structure? With my limited knowledge, I can see from a cursory look that a typical zip file (with a nonsensical name) is written alongside an asset 'name' (in the metadata PY file).
... This provides something... but compared to some other scripts available, there is no real folder hierarchy..
Is there anything we can realistically do about creating this, do you think? 🤔
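One rough approach, assuming the metadata can be loaded as a dict mapping each zip filename to an asset name and category list (the exact shape of the metadata file is not confirmed here, so all names below are hypothetical), would be to move each zip into a path built from its categories:

```python
import os
import shutil

def organize(zip_dir, metadata):
    """Move each downloaded zip into <category>/<sub-category>/<asset name>.zip.

    metadata is assumed to look like:
        {"abc123.zip": {"name": "mossy_rock", "categories": ["3D", "Rock"]}}
    (an assumed shape, not verified against the real metadata PY file).
    """
    for zip_name, info in metadata.items():
        src = os.path.join(zip_dir, zip_name)
        if not os.path.exists(src):
            continue  # this asset wasn't downloaded (yet)
        dest_dir = os.path.join(zip_dir, *info["categories"])
        os.makedirs(dest_dir, exist_ok=True)
        shutil.move(src, os.path.join(dest_dir, info["name"] + ".zip"))
```

This only renames and sorts the zips after the fact, so it wouldn't interfere with the downloader or its cache.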