@mrbid
Last active April 8, 2024 00:30
[Itch.io Mirror] Share your favourite sources of free 3D content for games.

Mirrored from: https://itch.io/t/3519795/share-your-favourite-sources-of-free-3d-content-for-games#post-9386786

Websites where you can download free and paid human-generated 3D assets:

https://downloadfree3d.com (my favourite)
https://www.turbosquid.com
https://www.cgtrader.com
https://sketchfab.com
https://www.thingiverse.com
https://free3d.com
https://poly.cam/explore
https://3d.si.edu/explore

Although these days I am more into 3D content from generative AI (text-to-3D):

https://imageto3d.org
https://meshy.ai
https://lumalabs.ai/genie
https://www.sudo.ai/3dgen
https://www.tripo3d.ai/

So much so that I have created two asset packs of hand-picked content from two of these generative services:

https://archive.org/details/@mrbid

https://archive.org/details/meshy-collection-1.7z (800 unique assets)
https://archive.org/details/luma-generosity-collection-1.7z (3,700 unique assets)

Details concerning Generative AI:

At the moment the forefront/SOTA of this technology is the ThreeStudio project.

Typically, Stable Diffusion is used to generate consistent images of the same object from different viewing angles; regular Stable Diffusion models are not capable of this on their own, so Zero123++ is used for that purpose. Once these images of the object have been produced from different view/camera angles, they are fed into a Neural Radiance Field (NeRF), which outputs a point cloud of densities; NerfAcc is the library most projects use for this step. Finally, the point cloud is turned into a triangulated mesh using Nvidia's DMTet.
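As a rough illustration of the multi-view step, here is a minimal sketch of driving Zero123++ through the Hugging Face diffusers custom pipeline published by the zero123plus project (linked further down); the model/pipeline IDs and the scheduler tweak are assumptions based on that project's README, so double check them against the repo.

```python
# Minimal sketch: single input image -> a grid of novel views (Zero123++).
# The model ID and custom pipeline ID below are assumptions taken from the
# zero123plus README; verify against https://github.com/SUDO-AI-3D/zero123plus
import torch
from PIL import Image
from diffusers import DiffusionPipeline, EulerAncestralDiscreteScheduler

pipeline = DiffusionPipeline.from_pretrained(
    "sudo-ai/zero123plus-v1.1",                      # assumed model ID
    custom_pipeline="sudo-ai/zero123plus-pipeline",  # assumed custom pipeline
    torch_dtype=torch.float16,
).to("cuda")

# The project recommends a trailing-timestep Euler-Ancestral scheduler.
pipeline.scheduler = EulerAncestralDiscreteScheduler.from_config(
    pipeline.scheduler.config, timestep_spacing="trailing"
)

cond = Image.open("object.png")                      # a single front-facing image
views = pipeline(cond, num_inference_steps=75).images[0]
views.save("multiview_grid.png")                     # several fixed camera angles in one grid
```

The resulting grid of views is what would then be handed to the NeRF/NerfAcc reconstruction stage and finally meshed with DMTet.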

Stable-Dreamfusion can be credited as the project that really kicked off this academic field of text-to-3D solutions, and while pre-canned image-to-3D solutions are available today, they tend not to perform quite as well as most public text-to-3D solutions.

If you are interested in learning more about generative 3D, here are some links you can follow up on:

Various papers and Git repositories related to the topic of text-to-3D:
https://paperswithcode.com/task/text-to-3d
https://github.com/topics/text-to-3d

Pre-canned solutions that execute the entire process for you:
https://github.com/threestudio-project/threestudio (ThreeStudio)
https://github.com/bytedance/MVDream-threestudio
https://github.com/THU-LYJ-Lab/T3Bench

https://arxiv.org/pdf/2209.14988.pdf (DreamFusion)
https://dreamfusion3d.github.io/

https://arxiv.org/pdf/2211.10440.pdf (Magic3D)
https://research.nvidia.com/labs/dir/magic3d/

https://arxiv.org/pdf/2305.16213.pdf (ProlificDreamer)
https://arxiv.org/pdf/2106.09685.pdf (has a LoRA step)
https://ml.cs.tsinghua.edu.cn/prolificdreamer/

https://arxiv.org/pdf/2303.13873.pdf (Fantasia3D)
https://fantasia3d.github.io/

https://research.nvidia.com/labs/toronto-ai/ATT3D/
https://research.nvidia.com/labs/toronto-ai/GET3D/

Stable Diffusion:
https://easydiffusion.github.io/
https://civitai.com/
https://nightcafe.studio
https://starryai.com/
https://dreamlike.art/
https://www.mage.space/
https://www.midjourney.com/showcase
https://lexica.art/

Zero-shot generation of consistent images of the same object:
https://github.com/cvlab-columbia/zero123
https://zero123.cs.columbia.edu/

https://github.com/SUDO-AI-3D/zero123plus

https://github.com/One-2-3-45/One-2-3-45
https://one-2-3-45.github.io/

https://github.com/SUDO-AI-3D/One2345plus
https://sudo-ai-3d.github.io/One2345plus_page/

https://github.com/bytedance/MVDream
https://mv-dream.github.io/

https://liuyuan-pal.github.io/SyncDreamer/
https://github.com/liuyuan-pal/SyncDreamer

https://www.xxlong.site/Wonder3D/
https://github.com/xxlong0/Wonder3D

The above zero-shot generation models tend to be trained on a dataset of 3D objects. At the moment Objaverse-XL is the largest such dataset, at 10+ million objects, although this does include data from Thingiverse, which has no color or texture information. (These are datasets of download links to free 3D content, not datasets of the actual content itself; see the small download sketch after the links below.)
https://github.com/allenai/objaverse-xl
https://objaverse.allenai.org/
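
If you want to grab a few of those objects programmatically, here is a minimal sketch using the original Objaverse 1.0 Python package (pip install objaverse); the load_uids/load_objects calls are assumptions based on that package, and Objaverse-XL ships a different API, so check the repos above.

```python
# Minimal sketch: download a handful of Objaverse 1.0 models locally.
# Assumes `pip install objaverse`; Objaverse-XL exposes a different API,
# so check the repo linked above before relying on these calls.
import random
import objaverse

uids = objaverse.load_uids()          # object UIDs (metadata only, no geometry yet)
sample = random.sample(uids, 5)       # grab a small random subset

# Returns {uid: local_path} after downloading the GLB files.
objects = objaverse.load_objects(uids=sample, download_processes=4)
for uid, path in objects.items():
    print(uid, "->", path)
```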

The Neural Radiance Field (NeRF):
https://github.com/NVlabs/instant-ngp
https://github.com/Linyou/taichi-ngp-renderer
https://docs.nerf.studio/
https://github.com/nerfstudio-project/nerfacc
https://github.com/eladrich/latent-nerf
https://github.com/naver/dust3r

(CPU NeRF below)
https://github.com/Linyou/taichi-ngp-renderer
https://github.com/kwea123/ngp_pl
https://github.com/Kai-46/nerfplusplus

NeRF to 3D Mesh:
https://research.nvidia.com/labs/toronto-ai/DMTet/
https://github.com/NVIDIAGameWorks/kaolin/blob/master/examples/tutorial/dmtet_tu...
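
DMTet itself lives in Nvidia's Kaolin library (see the tutorial link above), but the general density-field-to-mesh step can be illustrated with plain marching cubes; the sketch below uses scikit-image and trimesh on a toy sphere density and is only a stand-in for DMTet, not the method these projects actually use.

```python
# Minimal sketch: density volume -> triangle mesh via marching cubes.
# A generic stand-in for DMTet (which lives in Nvidia's Kaolin library).
import numpy as np
from skimage import measure   # pip install scikit-image
import trimesh                # pip install trimesh

# Toy density field: a sphere of radius 0.4 inside a 64^3 unit cube.
grid = np.linspace(-0.5, 0.5, 64)
x, y, z = np.meshgrid(grid, grid, grid, indexing="ij")
density = 0.4 - np.sqrt(x**2 + y**2 + z**2)   # positive inside the sphere

# Extract the zero level set as a triangulated surface.
verts, faces, normals, _ = measure.marching_cubes(density, level=0.0)

mesh = trimesh.Trimesh(vertices=verts, faces=faces, vertex_normals=normals)
mesh.export("sphere.obj")
print(mesh)   # prints vertex/face counts
```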

A lot of good resources can be found at: https://huggingface.co/

I've also written a Medium article which has a more wordy version of what I have written here with some image examples: https://james-william-fletcher.medium.com/text-to-3d-b607bf245031

If you are into the voxel art aesthetic, you can voxelize any 3D asset using the free and open-source Drububu.com Voxelizer or ObjToSchematic.
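
Both of those are point-and-click tools; if you would rather script it, a minimal sketch using trimesh's voxelization support (my own pick of tooling, not what those sites use) could look like this:

```python
# Minimal sketch: voxelize a mesh in Python with trimesh.
# A scriptable alternative to GUI tools like Drububu's voxelizer or ObjToSchematic.
import trimesh   # pip install trimesh

mesh = trimesh.load("model.obj", force="mesh")

# Pick a voxel edge length relative to the model size (~64 voxels across).
pitch = mesh.extents.max() / 64.0
voxels = mesh.voxelized(pitch=pitch)   # surface voxelization
voxels = voxels.fill()                 # fill the interior

print("occupied voxels:", voxels.filled_count)

# Export the voxel grid back out as boxy geometry for viewing.
voxels.as_boxes().export("model_voxels.ply")
```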

I maintain a project called Woxel that allows users to create voxel art in the web browser and export it as a 3D PLY file. It's like a simplified MagicaVoxel/Goxel but with a Minecraft-style control system.

Woxel isn't the only voxel editor that runs in a web browser; more are listed in my article here, and I have a more comprehensive list of voxel editors here.

Various other related projects to text-to-3D and image-to-3D:
https://github.com/pals-ttic/sjc
https://github.com/ashawkey/fantasia3d.unofficial
https://github.com/Gorilla-Lab-SCUT/Fantasia3D
https://github.com/baaivision/GeoDream
https://github.com/thu-ml/prolificdreamer
https://github.com/yuanzhi-zhu/prolific_dreamer2d
https://github.com/nv-tlabs/GET3D
https://github.com/alvinliu0/HumanGaussian
https://github.com/hustvl/GaussianDreamer
https://github.com/songrise/AvatarCraft
https://github.com/Gorilla-Lab-SCUT/tango
https://github.com/chinhsuanwu/dreamfusionacc (simplified implementation)
https://github.com/yyeboah/Awesome-Text-to-3D
https://github.com/atfortes/Awesome-Controllable-Generation
https://github.com/pansanity666/Awesome-Avatars
https://github.com/pansanity666/Awesome-pytorch-list
https://github.com/StellarCheng/Awesome-Text-to-3D
https://nvlabs.github.io/eg3d/
https://research.nvidia.com/labs/toronto-ai/nglod/
https://pratulsrinivasan.github.io/nerv/
https://jasonyzhang.com/ners/
https://github.com/eladrich/latent-nerf
https://github.com/nv-tlabs/LION
https://github.com/abhishekkrthakur/StableSAM
https://github.com/NVlabs/nvdiffrast
https://github.com/NVlabs/nvdiffrec
https://www.nvidia.com/en-us/omniverse/
https://blogs.nvidia.com/blog/gan-research-knight-rider-ai-omniverse/
https://github.com/maximeraafat/BlenderNeRF
https://github.com/colmap/colmap
https://github.com/EPFL-VILAB/omnidata/tree/main/omnidata_tools/torch
https://github.com/isl-org/MiDaS
https://github.com/isl-org/ZoeDepth
https://github.com/nv-tlabs/nglod
https://github.com/orgs/NVIDIAGameWorks
https://github.com/3DTopia/3DTopia
https://github.com/3DTopia/threefiner
https://github.com/liuyuan-pal/PointUtil
https://github.com/facebookresearch/co3d
https://github.com/BladeTransformerLLC/gauzilla
https://github.com/pierotofy/OpenSplat
https://github.com/chenhsuanlin/bundle-adjusting-NeRF
https://github.com/maturk/BARF-nerfstudio
https://lingjie0206.github.io/papers/NeuS/
https://github.com/CompVis/stable-diffusion
https://github.com/NVlabs/eg3d
https://github.com/rupeshs/fastsdcpu
https://github.com/3DTopia/LGM
https://github.com/VAST-AI-Research/TripoSR
https://github.com/ranahanocka/point2mesh
https://github.com/Fanghua-Yu/SUPIR
https://github.com/Mikubill/sd-webui-controlnet
https://huggingface.co/blog/MonsterMMORPG/supir-sota-image-upscale-better-than-m...
https://depth-anything.github.io/
https://github.com/philz1337x/clarity-upscaler
