Apple recently unveiled the Vision Pro. This type of device is primarily designed for augmented reality (AR), in the sense that it composites rendered 3D models into the live camera view.
Normally with pass-through AR, one would expect to get the camera feed as an image, and perhaps an environment cubemap generated from the camera, to apply proper lighting to the objects.
Evidently, this becomes more complex with the type of pass-through that Apple aims for, which involves actually mixing depth with real-life objects and interacting with their lighting (as in, a hand can pass through 3D objects).
For this, Apple imposes a specific API to which the game engine must cede entire control of rendering, including:
- Models
- Materials (and shaders)
- Lights?
All of this is then handled through Apple's own APIs.
So, how can this be made to work seamlessly in Godot?
Godot has a singleton named RenderingServer. All rendering goes through it. It is very high level in nature.
The general aim of an implementation to support these types of devices would consist of the following goals:
- Duplicating all textures, materials and meshes between Godot internal rendering and the AR API.
- Textures backed by viewports should, when rendered to, also be updated on the AR side (so 2D content rendered in Godot can appear on a texture in the AR world).
- When creating a scenario in RenderingServer, it must be possible to specify whether the scenario represents the device's native AR world.
If you look at the default implementation for RenderingServer in Godot (RenderingServerDefault), it is quite modular:
As an example, everything related to textures, materials, and meshes is routed to RSG::mesh_storage, but the actual class is virtual. The virtual implementations can all be found in the engine source.
Ideally, an implementation for the Vision Pro would take the following steps:
Instead of pointing the macros to RSG::mesh_storage, for example, there could be a RenderingMeshStorage* pointer (a class member) that points to the same place inside RenderingServerDefault. This pointer is used by default.
This would allow an implementation to supply custom "wrapper" versions of the storage and scene classes.
As an example, a new RenderingMeshStorageVisionPRO can be implemented and set as the class pointed to in RenderingServerDefault when running on the Vision Pro.
This wrapper would have two functions:
- Register received meshes on the Vision Pro side
- Forward the calls to the actual implementation of RenderingMeshStorage in RSG::mesh_storage
If done for textures, materials, etc. as well, this would by default duplicate everything Godot has on the Vision Pro side.
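The wrapper idea can be sketched as follows. This is a minimal sketch, not real engine code: the interface is reduced to a single mesh_create() call, and the AR-side registration is a placeholder set standing in for whatever Apple's API requires.

```cpp
#include <cassert>
#include <set>

// Drastically simplified stand-in for Godot's virtual RenderingMeshStorage.
struct RenderingMeshStorage {
	virtual ~RenderingMeshStorage() = default;
	virtual void mesh_create(int p_mesh_id) = 0;
};

// The default storage Godot's renderer already provides (RSG::mesh_storage).
struct RenderingMeshStorageDefault : RenderingMeshStorage {
	std::set<int> meshes;
	void mesh_create(int p_mesh_id) override { meshes.insert(p_mesh_id); }
};

// Hypothetical wrapper used on the Vision Pro: it registers the mesh on the
// AR side first, then forwards the call so Godot keeps its own copy too.
struct RenderingMeshStorageVisionPRO : RenderingMeshStorage {
	RenderingMeshStorage *forward; // the real implementation to forward to
	std::set<int> ar_meshes;       // placeholder for the AR-side registry

	explicit RenderingMeshStorageVisionPRO(RenderingMeshStorage *p_forward) :
			forward(p_forward) {}

	void mesh_create(int p_mesh_id) override {
		ar_meshes.insert(p_mesh_id); // 1. register with the AR API
		forward->mesh_create(p_mesh_id); // 2. forward to the default storage
	}
};
```

Because the wrapper implements the same virtual interface, RenderingServerDefault does not need to know which one it is talking to.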
Godot uses its own shader format. Apple uses MaterialX. Converting from one to the other should not be very hard. The key is to use Godot's ShaderLanguage parser.
This gives you access to the parse tree of the Godot shader, which can be flattened into anything else, making it entirely possible to create a MaterialX material from it in code.
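To illustrate the flattening step, here is a toy version using a made-up two-node expression tree instead of Godot's real ShaderLanguage parse tree; a real converter would walk ShaderLanguage's node types and emit actual MaterialX document nodes rather than a string.

```cpp
#include <cassert>
#include <memory>
#include <string>

// Toy expression tree standing in for Godot's ShaderLanguage parse tree.
struct Node {
	std::string op;   // operator name (e.g. "mul"), empty for a leaf
	std::string name; // leaf name, e.g. a shader input
	std::unique_ptr<Node> a, b;
};

// Flatten the tree into a MaterialX-flavored string. A real implementation
// would build <multiply>/<add>/etc. nodes in a MaterialX document instead.
std::string flatten(const Node &n) {
	if (n.op.empty()) {
		return n.name;
	}
	return "<" + n.op + ">(" + flatten(*n.a) + ", " + flatten(*n.b) + ")";
}
```

The point is that once the parse tree is available, the conversion is a straightforward recursive walk.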
Scene management can be implemented likewise, but with one difference: RSG::scene needs to determine when a scenario is being created for the Vision Pro. If so, the scenario is not created natively in Godot; the Apple APIs are used instead. Unlike storage, calls meant for the AR renderer are not forwarded to a Godot scenario.
The last missing piece is audio. In this case, I think it is harder to do this inside Godot's AudioServer. I suggest creating an AudioStreamPlayerVisionPRO that, depending on whether the scene is in the AR world or not, sends audio either to the Godot AudioServer or to the AR APIs.
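The per-player routing could be sketched like this; the class, the flag, and both sink names are placeholders, since neither Godot nor Apple exposes these exact identifiers.

```cpp
#include <cassert>
#include <string>

// Hypothetical AudioStreamPlayerVisionPRO: rather than modifying AudioServer,
// each player decides per-play where its audio should go.
struct AudioStreamPlayerVisionPRO {
	bool in_ar_world = false;  // set based on where the node's scene lives
	std::string last_sink;     // records where the last play() was routed

	void play() {
		// Placeholder sink names; real code would call into the actual APIs.
		last_sink = in_ar_world ? "apple_ar_audio" : "godot_audio_server";
	}
};
```

Keeping the decision in the player node means the rest of the audio pipeline stays untouched.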
Q: Isn't duplicating wasteful?

A: Yes, but it makes both rendering to texture and rendering to the shared view work seamlessly. For an AAA game, more fine-grained control over what gets duplicated could eventually be added.

Q: Isn't the indirection slower?

A: Not really; it should not be.
It would be great to see Godot supporting Vision Pro.
Currently it seems some experimentation has already been done in this repository:
https://github.com/kevinw/GodotVision