Apple visionOS Q&A from the November 20th session
visionOS notes
Q: Curious what tools or workflows designers are using to mock up things like volumes or immersive spaces considering things like Sketch and Figma are 2D or "window" based?
A: Great question! We find a lot of people start with our visionOS Apple Design Resources especially on Figma (https://www.figma.com/community/file/1253443272911187215) and animate flat views to sell the ideas to partners. Then we've seen folks move to tools like Spline (https://spline.design), Blender, etc. for 3D workflows. In particular, Spline has a visionOS mirror app (https://docs.spline.design/doc/spline-mirror-for-visionos/docaQJC8SwTF) that makes it much more efficient to prototype 3d environments!
Q: I'm generating mesh data in code and then programmatically creating objects in a volume. As the app progresses, I need to switch out some of those meshes. If this is happening fairly rapidly (multiple times per second), would it generally be better to keep a reference to the entity to be able to update it, or should I prefer to scan/parse through a scene and find the piece I want to update to keep the code more "decoupled"?
A: You could add a custom component to the entities you want to track that contains references to your meshes, then swap out the meshes in a system. Create a system that iterates over all entities with your component; that way you won't need to scan through the scene each time or keep a reference around, since the ECS will iterate over the entities with your component automatically.
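A minimal sketch of that component/system pattern, assuming a hypothetical `DynamicMeshComponent` that carries the next mesh an entity should display:

```swift
import RealityKit

// Hypothetical component carrying the next mesh an entity should display.
struct DynamicMeshComponent: Component {
    var nextMesh: MeshResource?
}

// System that visits every entity with the component each update and swaps meshes.
struct DynamicMeshSystem: System {
    static let query = EntityQuery(where: .has(DynamicMeshComponent.self))

    init(scene: RealityKit.Scene) {}

    func update(context: SceneUpdateContext) {
        for entity in context.entities(matching: Self.query, updatingSystemWhen: .rendering) {
            guard var component = entity.components[DynamicMeshComponent.self],
                  let mesh = component.nextMesh,
                  var model = entity.components[ModelComponent.self] else { continue }
            model.mesh = mesh
            entity.components.set(model)
            component.nextMesh = nil
            entity.components.set(component)
        }
    }
}

// Register once, e.g. in your App initializer:
// DynamicMeshComponent.registerComponent()
// DynamicMeshSystem.registerSystem()
```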
Q: I would love to know more about using PhysicsRevoluteJoint (beyond what's in the Apple Docs)?
A: Hi Tim. Thank you for the question. Here is an article which shows you how to create a PhysicsRevoluteJoint in RealityKit: https://developer.apple.com/documentation/realitykit/simulating-physics-joints-in-your-realitykit-app. Is there something specific that you wanted to know about the API?
I know a lot of folks have been posting on reddit.com/r/VisionPro after these events. There's also a Vision Pro Discord Channel: https://discord.com/invite/uPu4cDJGGV
Q: Is there any kind of documentation on transformations and interpolations between different coordinate spaces? Either from VisionPro or from iPhone
A: Hi Juan. Thank you for the question. We have sample code and an accompanying article that talk about moving entities between coordinate spaces in RealityKit: https://developer.apple.com/documentation/realitykit/transforming-entities-between-realitykit-coordinate-spaces There is also a WWDC24 video about coordinate spaces in SwiftUI and RealityKit here: https://developer.apple.com/videos/play/wwdc2024/10153/?time=1178
Q: Is it possible to launch straight into an immersive space from a native app without first opening a window/volume and calling the openImmersiveSpace environment value much like how Unity apps behave on the platform?
A: Hi Tomas. This is possible. In your Info.plist, set "Application Scene Manifest" > "Preferred Default Scene Session Role" to "Immersive Space Application Session Role".
For more detail visit - https://developer.apple.com/documentation/swiftui/immersivespace#Present-an-immersive-space-at-launch
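A minimal sketch of an app whose only declared scene is an ImmersiveSpace, relying on the Info.plist keys described above (everything inside the space is placeholder content):

```swift
import RealityKit
import SwiftUI

// In Info.plist, set "Application Scene Manifest" > "Preferred Default Scene Session Role"
// to "Immersive Space Application Session Role" so this space opens at launch.
@main
struct ImmersiveOnlyApp: App {
    var body: some Scene {
        ImmersiveSpace(id: "Main") {
            RealityView { content in
                // Add your RealityKit entities here.
            }
        }
    }
}
```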
Q: Can we start an app with both a 2D window AND a volumetric window?
A: Quick answer: yes! You can use any combination of 2D and volumetric windows at launch. What we ask from a design side is not to jump people right into full immersion, but to transition gradually into 3D environments.
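One way to sketch this, assuming a hypothetical "Main" 2D window that opens a hypothetical "Volume" volumetric window when it appears (view contents are placeholders):

```swift
import SwiftUI

struct MainWindowView: View {
    @Environment(\.openWindow) private var openWindow
    var body: some View {
        Text("Main window")
            .onAppear { openWindow(id: "Volume") } // open the volume alongside at launch
    }
}

struct VolumeView: View { var body: some View { Text("Volume content") } }

@main
struct MixedScenesApp: App {
    var body: some Scene {
        // A standard 2D window, shown first.
        WindowGroup(id: "Main") {
            MainWindowView()
        }

        // A volumetric window opened alongside it.
        WindowGroup(id: "Volume") {
            VolumeView()
        }
        .windowStyle(.volumetric)
        .defaultSize(width: 0.5, height: 0.5, depth: 0.5, in: .meters)
    }
}
```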
Q: I use OrbitAnimation in my visionOS app to cause a child entity to orbit a parent entity. In visionOS 2, is there a different, recommended way of animating a child entity to orbit a parent entity? I watched the WWDC24 talk about the new timelines feature in Reality Composer Pro, but I wasn't sure if there was a benefit to redoing my orbit animation with the timelines feature.
A: Using a Timeline animation is an alternative way to animate your entities in RCP, and creating an Orbit animation in Timeline is equivalent to creating one in code, so it is up to personal preference. Timeline can be useful for artists and non-coders, so keep that in mind.
Q: Does opening a USDZ object from the Files app generate a special type of volume exclusive to the OS? I’m finding it challenging to create a similar setup, ensuring the object casts shadows and stays anchored to the bottom.
A: Quick Look is the system-level viewer with built-in gestures and easy interactions. If you wish to create something similar, the Model3D API might be the easiest option, but RealityView will provide you with a lot more control. If you wish to add UI elements in 3D space within your viewer, attachments are what you may be looking for.
Q: Hello! I'm a new developer and I'm wanting to dive into the 3D space. What would be a good starting point to start developing 3D applications for visionOS if I haven't used Unity?
A: Hi Dillon. Welcome to developing for visionOS. We have a great set of sample code projects and articles to help you start building apps for visionOS: https://developer.apple.com/documentation/visionos/introductory-visionos-samples
Q: What is recommended best practice for importing a Blender 3D file into RCP? I assume as a .usdz file? Is there a WWDC24 session or other Apple resource that best explains this. I want to make sure I provide the right format/file to RCP from Blender.
A: USDZ, USDA, and USDC are all the same format and can be read by RCP. I personally recommend USDC and USDA when possible, because USDZ will include dependencies like textures that can become duplicated. We do not have any official guidance for third-party software like Blender, but this is a common workflow and I frequently answer questions like this on the developer forums if you want to ask in more detail there: https://developer.apple.com/forums/topics/spatial-computing
Q: Hello! I was wondering if there exists a way to have 3D curved text in Vision Pro apps. I didn't find it in the documentation nor online
A: Hello Davide, You can create your own mesh to do this, or you can use the system provided generateText method. This method is used in the Sample App: https://developer.apple.com/documentation/GroupActivities/customizing-spatial-persona-templates https://developer.apple.com/documentation/realitykit/meshresource/generatetext(_:extrusiondepth:font:containerframe:alignment:linebreakmode:)-5jn3l
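A short sketch of the generateText route; curving the result is left to you (for example, generate per-character meshes and place them along an arc):

```swift
import CoreText
import RealityKit
import UIKit

// Build extruded 3D text with the system generateText API.
func makeTextEntity() -> ModelEntity {
    let mesh = MeshResource.generateText(
        "Hello, visionOS",
        extrusionDepth: 0.02,
        font: .systemFont(ofSize: 0.1),
        containerFrame: .zero,
        alignment: .center,
        lineBreakMode: .byWordWrapping
    )
    return ModelEntity(mesh: mesh, materials: [SimpleMaterial(color: .white, isMetallic: false)])
}
```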
Q: I am working on a Vision Pro app to visualize stresses from a structural analysis. It uses LowLevelMesh to show tetrahedra shaded with ShaderGraph materials. It works fine with 1 tetrahedron, 2 tetrahedra, and 1000 tetrahedra. It crashes with 5000 -- I need 2 million tetrahedra. Do you have suggestions about where I can, in a proprietary fashion, share code and get specific one-on-one help with this?
A: Hi Rich, the Developer Forums (https://developer.apple.com/forums/topics/spatial-computing) are always a great place to get assistance with specific coding questions! As far as low-level mesh goes, perhaps this article (https://developer.apple.com/documentation/realitykit/generating-interactive-geometry-with-realitykit) would be helpful? I believe it creates a mesh with ~250,000 vertices.
Q: An app could use windows, but needs to put a trigger on a virtual object in the real world at a fixed location which does not change between launches of the app. For example, clicking on the side of your desk to move to the next slide. It seems that when the user moves 10 feet away they can't click on the triggers. What should we do?
A: From the UX point of view, we discourage attaching controls to a permanent place in space unless it is absolutely necessary. We want users to have the ability to use your app no matter where they are (and whether there's a desk in front of them or not). If your app relies on a desk, we recommend always letting people select controls via indirect input (gaze and pinch) in addition to direct. More info about that here: https://developer.apple.com/design/human-interface-guidelines/eyes
Q: Hi there! Thanks for organizing this webinar! Fairly technical question: a lot of our devs have a significant amount of code written in modern C++ (17 or up) dialects. Is there an easy way (or perhaps this will be covered today, or documentation is available) on how modern C++ can be integrated in a Swift app and immersive spaces?
A: Swift supports C++ interop. So if you have a large base of C++ code, you can call those functions from Swift. https://www.swift.org/documentation/cxx-interop/
Q: A PAINTING on a wall with a unique identifier / a 3D model trained with ML to recognise the specific physical object.
The PAINTING is recognised and identified, and a SPATIAL VIDEO appears AND plays automatically inside the frame.
How do we implement such a model?
A: Hi Martin, If I understand your question this WWDC talk might help: https://developer.apple.com/videos/play/wwdc2024/10101
Q: Are there any recommended poly/vertex count limits? I've played around with some large models (>3 mil triangles) and they will load on Apple Vision Pro, but not iPhone/iPad (Pro or not). I may be asking the wrong question, as I'm new to AR development.
A: Hi Rob. This depends on the GPU. Check out the tech specs for each GPU. https://developer.apple.com/metal/Metal-Feature-Set-Tables.pdf
Q: Hi, in a fully immersive space, what is the best way to adjust the rotation of a text panel containing information so that it always faces the user in front of a USDZ object, for example?
Currently I am using the BillboardComponent, but it doesn't quite achieve the desired effect because it's anchored at the same position. I want the text panel to rotate entirely around the object, depending on the user's position.
A: Hello Dorian. There's no single API for this. I encourage you to file an enhancement request via http://feedbackassistant.apple.com. It is possible to build a custom component (as in Entity Component System) to do this. You can get the location of the Apple Vision Pro using https://developer.apple.com/documentation/arkit/worldtrackingprovider/4293525-querydeviceanchor and then position the entity with the BillboardComponent along the vector between the device and the USDZ object.
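A rough sketch of querying the device position with ARKit, assuming an ARKitSession is run elsewhere in your app; `panel` and `target` are hypothetical entities, and the placement math is only illustrative:

```swift
import ARKit
import QuartzCore
import RealityKit
import simd

// Assumed to be created and run once, e.g. when the immersive space opens.
let session = ARKitSession()
let worldTracking = WorldTrackingProvider()

func startTracking() async throws {
    try await session.run([worldTracking])
}

// Call periodically (e.g. from a custom System) to reposition a hypothetical `panel`
// entity so it sits between the user and a hypothetical `target` entity.
func updatePanel(panel: Entity, target: Entity) {
    guard let device = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else { return }
    let column = device.originFromAnchorTransform.columns.3
    let devicePosition = SIMD3<Float>(column.x, column.y, column.z)
    let targetPosition = target.position(relativeTo: nil)
    // Place the panel a little way along the vector from the object toward the device.
    let toDevice = normalize(devicePosition - targetPosition)
    panel.setPosition(targetPosition + toDevice * 0.4, relativeTo: nil)
}
```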
Q: Hi there, I'm just getting started in visionOS development and was curious if there was any way to anchor a window within a fully immersive view so that the window follows the user wherever they look and move within the immersive space (ex - creating a graphical HUD) - thanks!
A: Hey David, You can add a head anchor (`AnchorEntity(.head)`) to your RealityKit view and add child entities to place them relative to the head anchor. However, please consider avoiding head-anchored content for improved accessibility: https://developer.apple.com/documentation/visionos/improving-accessibility-support-in-your-app#Avoid-head-anchored-content
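A minimal sketch of that head-anchored setup inside a RealityView (the plane is a placeholder HUD surface):

```swift
import RealityKit
import SwiftUI

// Content parented to a head anchor so it follows the user's view.
// Consider the accessibility guidance above before shipping head-anchored UI.
struct HUDView: View {
    var body: some View {
        RealityView { content in
            let headAnchor = AnchorEntity(.head)
            let hud = ModelEntity(
                mesh: .generatePlane(width: 0.3, height: 0.1),
                materials: [UnlitMaterial(color: .white)]
            )
            hud.position = [0, 0, -1] // one meter in front of the head
            headAnchor.addChild(hud)
            content.add(headAnchor)
        }
    }
}
```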
Q: Can I combine content from Unreal Engine with content from Xcode and Swift?
A: It is currently possible to do full VR apps with Unreal Engine. It is possible to do Metal rendering with passthrough (i.e., rendering without RealityKit). I am unsure whether Unreal yet supports mixed reality rendering via Compositor Services for Apple Vision Pro; you may want to reach out to Unreal development communities for more assistance on this.
Q: How does visionOS treat shadows? Is it something that the developer needs to think about, or is it automatic? Still on shadows: some materials can cast different shadows based on the direction of light... is that something visionOS can do automatically, or is the developer in charge of it?
A: Shadows can be enabled using the GroundingShadowComponent: https://developer.apple.com/documentation/realitykit/groundingshadowcomponent
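For example, a minimal sketch that opts a loaded model into grounding shadows ("Pot" is a placeholder asset name):

```swift
import RealityKit

// Load a bundled model and enable its grounding shadow.
func loadShadowedModel() async throws -> Entity {
    let model = try await Entity(named: "Pot")
    model.components.set(GroundingShadowComponent(castsShadow: true))
    return model
}
```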
Q: Can one use a shader in an immersive space for post processing effects?
A: Hi Matthew, this isn't currently possible with shaders in RealityKit. Please file a feedback request at https://feedbackassistant.apple.com with your use case if you'd like to see this changed. That being said, you can write custom shaders in Metal when using Compositor Services that should enable you to achieve such effects.
Q: Is there any example code for using the AVP main camera?
A: Hello Carlos. Here's an article with sample code. Note: the API requires an entitlement and a license for enterprise apis for visionOS. https://developer.apple.com/documentation/visionos/accessing-the-main-camera
Q: Is it possible to manage, control and debug the Apple Vision Pro Rendering Pipeline? I come from Unity and Unreal Engine. I wanted to know if it can be done with Apple's tools. Thank you very much
A: You can use Metal GPU capture/replay to debug and control your own rendering pipeline.
However, it is not possible to access or debug Apple's internal pipeline.
https://developer.apple.com/documentation/xcode/capturing-a-metal-workload-in-xcode
https://developer.apple.com/documentation/xcode/analyzing-your-metal-workload
Q: Hello, I am developing an app for virtual “real” training scenarios for aid organizations. I am wondering: what are the dimensional limits of a Reality View? Would it be possible to place 3D content 2 miles away from a current point, so that I can see the object when I walk those 2 miles? Where are the limitations in this case? Unfortunately, I haven’t been able to test this yet.
A: Vision Pro has been designed for indoor use cases and small distances. But if it's just about viewing a large object located far away from the user, it might be easier to use a sky sphere with a texture on it instead.
Q: But let's say I have a native visionOS app that uses SwiftUI and in one of my windows I have a button that opens an Unreal Engine experience. So can I launch an Unreal experience from a SwiftUI button? Unreal Engine just exports an Xcode project I believe.
A: No, this is not possible.
Q: How can I detect that an entity has moved past the volume boundary (e.g. exited the volume space)?
A: Hi Edward. This is a very good question. Currently we do not have a way to detect this but we do have a debug visualization that changes the color of the entity when it goes beyond the bounds of the volume.
Q: In an app that opens a new scene (full-immersive Mixed Reality) having a large 3D object, e.g. a large virtual desk, a user’s position seems to be used as the origin. In case this scene is closed and reopened after moving to a slightly different location, the virtual desk changes its orientation and position(user’s new position as the origin). How to lock this position of this virtual object, so that if a user reopens that scene, the virtual object is found in the same older original location
A: Hi Karan, If I properly understand the question you want to get a world anchor then tie your model to one of those anchors. Here is some sample code related to that approach: https://developer.apple.com/documentation/visionos/tracking-points-in-world-space
Q: I've built an immersive player that lets you use SharePlay to watch spatial livestreams in real time. What I want to be able to build is an immersive environment like Mount Hood; is there a way to learn about that?
A: Yes! Check out this video (and the related links at the bottom): https://developer.apple.com/videos/play/wwdc2024/10087
And since you mentioned media:
https://developer.apple.com/videos/play/wwdc2024/10115 This is specifically about immersive environments for media and video
Q: Hi~ When enlarging a 3D model using hand tracking, if you zoom in beyond the entity’s bounds, the 3D model appears cut off. Is there a way to expand these bounds infinitely?
A: Hi Ryan, Use an ImmersiveSpace to have an unlimited volume. https://developer.apple.com/documentation/visionos/creating-fully-immersive-experiences
Q: For some of these in-device shots in the presentation, how did you get the panning so smooth? I try to record and I have to use significant anti-jitter, but it doesn’t look smooth. It looks like a shot from a simulator instead?

In other words, I’m asking how to get stable/smooth footage from Apple Vision Pro.
A: Can you believe we've just gotten really really good at holding our heads still? 😅
Q: If I am opening a new window, how do I push the main window to the left and have the new window appear to the right? Currently my new window appears right on top of the main window. I tried using pushWindow, but I wasn't able to get it working correctly.
A: Hi Vik, we have an API that might help here! Take a look at the defaultWindowPlacement SwiftUI modifier: https://developer.apple.com/documentation/visionos/positioning-and-sizing-windows#Specify-initial-window-position
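A minimal sketch of that modifier, assuming a hypothetical secondary window with the id "Detail" and placeholder views; the documentation linked above covers the relative placement options in more depth:

```swift
import SwiftUI

struct MainView: View { var body: some View { Text("Main") } }
struct DetailView: View { var body: some View { Text("Detail") } }

@main
struct PlacementExampleApp: App {
    var body: some Scene {
        WindowGroup(id: "Main") {
            MainView()
        }

        // Secondary window whose initial position is customized.
        WindowGroup(id: "Detail") {
            DetailView()
        }
        .defaultWindowPlacement { content, context in
            // .utilityPanel places the window close to the user, beside existing content.
            WindowPlacement(.utilityPanel)
        }
    }
}
```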
Q: Unreal does support mixed reality on visionOS now, but my question is more about whether or not it's possible to COMBINE a native SwiftUI app with Unreal Engine content.
A: Correct, that specifically is not supported (combining the generated Unreal Xcode project with your own Xcode/SwiftUI codebase). Unreal does generate an Xcode project (similar to Unity), and you can edit that project, but I don't recommend this workflow and it is not supported.
Q: When I start a new project, by default the canvas starts running. I find that even if I close the canvas, the process keeps running and eating up CPU and GPU cycles. Is there a way to kill it other than restarting Xcode?
A: Hi Ale, this would be great feedback. Please file it with details here (https://feedbackassistant.apple.com)
Q: So SwiftUI cannot launch a Metal-based immersive experience?
A: No, it is the combination of a generated Unreal project with another codebase that is not supported.
Q: I want to bring my existing iOS and iPadOS app to Vision Pro. There's one feature in the app, an in-app chat channel for support, that relies on a third party service whose framework isn't available for VisionOS... so I plan at this stage to just omit that feature in the VisionOS version of the app.
A: Hi Duncan: This approach seems reasonable. You can conditionally isolate or exclude visionOS-specific behavior. Here are a couple of examples that illustrate this for reference:
- https://developer.apple.com/documentation/visionos/bringing-your-app-to-visionos#Isolate-features-that-are-unavailable-in-visionOS
- https://developer.apple.com/documentation/visionos/bot-anist
Q: Hi, when I create an immersive experience, how can i let the user walk in this space like it's in real life? I mean, how can i make sure that his environment is safe and with no obstacle? I know that for example for meta platform there is the guardian... is there something similar in visionOS?
A: There's a ton of built in safety features that alleviate you needing to build this yourself. You can read more about best comfort practices here: https://developer.apple.com/design/human-interface-guidelines/immersive-experiences#Promoting-comfort
Q: I wasn't able to screenshot the code to placing a window next to the main window, is there a place in the documentation that has this example?
A: Hi Huy, here's a link that will be helpful! https://developer.apple.com/documentation/visionos/positioning-and-sizing-windows#Specify-initial-window-position
Q: What 3D model file formats are recommended to use?
A: USD! There are different flavors of USD: USDZ, USDC, and USDA, and each can be read by RCP and RealityKit
Q: How to lock a specific app size (aspect ratio) and make it so that if the user resizes it, it will keep the specific aspect ratio and scale instead of changing in a single dimension affecting the placement of views, elements of the window etc?
A: Hi Martin: Thanks for your question. The aspectRatio(_:contentMode:) modifier should be able to help with this. See https://developer.apple.com/documentation/swiftui/view/aspectratio(_:contentmode:)-771ow for additional information.
Q: Can I create modals for things like DatePicker like in iPhone, that are presented on top of the view, or do I need to use another window?
A: Yes you can if you're in a Window. You can use `present(_:animated:completion:)` on visionOS just like you would do on iOS.
Q: Hello, what are the main things to consider if we are giving width, height values and any safe area considerations here ?
A: Great question! Prefer horizontal layouts to vertical ones – it helps with neck strain, as rotating your head horizontally is easier than tilting it. The default window size is a great place to start. Ideally your initial window starts at that size or smaller, and then you can allow users to increase the size if that's their preference. "Safe areas" are a bit different on visionOS, but it's mostly about avoiding obscuring the system UI (the window grabber on the bottom and the lower corners).
https://developer.apple.com/design/human-interface-guidelines/spatial-layout Here's more info!
Q: When developing a TabletopKit game that starts in a shared experience, the Personas disappear if the users become immersed in a custom immersive environment. Is it possible to play a shared TabletopKit game while immersed in a custom immersive environment at the same time?
A: Yes! Set SystemCoordinator.Configuration.supportsGroupImmersiveSpace property to true. Check out the doc here:
https://developer.apple.com/documentation/groupactivities/systemcoordinator/configuration-swift.struct/supportsgroupimmersivespace
Q: What are the technical limitations of Spatial Video support within apps? Can for example Spatial Videos be produced and displayed in 4K Dolby Vision, 90fps?
A: Hi Martin, we have a couple of articles that might be helpful about spatial video and photos. https://developer.apple.com/documentation/imageio/writing-spatial-photos and https://developer.apple.com/documentation/ImageIO/Creating-spatial-photos-and-videos-with-spatial-metadata
Q: A question about app design conventions: does it still make sense to use drag and drop in situations where you want to alter some property of an entity (for example, in a task manager app: dragging a task between "to do", "doing" or "done" sections) if said property could be made editable by the user with a button or a picker? because the difference in effort required between a tap or a drag gesture on visionOS (even if they are both supported by the platform) seems much bigger compared to iOS or iPadOS.
A: Drag and drop absolutely makes sense for list altering on visionOS, as people are familiar with doing this on their other Apple devices. However, we also encourage providing an alternative way of sorting, especially if you're expecting your users' lists to be incredibly long. In that case you can use a touch and hold (long pinch) to open a context menu to move elements to other buckets.
Q: I'm with Joseph! :) We need some more artist-friendly, 3d pro controls (think Maya) for navigation in Reality Composer Pro.
A: Agreed! Please reach out to us with Feedback Assistant https://developer.apple.com/bug-reporting/, or you can post your questions on the developer forums and we can give you some more in depth help on your content creation pipeline issues there: https://forums.developer.apple.com/forums/topics/spatial-computing
Q: Good morning. Lately I was trying to load a 3D model from Reality Composer Pro in Xcode with the ModelEntity(named:, in:) but the load is always failing. It only works with the Entity initializer Entity(named:,in:). Is it a limitation of the ModelEntity initializer or may it be a bug?
A: Hi Alessandro. A ModelEntity that you are loading from Reality Composer Pro should have a ModelComponent in it. If it does and it still does not load using ModelEntity(named:, in:), please do file a feedback using Feedback Assistant: https://feedbackassistant.apple.com
Q: Do you have any developers bringing in Camera feeds into AVP via NDI protocol today?
A: Video feed is currently only available via enterprise APIs on AVP. You can apply for access here https://developer.apple.com/go/?id=69613ca716fe11ef8ec848df370857f4
Q: Is there a built-in way to make it so the user can't close a window?
A: Hey Jenny, there's no way to disable closing a window. Similar to iOS, a user can always close your application. There might be a different API that achieves what you want, though. You can use ornaments to have a view always present alongside a window or volume. In an immersive space, you can use attachments to display SwiftUI views alongside your RealityKit content. The visibility of ornaments and attachments is controlled by your app, and they are dismissed alongside your scene or by a custom affordance.
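A small sketch of the ornament option, with placeholder content and button actions:

```swift
import SwiftUI

// Keep controls always visible alongside a window using an ornament.
struct PlayerView: View {
    var body: some View {
        Text("Video content") // placeholder for your main window content
            .ornament(attachmentAnchor: .scene(.bottom)) {
                HStack {
                    Button("Play") { /* start playback */ }
                    Button("Pause") { /* pause playback */ }
                }
                .padding()
                .glassBackgroundEffect()
            }
    }
}
```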
Q: Is it possible to access large 3D model(s) stored in the Device’s memory, instead of adding them in the app’s Reality Composer Pro scene as a USDA file ???
A: Hi Karan! Good question. Yes, you can create a RealityKit Entity from a USD file. https://developer.apple.com/documentation/realitykit/loading-entities-from-a-file
But there is no API to load an Entity from an in-memory buffer. You'd have to write them to a temp file first. This is excellent feedback to provide through https://feedbackassistant.apple.com
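A short sketch of both loading paths; "robot" and `fileURL` are placeholders:

```swift
import Foundation
import RealityKit

// Load a USD-backed entity either from the app bundle or from a file URL on device
// (e.g. something previously written to the Documents directory).
func loadEntities(from fileURL: URL) async throws -> (bundled: Entity, onDisk: Entity) {
    let bundled = try await Entity(named: "robot")
    let onDisk = try await Entity(contentsOf: fileURL)
    return (bundled, onDisk)
}
```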
Q: Can Reality Composer Pro be used to resize existing USDZ models for use with object detection modeling in CreateML? I have some USDZ models from a vendor of a product, but they are much larger than real world size.
A: Hi Blake, A better tool to do this would be a DCC tool. Please feel free to file a feedback if you'd like to see features like this in Reality Composer Pro (https://feedbackassistant.apple.com)
Q: What is maximum extent of meshes or vertices or polygons that a scene can support, without crashing the App ?
A: It depends on the complexity of the scene. This session covers these topics in detail https://developer.apple.com/videos/play/wwdc2024/10186
Q: How to display 3D (Spatial Videos) within my app? Do the videos need to be stored inside the app or could they be streamed from a file server of some sort? If so, what would be recommended? My videos are flat in my app.
A: Hi Martin, we have an API to display spatial media! Take a look at Quick Look’s PreviewApplication API: https://developer.apple.com/documentation/quicklook/previewapplication
Q: Do volumes support import of reality files created by export from RealityKit
A: Hi George. It is possible to present scenes created in Reality Composer Pro in a volume using a RealityView. In Xcode, create a new project with its "Initial Scene" set to "Volume" and look in ContentView to see an example of this.
Q: Resending Question ;;;; Is it possible to access large 3D model(s) stored in the Device (VisionPro's) memory, instead of adding them in the app’s Reality Composer Pro scene as a USDA file ???
A: Hey Karan, You can load models that are bundled within your application. They do not need to be included in Reality Composer Pro. The Happy Beam sample code loads the fireworks.usdz file that is bundled with the app and not included in the RCP scene. https://developer.apple.com/documentation/visionos/happybeam
Q: Hi there! Do you have any best practices or suggestions for enabling users to navigate an immersive 3D model, such as a multi-story building with various rooms? We've considered using a pinch-to-move method with an anchored object on the floor, but we're curious if there are other effective techniques we might be overlooking. Thanks!
A: Hi Ryan! It depends on how life-scale this model is. If you're inside the building as if it were real life, it's definitely preferable to navigate using a menu and fading in and out of different rooms to avoid motion sickness. We recommend avoiding a case where the user is moving the ground below and around them; it causes immediate comfort issues.
A: If it's not life scale, as in you're looking at a diorama, a pinch and drag on a control next to the model can definitely be a quick and delightful way to navigate between floors.
Q: Is it possible to use Vision Pro as a full-circle app for scanning 3D content (currently done with an iOS device) and displaying the same object in an immersive way? Or should 3D scanning still be done with other devices? Thank you.
A: Object Capture is the feature / API you may be looking for, which only works on iOS and macOS currently. https://developer.apple.com/documentation/realitykit/realitykit-object-capture/ It outputs a 3D usdz file that you can load into your visionOS project
Q: When I export a 3D model created in Blender to USD format and import it into Reality Composer Pro, the textures and animations are consistently missing. Are there any relevant documents or methods to address this issue?
A: Make sure you include textures and materials in the USD export dialog. We don't have official instructions on how to use third party software, but please ask this on the developer forums with more details if possible, as we frequently reply there:
A: https://forums.developer.apple.com/forums/topics/spatial-computing
Q: Are there any swift-oriented libraries available (open source or built in) that support reading and writing USD collections for programmatically building content that you can load later - akin to what's happening within RealityComposerPro? Is the avenue that's most easily available using USD directly through C++?
A: Hi Joe, this would be great feedback to file to let us know the use cases that are important to you: https://feedbackassistant.apple.com
Q: From an accessibility voice over standpoint, is it bad practice to disable buttons in SwiftUI?
A: Hi Glen, thanks for focusing on accessibility! Buttons that are disabled are still announced to people using the accessibility features. You can read more about other accessibility options here: https://developer.apple.com/documentation/accessibility/integrating_accessibility_into_your_app
Q: Is it possible to use Vision Pro as a full-circle app for scanning 3D content (currently done with an iOS device) and displaying the same object in an immersive way? Or should 3D scanning still be done with other devices? Thank you.
A: Hi Todd, this isn't currently supported on Apple Vision Pro, but you're welcome to file a feedback request at https://feedbackassistant.apple.com with your use case! Also, this article has more information about Object Capture on iOS and macOS: https://developer.apple.com/documentation/realitykit/realitykit-object-capture/
Q: I am primarily interested in Metal immersive development, and it immediately becomes apparent that a heads-up display written with SwiftUI would be very useful. Possibly, if a SwiftUI view "parent" could be made available as a Metal texture to the immersive experience, we could composite it on top of our render so that standard UI controls are available even in an immersive render.
A: Thanks for the question. You should be able to use (standard) SwiftUI views along with using Compositor Services for Metal rendering. The following session and related sample project is making use of a similar setup: https://developer.apple.com/videos/play/wwdc2024/10092
Q: I'm building an app that shows a list of data in its primary window. I include an option to open each data option in its own window. If that happens, and the user closes the main window, the user no longer has access to the main window. What's the recommended way to reopen the initial/main window?
A: Hey Josh, you'll want to use the openWindow environment value to reopen the initial window. Consider using the scenePhase property to listen to changes in the scene's operational state.
https://developer.apple.com/documentation/swiftui/environmentvalues/openwindow
https://developer.apple.com/documentation/swiftui/scenephase
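A minimal sketch of the reopen path, assuming the primary WindowGroup is declared with a hypothetical id "Main":

```swift
import SwiftUI

// A button in a detail window that reopens the main window.
struct ReopenMainButton: View {
    @Environment(\.openWindow) private var openWindow

    var body: some View {
        Button("Show Main Window") {
            openWindow(id: "Main")
        }
    }
}
```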
Q: I am also concerned about the meshes. I’m recording organic material (such as plants) that, while somewhat solid, doesn’t always render easily in a USDZ format. My end goal is to not only scan the plants with an AVP unit but also display them in 3D virtually. Any suggestions?
A: If the plants can be kept rigid, you can scan them using Object Capture and turn them into 3D USDZ models that can be loaded in your visionOS app: https://developer.apple.com/documentation/realitykit/realitykit-object-capture/ Do check out the sessions on Object Capture on developer.apple.com or in the Developer app.
Q: If I use the Reality Converter app, is it better than converting an FBX file to USDZ from Blender? Will this process keep the textures and meshes intact? Thanks for the responses so far!
A: It is a good idea to use Reality Converter to check for any issues with your assets, even when exporting from Blender. Additionally, Reality Converter has UI for fixing textures during the export process, which is useful. So my answer is "not better, but different and useful" ;)
Personally, I would use Reality Converter even after I have exported from Blender, just as a sanity check to make sure I didn't miss anything in Blender.
Q: How do I load a scene that is located in a folder within the .rkassets folder? Or what would be the best practice to organize scenes in Reality Composer Pro for simplest access in Xcode with the available Entity constructors?
A: Hi Lukman. In Reality Composer Pro, if you right-click on the entity and click "Copy Object Path", you get the path of the object that you want to load in your scene. In general, it makes sense to organize your assets into the smallest possible scenes so that you only load the ones you need for your app. Hope this helps!
Q: Are there any extra permissions we need to get from the user when we are creating an app to be used in the vision pro?
A: Hi Noah: There aren't extra permissions, necessarily. Most APIs behave similarly to other Apple platforms, where a programmatic request & accompanying usage description is needed.
For additional details, please see:
- https://developer.apple.com/documentation/visionos/checking-whether-your-app-is-compatible-with-visionos
- https://developer.apple.com/documentation/visionos/making-your-app-compatible-with-visionos
Q: I have built a custom ShaderGraph in RCP, that works well in visionOS, what's the best way to implement the same in my iOS app?
A: Hi Raghav. The RealityKit APIs are cross platform and what works in visionOS should work in iOS. To see the APIs in action across platforms see the WWDC video here
https://developer.apple.com/videos/play/wwdc2024/10103/
Q: I'm observing many users need time to get familiar with the spatial interactions on visionOS and iOS. I'm often unsure what onboarding options or tools to apply for—recorded video or image sequence demonstrating those spatial interactions, TipKit views, affordances on the 3d content itself, guided explanation by a person, demoing (like with Apple Pencil scribble on iPad in Settings)... Would love to hear your thoughts on this.
A: Hi Lukman, I’m sorry you've been noticing friction with interactivity. Definitely expected with a new platform, what we recommend is for you to use standard gestures in your app: people are familiar with these gestures from interacting with other apps in VisionOS and expect to use them everywhere. It reduces the need for having to teach how to interact with every app. Please refer to the Human Interface Guidelines for more information and watch the related videos where we elaborate on gestures and feedback.
Q: Can an app open another app programmatically?
A: Yes! The term Apple uses is "universal links". Check out our documentation here: https://developer.apple.com/documentation/xcode/allowing-apps-and-websites-to-link-to-your-content/
Q: Is a remote rendering solution being developed for the AVP by Apple? I am inquiring in the context of industrial 3D models, which are typically highly complex in terms of polygon count and hierarchy, far exceeding the computational power of the AVP. This could be something akin to the private cloud compute solution that will be utilized for Apple Intelligence. Thanks!
A: Hey Jakub, We don't discuss future plans, but we'd love to hear more about this need and what limitations you're currently running into. Please file a feedback via http://feedbackassistant.apple.com.
Q: I'm looking for the Same thing as Joe: having the ability to read and write USD directly from swift. We used to have ModelIO, but there is no equivalent for USD. It would be great for example just to automate some tasks when editing USD files.
A: Thanks for sharing this, it would be great if you could send us feedback and let us know more about your usecase. http://feedbackassistant.apple.com
Q: Sorry for the repeat of this question. I selected a specific panalist the first time. :). Is this correct?: a "Window" is a WindowGroup scene type. A "volume" is a WindowGroup scene type with the windowStyle set to "volumetric". A "Space" is an "ImmersiveSpace" scene type.
A: Yup, that's spot on. Although, usually Immersive Spaces are referred to using their full name. There's also the Shared Space, which all apps can coexist in as long as an immersive space is not open.
Q: is object capture available on every iphone ?
A: Object Capture is available on iPad Pro 2021, iPhone 12 Pro, and later models
Q: Hi, does a windowed app allow directly translating iPad app gestures to a visionOS app? i.e., does multitouch like a two-finger tap have a corresponding gesture on visionOS?
A: visionOS has standard Gestures that are specific to the platform. For instance, double touch might be a good fit for your use case. You can also create custom gestures using ARKit. Check out https://developer.apple.com/design/human-interface-guidelines/gestures to learn more
Q: Is it possible to create an app that would connect with two iPhones, display a window with the camera view from Cam A to the left eye and Cam B to the right eye? Sort of live 3D view of what the cameras are capturing?
A: Hi Martin, to build something like this you'd need to use Compositor Services; you can read more about that here: https://developer.apple.com/documentation/compositorservices Keep in mind that many factors need to be considered to ensure the person using your app is comfortable.
Q: Whats the best way to position content according to predefined physical space (looking at a building and viewing cg content around it)?
A: Hi Oleg. There is no single API to place content according to a predefined physical space. I encourage you to research ARKit for visionOS. You may be able to achieve your goal with one of the existing apis, some of which include image tracking, object tracking, scene reconstruction and plane detection. Finally you can use a world anchor to persist a location in the physical world across app sessions. Keep in mind this persistence is not permanent.
Q: How can I access Object Capture results to use them in Reality Composer Pro?
A: The Object Capture API must be implemented by developers in their apps. You can find a sample app that you can use to capture objects here: https://developer.apple.com/documentation/realitykit/scanning-objects-using-object-capture
Q: Do you have a list of content library resources for Reality Composer Pro and/or a link to documentation illustrating how to find them on RCP? The current speaker mentioned they exist, including a resource that comes with RCP (?), but I'm not finding them.
A: Reality Composer Pro has a library of assets that you can access by clicking the "+" button. Curt used it earlier today when adding a Cube and Material. You can check out a tutorial here: https://developer.apple.com/documentation/visionos/designing-realitykit-content-with-reality-composer-pro#Add-assets-to-your-project
Q: besides the obvious stuff like making the layout responsive and resizable, are there some interesting best practices when designing apps who you expect will be mostly used in the shared space alongside other apps? especially when it comes to "secondary" apps, designed to be used alongside a main thing (like a video editing software, a browser, code editor, or anything similar in importance)
A: Hi Andrea, this is a great question and consideration. Since the Shared Space is great for multi-tasking and running apps side by side, I would focus on your layout but also on the number of windows required for main workflows. Designing a cohesive experience with clearly established navigation will help people easily dip in and out of your app experience. And depending on your experience, you could design a compact view that can be expanded to show more functionality when needed. Reference the Music app!
Q: Are there any hints or details about getting a texture that works nicely for a sky sphere? Details such as expected size, resolution, etc?
A: Yes, check out the sample project here; it's a good reference for how to use sky spheres. https://developer.apple.com/documentation/realitykit/construct-an-immersive-environment-for-visionos
Q: What is the correct way to open all my app's windows automatically on app start (without any button interaction)? Something like onAppear in the root window?
A: Hey Lukas, If you wanted to open additional windows without user interaction you can use openWindow in the .onAppear callback of your main window. Please keep in mind that onAppear may be called more than once for every app launch, so you should keep additional state to not open the windows when it is not required.
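A minimal sketch of that onAppear pattern, assuming a hypothetical auxiliary window with the id "Inspector" declared elsewhere in the App:

```swift
import SwiftUI

// Open an auxiliary window once when the root window first appears.
struct RootView: View {
    @Environment(\.openWindow) private var openWindow
    @State private var didOpenAuxiliaryWindows = false

    var body: some View {
        Text("Root window") // placeholder content
            .onAppear {
                guard !didOpenAuxiliaryWindows else { return }
                didOpenAuxiliaryWindows = true
                openWindow(id: "Inspector")
            }
    }
}
```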
Q: How can I export my reality kit models as USD?
A: Hi George, this is not currently supported. Please file a feedback with your use cases here: https://feedbackassistant.apple.com Thanks!
Q: Is there a good way to pin normal windows (not volumes) to real-world objects/locations? Like a timer to the top of the stove or a photo gallery window to the top of bookshelf? (Ideally with persistence?)
A: We have a feature called attachments that allows you to do just that. Unlike a "window", which always comes with a window grabber so people can move it, RealityKit attachments let you pin a view to a particular place in space or to something you're tracking. https://developer.apple.com/documentation/realitykit/attachment
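A small sketch of an attachment pinned at a fixed point; "timerLabel" is a hypothetical attachment id, and persistence across launches would additionally require a world anchor:

```swift
import RealityKit
import SwiftUI

// Pin a SwiftUI view at a fixed point in space using a RealityView attachment.
struct PinnedTimerView: View {
    var body: some View {
        RealityView { content, attachments in
            if let timer = attachments.entity(for: "timerLabel") {
                timer.position = [0, 1.2, -1.5] // a fixed point in the space
                content.add(timer)
            }
        } attachments: {
            Attachment(id: "timerLabel") {
                Text("12:34")
                    .padding()
                    .glassBackgroundEffect()
            }
        }
    }
}
```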
Q: Can I use Alembic files in the timeline for pre animated objects like water simulation from Houdini or any rbd stuff?
A: Hi Peter, could you file a feedback that describes your use cases here: https://feedbackassistant.apple.com Thanks!
Q: Since ECS Systems are registered with the .registerSystem() and then initialized by RealityKit, I'm trying to figure out the best way to inject dependencies, other than having the System refer to a global singleton. Is there a more standard way to set properties on systems from outside the system itself?
A: Hi Tim. I've had luck exposing a static variable on the system that I set after I register the system.
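A sketch of that static-property approach; `MeshProvider` is a hypothetical dependency type:

```swift
import RealityKit

// Hypothetical dependency you want systems to use.
final class MeshProvider {
    func nextMesh() -> MeshResource? { nil }
}

// Inject the dependency through a static property on the System, set around registration time.
struct MeshUpdateSystem: System {
    static var meshProvider: MeshProvider?

    init(scene: RealityKit.Scene) {}

    func update(context: SceneUpdateContext) {
        guard let provider = Self.meshProvider else { return }
        _ = provider.nextMesh()
        // ... update entities using the provider ...
    }
}

// At app startup:
// MeshUpdateSystem.meshProvider = MeshProvider()
// MeshUpdateSystem.registerSystem()
```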
Q: For static, nonAR 3D scenes on iOS, macOS, do you recommend RealityKit or SceneKit? and why?
A: RealityKit is specifically built for 3D scenes, and now supports all Apple platforms (including visionOS). If you are building something new in 3D for visionOS, RealityKit is what I would recommend.
Q: is there an 'advanced reality composer pro' manual or something like that anywhere? i've looked at all the sessions and read the developer site already but it doesn't seem to go into enough depth imho
A: Hi Wayne, are there particular areas of Reality Composer Pro you'd like more information on?
Q: Is there a way to connect a HoverEffect shader graph node to a light source in reality composer pro?
A: Hi Tim, if you are talking about the Hover State node in the shader graph, see:
https://developer.apple.com/documentation/shadergraph/realitykit/hover-state-(realitykit)
Q: Where can I find a zero-to-hero pathway for visionOS development?
A: We have a few resources that are great for someone new to development! The visionOS Pathway links a bunch of great getting-started resources, and the SwiftUI pathway page is a good place to learn foundations: https://developer.apple.com/xcode/swiftui/
Q: What would be the recommended way to approach voice commands in vision os? I’m interested in creating a hands free experience where the user can do all of the interaction by just voice commands or maybe voice commands combined with eye gaze. Can voice command trigger a button with the same name as the command?
Can there be a gaze requirement for the button to trigger the voice command?
Can there be voice commands that do not require buttons to trigger?
A: It depends on the case, really! We recommend using the default ways to interact (aka indirect and direct hand and eye gestures) for general UI, menus, etc., because they're the standard of the platform and will be available to everyone. If you have a particular case where both hands are occupied with a physical object, you can consider voice commands, but make sure it's explicit that a person has entered this type of "mode", and put onboarding and helper text in your UI to teach people about this functionality.
Q: I agree with Wayne, is there an "Advanced Guide to Reality Composer Pro" anywhere instead of the manual? More like Tutorials for RCP? E.g. it is very very difficult to learn about the Shader Graph editor :'(
A: I agree! Please file a request using feedback assistant: https://developer.apple.com/bug-reporting/
Shader Graph uses many MaterialX nodes, which are also used in third party DCCs. If you search for MaterialX tutorials, those should be helpful for learning Shader Graph in RCP.
Q: I have a reality view that can save a reality file. How can I use this reality file with visonOS? Will it behave like a USD file?
A: Hi George. Like a usd file, you can create an entity from a reality file. More details here - https://developer.apple.com/documentation/realitykit/loading-entities-from-a-file
Q: I would like to add a slider to animate a slicing through an object for example, cutting a building from front to end to see the section of the building. Can you suggest the easiest method of doing this?
A: The details really matter here, so I'll do my best to answer at a high level. Be cautious of using a control, slider or otherwise, that is disconnected from the experience and forces someone to look at one target while the results are shown in a different area. Tying the interaction to the object as closely as possible will be the most comfortable. But a direct gesture on the object can work too! In the JigSpace app, you can hover your hand over the jet engine to reveal the wiring.
Q: When setting up occlusion for a virtual object in a shared space, is that based on each viewer's perspective? So if three people are standing in a circle in a shared space, would each see a different occlusion?
A: Hi Michael! Yes, occlusion is based on each viewer's perspective. For context, an object with an occlusion material applied to it will hide any other objects rendered behind it, from the perspective of each viewer. You can learn more here: https://developer.apple.com/documentation/realitykit/occlusionmaterial
Q: Does reality composer pro support usdSkel animations, and can we blend between animations? For example I want a character to do an animation, then blend to another animation when I interact with it (like in video games), is that possible?
A: Yes, this should technically be possible, but we don't have a lot of documentation on this. Check out BlendTreeAnimation: https://developer.apple.com/documentation/realitykit/blendtreeanimation
Please reach out on the forums, I would love to answer this question more in depth there:
https://developer.apple.com/forums/topics/spatial-computing
Q: Would you ever add a Model3D to a RealityView? is it that I would normally use one or the other to display 3D content? If I use a RealityView then would I just add entities to it to display the 3D content?
A: Use ModelEntity when displaying 3D content inside a RealityView. Model3D is useful for including 3D content inline with other SwiftUI content, whereas ModelEntity works with RealityKit's ECS and allows you to place entities spatially.
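A short sketch of both options side by side; "Robot" is a placeholder asset name in the app bundle:

```swift
import RealityKit
import SwiftUI

// Model3D: 3D content inline with other SwiftUI content.
struct InlineModelView: View {
    var body: some View {
        Model3D(named: "Robot") { model in
            model.resizable().scaledToFit()
        } placeholder: {
            ProgressView()
        }
    }
}

// RealityView: entities placed spatially via RealityKit's ECS.
struct SceneModelView: View {
    var body: some View {
        RealityView { content in
            if let robot = try? await Entity(named: "Robot") {
                content.add(robot)
            }
        }
    }
}
```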
Q: What is the best way to offer direct manipulation of an entity using RealityView? I'm trying to create a simple cube the user can simply pick up. Using things like drag gestures allows both direct and indirect input, but seems to limit things like rotation relative to the user's hand position. Should I look into combining ARKit hand tracking and collision detectors to offer more accurate, direct object manipulation? Or is there something I'm missing with the built-in gestures?
A: Hi John. I suspect you've seen Transforming RealityKit entities using gestures (https://developer.apple.com/documentation/realitykit/transforming-realitykit-entities-with-gestures), and it sounds like you want something custom. This forum post has code that enables a person to pick up and throw an entity: https://developer.apple.com/forums/thread/761772?answerId=800676022#800676022
Q: Can we use hand tracking in RealityView to sequentially select different 3D entities, and once all selections are complete, use a hand gesture to throw them into the air, applying independent physics to each entity? How would this work according to visionOS technical documentation?
A: Hi Ryan, you can create custom gestures. For info on that you can start here: https://developer.apple.com/design/human-interface-guidelines/gestures You can allow entities to be selected with a custom component; you can learn more about that here: https://developer.apple.com/documentation/visionos/understanding-the-realitykit-modular-architecture
Q: can reference objects be an image or other identifying mark? For example identifying a logo or something that can then be used as an anchor for displaying other content
A: Hi Bart: SpatialTrackingSession supports image tracking, which might support this.
Please take a look at: https://developer.apple.com/documentation/RealityKit/SpatialTrackingSession
Q: Hi, rephrasing the question: on RealityKit, can an anchor objects texture material be altered by user input?
A: Hi cansu, you can set the texture of a Shader Graph material with the setParameter method (https://developer.apple.com/documentation/realitykit/shadergraphmaterial/setparameter(name:value:)). Alternatively, if you would like to alter the actual data of the texture itself instead of simply applying a new texture, consider taking a look at this article covering the LowLevelTexture API: https://developer.apple.com/documentation/realitykit/creating-a-dynamic-height-map-with-low-level-texture
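A small sketch of the setParameter route; "BaseTexture" is a hypothetical parameter name defined on the material in Reality Composer Pro, and `entity` is assumed to already use a ShaderGraphMaterial:

```swift
import Foundation
import RealityKit

// Swap the texture parameter of a Shader Graph material at runtime.
func applyTexture(from textureURL: URL, to entity: ModelEntity) throws {
    let texture = try TextureResource.load(contentsOf: textureURL)
    guard var model = entity.components[ModelComponent.self],
          var material = model.materials.first as? ShaderGraphMaterial else { return }
    try material.setParameter(name: "BaseTexture", value: .textureResource(texture))
    model.materials = [material]
    entity.components.set(model)
}
```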
Q: Hi, I would like to use MapKit Map as a horizontal plane around which the user can walk and interact with 3D pins on the map. Similar to "SAP Analytics Cloud" app - https://www.apple.com/newsroom/2024/04/apple-vision-pro-brings-a-new-era-of-spatial-computing-to-business/ (second image from top). If possible, I would appreciate some guidance with navigating which Kits and/or visionOS features might be useful for this task or how to address this 3D situation and map gestures, since I am completely new to this
A: Hi Lukas, this would be a great thing to file a feedback with your specific use cases. http://feedbackassistant.apple.com Thanks
Q: Does Apple provide some Icon from SF Symbols 6 specifically for the gesture comprehension ?
A: Hello! Why yes 👍 we've got "hand.pinch" "hand.tap" "hand.rays" and "hand.palm.facing" in SF Symbols 6
Q: Is it possible to have multiple timelines/actions in a USDZ for calling different animations programmatically? I was trying to use multiple actions in Blender and export them to USDZ so I could call them from code (i.e. run, walk, idle etc.), but couldn't figure it out. I was able to use animations in RCP, but seems like that just cuts up a single timeline.
A: You can have multiple timelines on the same entity; however, I think what you are asking about is multiple animations in the same USD, which is not possible (this is a limitation of the USD format). The workaround is to export all your animations as a single animation, each playing one after the other, and then cut them up in RCP.
Q: How could I stream let's say a flight simulator from the mac studio to Vision Pro and interact with the flight sim, if that's even possible. Part two is if not possible to interact how could I make this possible?
A: Hi Tibor, while this might be possible, there are a lot of comfort-based questions you'd need to think through before implementing such a feature. The visionOS approach would involve Compositor Services. You can find more info here: https://developer.apple.com/documentation/compositorservices
Q: I'm building an app that allows users to re-create their room with GenAI. Is it possible to bound .usdz models to real-life user's environment objects (any objects, not just wall, table, window) without doing their pre-scanning? So it could be any of the objects and doesn't know about exact objects in advance. That's build just on the user's session. Is it technically possible right now?
A: Hi Danil, sounds great! You can get world anchors while the person is scanning the room. You can use those anchors to attach the entities your app creates. You can find more info about world anchors here: https://developer.apple.com/documentation/arkit/worldanchor
Q: Fingers are ok, but when are we gonna get some real controllers for AVP? :)
A: Hi Steve, visionOS does support the GameController framework (https://developer.apple.com/documentation/gamecontroller). This forum post has a code snippet to get you started - https://forums.developer.apple.com/forums/thread/759144?answerId=796201022
Q: Is adding SwiftUI components to a RealityKit scene covered in one of the public WWDC talks?
A: Hi Gen! Yes, there are a few! https://developer.apple.com/videos/play/wwdc2023/10113?time=511 and https://developer.apple.com/videos/play/wwdc2023/10273?time=711 are the first that come to mind. Let me know if that helps!
Q: What is best practice to parse data in and out of a terminal window (or external app using say SSH) to a VisionOS2 app?
A: Hi Jeremy, are you looking to send data to your visionOS app from your Mac? If so, the way to do that would be via a network connection, probably best done via Bonjour. Once connected, your Mac app could send data to the visionOS app that it's connected to.
Q: I'd like to know how to change the camera view in the solar system module of your HelloWorld app demo code, so that it looks at the sun and not from the sun.
A: The "camera" in a VR/MR app is always at the player's eyes (or more accurately, the device position) so it cannot be placed like you can place a camera in a non-spatial app or game. In order to swap the
(sorry pressed enter too early) In order to swap the camera, you'll need to actually change the position of the model in front of the user. So in code, you just place the Sun where the Earth used to be, and vice versa, to achieve the effect you want.
Q: when designing a somewhat complex productivity app for visionOS, is it better to keep everything limited to one window, or are there situations where it's better to put stuff in separate windows (I'm talking about 2D views strictly, not about 3D objects or views here), in either case, why? and in the latter case specifically: what's a good way to make it clear that multiple windows still relate to the same app?
A: Great question! Keeping info in one window helps people manage all their content in the same space, so we tend to encourage productivity apps to start off as one window. It can make sense to add an additional window (though we don't want to add TOO many, as then there's a ton of window management) if you need the space to compare two large pieces of content. For example, in Mail we open a new window when someone is composing so they can reference previously sent emails as they compose.
Q: Thanks Bill. It's for integrating VisionOS with ROS2. I'll look into that. If you have available developer links on that topic, please share. Cheers :)
A: Super cool! From a robot? That's glorious, looking forward to seeing that out in the world :)
https://developer.apple.com/documentation/network
https://developer.apple.com/videos/play/wwdc2022/110339
Also feel free to follow up on the forums if you get started with that and need further help. http://forums.developer.apple.com
Q: API for detecting a window being closed
A: Use ScenePhase in SwiftUI (active, inactive, background). On visionOS, if you capture it in the App, it tracks the entire app; if you capture it in a view, it tracks that view's scene. To know whether a window was closed, the key is to capture scenePhase from within a view, not within the app.
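A minimal sketch of reading scenePhase inside a view so it reflects that window's scene only:

```swift
import SwiftUI

struct TrackedWindowView: View {
    @Environment(\.scenePhase) private var scenePhase

    var body: some View {
        Text("Window content") // placeholder
            .onChange(of: scenePhase) { _, newPhase in
                if newPhase == .background {
                    // This window was closed or moved to the background.
                }
            }
    }
}
```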
Q: I wonder how the ground plane in which a user can move around is defined? How large is this? How do I manipulate it?
A: Hi Peter, you'd want to get world anchors, and if I understand your question correctly you'd want the floor. For more info on that you could start here: https://developer.apple.com/documentation/arkit/worldanchor
Q: Would it be possible to transform a portal into a fully immersive space with a seamless transition?
A: Hi Jacopo, if you are wondering how you can move entities between coordinate spaces in RealityKit, then please see this article:
https://developer.apple.com/documentation/realitykit/transforming-entities-between-realitykit-coordinate-spaces
Q: There are many scattered tutorials for visionOS, and the documentation is good. But the main problem is that there is no step-by-step path for someone who wants to learn visionOS from scratch and has no background in ARKit and RealityKit. Can you suggest a step-by-step path?
A: Hi Milad. I think a good starting point would be the sample code in this section of the developer documentation:
https://developer.apple.com/documentation/visionos/introductory-visionos-samples
Q: What are some best practices for locomotion in an immersive environment that is meant to be explored? like a first person VR game essentially
A: Great question! As with all VR experiences, moving inside 3D space can be uncomfortable if it happens without the person being directly in control of that motion. A few things can help: showing your game in progressive immersion can reduce FPV motion sickness, since it limits the amount of movement happening in your periphery as you move. You should avoid moving, or having the player move, the entire ground plane underneath them automatically or through a hand gesture – this feels very disorienting in VR.
Many game developers who want to move people through a space automatically use the trick of placing people in a vehicle (like a boat or car), which helps people understand the metaphor that something else is controlling their movement. We do recommend reducing the speed of automatic movements.
Q: Is there a code example for share play in an mixed immersive example?
A: Yes! Check out the TabletopKit sample app, as it uses SharePlay: https://developer.apple.com/documentation/tabletopkit/tabletopkitsample
Q: We have an app that renders models using Metal. Additionally, we utilize ray tracing for highly precise interactions with the graphics. Is there a plan to enable rendering with Metal in an AR environment? We are considering re-implementing the rendering with RealityKit, but it would require significantly more effort.
A: Metal rendering in an AR environment is possible, check out our sample app here and associated documentation: https://developer.apple.com/documentation/compositorservices/interacting-with-virtual-content-blended-with-passthrough
Q: What would be the good practice to synchronize content (such as position, rotation) of an object in a group session share play experience?
A: Your app is responsible for syncing entity transforms using messages. What's best is use case dependent. Here's a link that covers sending data - https://developer.apple.com/documentation/groupactivities/synchronizing-data-during-a-shareplay-activity Here's a forum post with general guidance on SharePlay in visionOS https://developer.apple.com/forums/thread/756301?answerId=789577022#789577022
Q: Linda, i respectfully disagree about navigation in VR being disorienting. When done properly (see Half Life ALYX, still the Gold Standard, for example) this is an effective way to navigate an immersive space. One of the very first things that took me out of Immersive was when I tried the Disney app and wanted to move in the Monsters Inc. scare floor. The boundary passthrough cut out too quickly. AVP is, after all, a "VR" headset with passthrough capabilities.
A: Half-Life: Alyx uses a controller for navigation, so we're talking about different methods for locomotion than when people are using hand gestures.
Q: How can we zoom out of a solar system in your World app? Like zooming out to see the same entities at a larger and larger scale?
A: A good way to fake a zoom-out effect is to scale the subject. The "camera" in visionOS is always at the player's eyes (or more accurately, the device position), so it's not possible to move the camera the way you might in a non-spatial app or game.
Q: I fully realize Apple is pushing hand gestures, but both game and individual controllers should co-exist. :) Also, I would LOVE to learn how to program finger/hand gestures to "throw" our presence (i.e. snap teleport) in an immersive volume. Again, lots of us who already have VR legs need this type of navigation/teleportation functionality. :)
A: This should be possible! Check out our Happy Beam sample for an example of a custom gesture (players make a heart shape with their hands). These concepts should apply nicely to a custom gesture for navigating through a scene https://developer.apple.com/documentation/visionos/happybeam
Q: For object collision and environment analysis, the Vision Pro has some core object detection (not recognition). My question is: can we anchor content to an object? Here's an example use case: we detect a barcode (using the latest APIs) and then we want to attach a label next to the barcode, but as we move or as the object moves, the label tracks the object. Is this supported out of the box? Thank you so much for this great presentation and Q&A.
A: Hi Pierre - visionOS supports object tracking, image tracking and barcode detection (with a license for the Enterprise APIs for visionOS). All of those APIs allow you to anchor content to the real-world item, but
the tracking is low frequency, ~1 Hz...
Here are some useful links: https://developer.apple.com/documentation/visionos/exploring_object_tracking_with_arkit https://developer.apple.com/documentation/visionos/tracking-images-in-3d-space https://developer.apple.com/documentation/visionos/locating-and-decoding
Finally the easiest way to anchor content to an image or an object is to use an AnchorEntity - https://developer.apple.com/documentation/realitykit/anchorentity
Q: Linda - we are able to use PS5 controller at system level in Vision Pro. I'd like to be able to use this to navigate in a fully immersive environment, especially to show clients their architectural spaces.
A: Yes we did a video this year talking through the different game input options (including controllers) that are available to developers right now. Anything else we encourage feedback on :) https://developer.apple.com/videos/play/wwdc2024/10094
Q: Also, the included 3D environments are great, but there needs to be a way for us to create and upload our own. Think of this like our own Oculus/Meta "home" spaces. They should be as ubiquitous as 2D desktop wallpaper, which is editable by the user.
A: Hi Steve - You can't create a custom system wide environment, but you can create a custom environment for your app. Here's a session that explains how - https://developer.apple.com/videos/play/wwdc2024/10087/