Would it be beneficial to wrap Toolbar buttons in this GlassEffectContainer? Or is Toolbar optimized by default?
Controls in a SwiftUI toolbar will be correctly handled for you behind the scenes! If you're placing the controls yourself using other layout primitives, or something like safeAreaBar though, that's when adding a GlassEffectContainer becomes important!
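For the manually placed case, here's a minimal sketch (assuming the iOS 26 GlassEffectContainer and glassEffect() APIs; the buttons and layout are illustrative):

```swift
import SwiftUI

// Two custom-placed controls grouped in one container so their glass
// shapes can be rendered (and blended) together, rather than as
// independent effects.
struct FloatingControls: View {
    var body: some View {
        GlassEffectContainer(spacing: 12) {
            HStack(spacing: 12) {
                Button("Back", systemImage: "chevron.left") { }
                    .glassEffect()
                Button("Forward", systemImage: "chevron.right") { }
                    .glassEffect()
            }
        }
    }
}
```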
Would the code from this session be available online? and would the videos still be available to watch later?
Hello! The code from this session will not be made available later, since it's only being used to illustrate how to use Instruments to optimize your SwiftUI app. Today's sessions are being recorded and will be available in the future.
Is there a good pattern for passing an action closure to a view while minimizing the impact on update performance, given that closures are hard to compare? Is there a more performant alternative?
Try to capture as little as possible in closures, e.g. by not relying on implicit captures (which usually capture self and therefore make the closure depend on the whole view value). Instead, capture only the properties of the view value that you actually need in the closure.
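A plain-Swift sketch of the difference (the types and handler are illustrative; the same capture rules apply to SwiftUI action closures):

```swift
// An action closure that implicitly captures `self` copies the whole
// value; a capture list narrows that to the one property the closure
// actually needs.
struct RowModel {
    var id: Int
    var title: String
    var badge: Int

    // Implicit capture: the closure holds a copy of all of `self`.
    func actionCapturingSelf(_ handler: @escaping (Int) -> Void) -> () -> Void {
        return { handler(self.id) }
    }

    // Explicit capture list: the closure depends only on `id`.
    func actionCapturingID(_ handler: @escaping (Int) -> Void) -> () -> Void {
        return { [id = self.id] in handler(id) }
    }
}
```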
Any reason why View Debugging → Rendering is disabled on Xcode? Does it only work on an actual device and not on a simulator?
I recommend filing a feedback to request an enhancement to this functionality to provide support for Simulator Run Destinations. Please share the FB number here so we can grab it.
In our app, a Combine Published object updates the UI. We can use either receive(on: RunLoop.main) or receive(on: DispatchQueue.main), and both appear to work. Is there a recommended choice between the two?
Both options will schedule work onto the main thread and allow you to update your UI. The decision between the two depends on the exact details. Using DispatchQueue.main will result in your work executing on the main thread as soon as possible. Using RunLoop.main will schedule work onto the run loop, which can delay your UI updates. Consider a scenario where you are scrolling: updating your UI frequently while scrolling can degrade scrolling performance, so in this scenario scheduling onto the RunLoop could result in a smoother scrolling experience.
However, if you need the UI to update as quickly as possible, scheduling onto DispatchQueue.main is the best choice.
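A sketch of the trade-off using a Combine timer pipeline (the model and formatting are illustrative):

```swift
import Combine
import Foundation

final class ClockModel: ObservableObject {
    @Published var text = ""
    private var cancellable: AnyCancellable?

    init() {
        cancellable = Timer.publish(every: 1, on: .main, in: .common)
            .autoconnect()
            // RunLoop.main: delivery waits for the run loop, so updates
            // pause during tracking (e.g. while scrolling), which can
            // keep scrolling smooth. Swap in DispatchQueue.main instead
            // for the lowest-latency delivery.
            .receive(on: RunLoop.main)
            .map { $0.formatted(date: .omitted, time: .standard) }
            .sink { [weak self] in self?.text = $0 }
    }
}
```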
So by default, scroll views have a transparent background, even though we might see it as white/black depending on display mode?
Thanks for the great question! Yes, by default SwiftUI ScrollViews have a transparent background. Certain other scrollable views, such as List, may have additional backgrounds.
Is there any way to get text-based logs out of Instruments and the SwiftUI template for the various drill-downs (cause and effect) that I see in the UI? I want this for feeding into AI chat sessions. I have had some success using Copy / Copy All with AI, but I would like a more robust workflow.
In addition to copying the data out of the detail views in the Instruments window, you can go to View → Show Recorded Data and find the tables starting with "swiftui-" to access the raw recorded data. And finally, you can use the xctrace export command on a recorded trace file to export data in an XML format. If that doesn't fit your needs, please use Feedback Assistant and explain what you're looking for so we can take a look.
Should we ever embed ScrollViews or Lists in GlassEffectContainers if the items contained are using Liquid Glass?
GlassEffectContainer should be applied to conceptually grouped UI elements, because the glass shapes of those elements can blend together. The contents of a scroll view or list are almost never all part of the same conceptual group of elements, so a GlassEffectContainer shouldn't wrap your list or scroll view.
Is there something to consider regarding performance or battery usage when using new Liquid Glass effects?
Yes, tune in to the presentation for great info about how to optimize your app for the new design.
Any guidance for understanding the causes of battery usage in more detail than the audio/networking/processing/display/other breakdown in Xcode organizer? Like, how can I figure out more specifically what networking behavior, etc. is involved?
Thanks for the question! If you're looking to understand field data, I'd recommend using MetricKit: MXNetworkTransferMetric for networking, and the respective other metrics for CPU, Display, GPU, Location, and more. For profiling at-desk, you can use Xcode gauges or Instruments templates. New in Instruments 26, Power Profiler can show you a subsystem-level breakdown of your application's power usage. Please tune in for the currently running presentation that describes this tool in more detail, or watch the "Profile and optimize power usage in your app" WWDC25 session.
Kudos for awesome tools (Instruments) - I use it all the time to help game studios optimize their games. For battery drain I'd really love to get these battery stats (wattage drain) basically at the granularity of GPU counters (i.e. MMU / SM / ALU, etc.) plus CPU (P/E-core drain while under load). Thanks!
Thank you, we really appreciate your feedback. That's an interesting request! We'd appreciate if you could file a Feedback requesting this data and what you're trying to do with it so that we understand your requirements better. Use feedbackassistant.apple.com and choose Developer Tools → Instruments problem area so that the bug gets directly to our team.
Does the increase in latency scale linearly with amount of information in the context window for Foundation Models?
Yes! Latency does scale roughly linearly with context window size. Remember though that you can mitigate some of this latency by keeping the prefix of your prompt consistent so that you get the benefits of the prefix caching.
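One way to keep the prefix consistent is to put the long, shared part of the prompt into the session's instructions, so only a short suffix varies per request. A sketch (the instructions and helper function are illustrative):

```swift
import FoundationModels

// The instructions form a stable prompt prefix shared by every request
// on this session, so they can benefit from prefix caching.
let session = LanguageModelSession(
    instructions: "You summarize customer reviews in one short sentence."
)

// Only the review text varies from call to call.
func summarize(_ review: String) async throws -> String {
    let response = try await session.respond(to: review)
    return response.content
}
```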
How do you improve perceived performance from the Foundation Models when you need to get a JSON output response? The demonstrations were showing plain text responses and not structured output. Can a JSON response be streamed?
With structured textual data like JSON, you likely won't get anything that parses correctly until the model is done streaming. If you just need structured data, you could instead try using a Generable type that contains all the information you need.
Generable does support streaming, and guarantees correctness at all phases of generation.
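A sketch of streaming a Generable type instead of raw JSON (the Itinerary type and prompt are illustrative, and the exact shape of the stream element has varied between SDK seeds):

```swift
import FoundationModels

@Generable
struct Itinerary {
    @Guide(description: "A short, catchy trip title")
    var title: String
    var activities: [String]
}

func streamItinerary(session: LanguageModelSession) async throws {
    let stream = session.streamResponse(
        to: "Plan a one-day trip to Tokyo.",
        generating: Itinerary.self
    )
    for try await partial in stream {
        // Fields of the partial value are Optional until generated, so
        // the UI can render whatever has arrived so far.
        print(partial.content.title ?? "…")
    }
}
```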
Can several apps use the on-device model concurrently or is access allowed only by the front app? Besides, is it correct to assume that a single shared copy of the model's weights exists in memory at any time, rather than one copy per process?
Weights will be shared across several processes. Multiple apps can use Foundation Models simultaneously, but requests can get serialized due to resource constraints, so response time can differ depending on the number of applications.
A hang is a noticeable delay in a discrete user interaction, and it's almost always the result of long-running work on the main thread. Long-running work on the main thread can also cause hitches, but for hitches, the threshold is lower. Discrete interaction delays only start becoming noticeable as hangs when the main thread is unresponsive for about 50 ms to 100 ms or longer. However, a delay as small as the length of a single refresh interval - generally between 8 ms and 16 ms, depending on the refresh rate of the display - can cause a hitch. Delays in the render server can also cause a hitch, but usually aren't long enough to cause a hang.
There's a great description of this in https://developer.apple.com/documentation/xcode/understanding-user-interface-responsiveness#Differentiate-hangs-and-hitches
It depends! Some tools within Instruments support Simulator devices, while others require physical devices. That said, to get representative metrics of what your users will experience, we recommend profiling on a physical device. Ideally, you should always try it on the oldest devices supported by your deployment target to understand the lower-bound resource constraints.
So what format did Snap export from THEIR tool so that it could be ingested by Instruments? Where is this format documented?
It looks like Snap used a custom tool to visualize their trace files, and used signposts to add custom intervals to Instruments traces recorded at desk.
When does it make sense (if ever) to use @Binding in a child view to improve performance (vs. using let)? (Assume in this case that the child view does not update the value of the property passed in by the parent).
You should prefer using a let if you don't need to write back to the binding. In most cases reading a binding is equivalent to just passing the value directly, but in certain situations (such as if the binding is not generated directly from a @State), bindings can add additional overhead.
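A sketch of the rule of thumb (hypothetical child views):

```swift
import SwiftUI

// Read-only child: prefer a plain `let` — the child only depends on
// the value itself.
struct TemperatureLabel: View {
    let celsius: Double

    var body: some View {
        Text("\(celsius, specifier: "%.1f") °C")
    }
}

// Read-write child: a @Binding is warranted, because the child needs
// to write back to the parent's state.
struct TemperatureSlider: View {
    @Binding var celsius: Double

    var body: some View {
        Slider(value: $celsius, in: -40...40)
    }
}
```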
Would using a Timer to update my SwiftUI view be costly in terms of performance? Say I want to show the current time in hours, minutes, and seconds, but aside from updating a Text to show the time, I also have other views that rely on how long the timer has been running?
Yeah, this is fine! If you don't need other events to happen in sync with the timer updating, we'd recommend using date-relative text, but if you need multiple UI elements to be in sync with a timer, there's nothing wrong with doing that. As Steven has emphasized though, be sure that you're only causing updates for views which actually need to change with the timer!
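For example, a display-only clock can use date-relative text, while views that must tick together can share one TimelineView schedule (a sketch; the views are illustrative):

```swift
import SwiftUI

struct StopwatchView: View {
    let start = Date()

    var body: some View {
        // Text(_:style:) updates itself every second without
        // invalidating the surrounding view — no Timer needed for a
        // display-only clock.
        Text(start, style: .timer)

        // If several views must stay in sync with the tick, drive them
        // from a single shared TimelineView schedule instead.
        TimelineView(.periodic(from: start, by: 1)) { context in
            let elapsed = Int(context.date.timeIntervalSince(start))
            Text("Elapsed: \(elapsed)s")
            ProgressView(value: Double(elapsed % 60), total: 60)
        }
    }
}
```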
Canvas's performance scaling shouldn't be affected by using it in a visionOS multi-window context!
Thank you for joining us today! We will be sharing tools and strategies to improve your app's performance. If you have more specific questions, we have a team of engineers who can help answer them for you today.
Are there any special tricks for improving performance of extensions, such as system extensions on macOS?
The best tool for profiling system extensions is Instruments. Tools like Time Profiler can help you understand where time is being spent. In order for Instruments to be able to attach to your extension, you should make sure that the debugger can attach to it.
Then, configure Instruments to target your local device and use the "Attach" option in the target chooser. Some tips and tricks on signing can be found in this documentation: https://developer.apple.com/documentation/DriverKit/debugging-and-testing-system-extensions?language=objc. If you need a primer on CPU profiling, we recommend watching the WWDC25 session "Optimize CPU Performance with Instruments".
On Macs, do third-party displays without HDR mean there won't be as many rendering effects applied to Liquid Glass?
Thanks for the great question! Using a non-HDR display would not change the presence of effects applied to Liquid Glass. However, certain effects would be rendered in standard dynamic range instead of high dynamic range.
Using KVO doesn't seem to react dynamically to state changes in the view. However, reading AVPlayer.currentItem's state in iOS 26 does react to state changes without the user leaving and re-entering the view. Is this a bug?
Thank you for your question! Unfortunately it's not a subject we have the expertise to answer today, but it's a great question to post on the Apple Developer Forums.
I'm interested in the best and most effective practices for using Liquid Glass while creating custom TabBars. Which recommendations can you give for creating custom elements like TabBars? The default TabBar has a default animation; is it possible to change it?
Thank you for your question! Today we are only answering general questions about performance. Please consider posting on the Apple Developer Forums to get a reply from the community or an engineer.
Are there any additional considerations for improving performance on visionOS? Will you share those today?
Thanks for the question! Today we will not be presenting visionOS specific performance considerations. The optimizations shared by the presenter for Liquid Glass are applicable across all platforms including visionOS.
What is better (for performance and response quality): 1) more precise (and maybe longer) field names, or 2) short field names but longer @Guide descriptions?
The model will see all of the information you provide it, so I would encourage you to think of them in the same way you would if designing for a human to read. Think of the name like a variable name, and the guide like a doc comment. Would a human be more or less confused if you added more detail to the variable name? How about if that detail was in the doc comment instead?
This isn't supported today, but if that would be a useful feature for your use case, please capture that in Feedback Assistant! Please visit https://developer.apple.com/bug-reporting/ for more information.
Small followup to my original Instruments. Is there any structured description or schema for profiling information we get from SwiftUI or any other data we get out of "Show Recorded Data" in an API reference or header file?
At the top of the output of xctrace export --xpath /trace-toc/run[@number=\"1\"]/data/table[@schema=\"swiftui-updates\"] --input /path/to/trace.trace there is a description of the types and names of the columns. You can change the run number and table schema name to look at a different run or table. Those columns will match what you see in Show Recorded Data.
If you're interested in an API for accessing this information, please file a feedback so we can prioritize that request and understand your use case!
You don't need to store it as @State and it can just be stored in a let or var.
How do you recommend keeping an Observable model object in sync with a backing store, like a database? A private backing var for the getter, with a didSet to propagate bindings to the DB? Or is it just better to write a custom observable object without the macro?
Should you choose to do this yourself, I would strongly encourage you to aggregate changes together at a greater scale than just a single property change before propagating them to the database. In general, reacting synchronously to individual property changes one at a time is not good for performance.
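One way to aggregate is to debounce: cancel a pending save whenever another change arrives, so a burst of edits produces a single write. A sketch in plain Swift (saveToDatabase is a hypothetical hook for your persistence layer):

```swift
import Foundation

final class PersistedModel {
    var name = "" { didSet { scheduleSave() } }
    var notes = "" { didSet { scheduleSave() } }

    private(set) var saveCount = 0
    private var saveTask: Task<Void, Never>?

    private func scheduleSave() {
        // Cancel the previously scheduled save; only the last save
        // scheduled within a burst of edits actually runs.
        saveTask?.cancel()
        saveTask = Task { [weak self] in
            try? await Task.sleep(nanoseconds: 100_000_000)
            guard !Task.isCancelled else { return }
            self?.saveToDatabase()
        }
    }

    private func saveToDatabase() {
        // One write per burst of changes instead of one per keystroke;
        // persist `name` and `notes` here in a single transaction.
        saveCount += 1
    }
}
```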
Can Foundation Models understand different languages? For example, my data and its descriptions are in English, but the language that I want generated would be Danish?
The Foundation Models on-device system model is multilingual, supporting any language that Apple Intelligence supports. Your prompt and target language can differ. To learn more about handling localization, please check out the "Support languages and locales with Foundation Models" documentation article: https://developer.apple.com/documentation/foundationmodels/support-languages-and-locales-with-foundation-models
No, it means that the view's value (think: all stored properties of a view) was equal to the previous view value, and therefore the view's body wasn't evaluated.
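The view-value comparison described here can be sketched as follows (AnalyticsContext is a hypothetical non-rendered dependency):

```swift
import SwiftUI

struct AnalyticsContext: Equatable {
    var sessionID = UUID()
}

// SwiftUI compares a view's stored properties to decide whether to
// re-evaluate body. A custom Equatable conformance lets you ignore
// properties that don't affect rendering.
struct ChartView: View, Equatable {
    let samples: [Double]
    let analytics: AnalyticsContext   // not rendered, ignored below

    // Only `samples` affects body, so only it participates in equality.
    static func == (lhs: Self, rhs: Self) -> Bool {
        lhs.samples == rhs.samples
    }

    var body: some View {
        Text("\(samples.count) samples")
    }
}
```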
Steven talked about how SwiftUI needs to be able to determine if a view has changed and mentioned Equatable properties. Is it worth conforming the view struct to Equatable (and possibly writing a custom conformance)? Would this be different for iOS 26 and previous releases?
(not answered)