---
layout: post
title: "The smooth resize test"
date: 2019-06-17 07:26:42 -0700
categories:
---
When I was young, as we traveled my dad had a quick test for the quality of a Chinese restaurant: if the tea wasn't good, chances were the food wouldn't be great either. One time, we left before ordering, and I don't think we missed out on much.
Today is an exciting point in the evolution of native GUI in Rust. There is much exploration, and a number of promising projects, but I also think we don't yet know the recipe to make GUI truly great. As I develop my own vision in this space, druid, I hope the various efforts will learn from each other and that an excellent synthesis will emerge, more than I hope that druid itself will simply win.
In my work, I have come across a problem that is as seemingly simple, yet as difficult to get right, as making decent tea: handling smooth window resizing. Almost no GUI toolkits get it right, with some failing spectacularly. This is true across platforms, though Windows poses special challenges. It's also pretty easy to test (as opposed to sophisticated latency measurements, which I also plan to develop). I suggest it become one of the basic tests to evaluate GUI technology.
Why this particular test? Among other things, it's at the confluence of a number of subsystems, including interfaces with the underlying desktop OS. It also exposes some fundamental architectural decisions, especially regarding asynchrony.
The smooth resizing test also exposes issues at multiple layers – the staging of layout vs drawing within the GUI toolkit, whether requests from the platform can be handled synchronously, and complex interactions between graphics and window management in the platform itself, which the app may be able to control to at least some extent.
In typical immediate mode GUI (imgui), both layout and drawing happen in the same call hierarchy. To keep things reasonably deterministic, it's common for layout to be computed and stored, then for drawing to be based on the *last* frame's layout — in other words, a one-frame delay for layout to take hold.
I use imgui as an example because this phenomenon is well known, and is a tradeoff for the simplification that imgui brings. But it can happen in any system where there isn't rigorous staging of layout and drawing. To do this right, before any drawing occurs, there needs to be a layout phase where the size and position of each widget is determined, then drawing. Most traditional GUI toolkits get this right.
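To make the distinction concrete, here is a minimal sketch contrasting the two orderings. The `Widget`, `Size`, `layout`, and `draw` names are illustrative stand-ins, not any real toolkit's API; the point is only the staging of the two phases.

```rust
// Illustrative sketch: one-frame-delayed layout (imgui style) vs.
// rigorous two-phase staging. All names here are hypothetical.

#[derive(Clone, Copy, Debug, PartialEq)]
struct Size { w: u32, h: u32 }

struct Widget;

impl Widget {
    // Phase 1: compute layout for the current window size.
    fn layout(&self, window: Size) -> Size {
        Size { w: window.w, h: window.h }
    }
    // Phase 2: draw using a previously computed layout.
    fn draw(&self, layout: Size) -> Size {
        layout
    }
}

fn main() {
    let widget = Widget;
    let last_layout = Size { w: 800, h: 600 };

    // The window was just resized:
    let resized = Size { w: 900, h: 600 };

    // imgui style: draw with last frame's layout, recompute afterward.
    let drawn_imgui = widget.draw(last_layout); // stale by one frame
    assert_ne!(drawn_imgui, resized);

    // Staged style: layout first, then draw, within the same frame.
    let drawn_staged = widget.draw(widget.layout(resized));
    assert_eq!(drawn_staged, resized);

    println!("imgui drew {:?}, staged drew {:?}", drawn_imgui, drawn_staged);
}
```

During a resize, the one-frame delay means each painted frame lags the window outline, which is exactly what the smooth resize test makes visible.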
A simple game is written imperatively as a game loop. Each iteration of the loop processes input, updates the game state, and renders graphics. A refinement is to treat these steps as stages in a pipeline, at the very least splitting graphics rendering into a CPU-bound part (creating a command buffer) and a GPU-bound part. These stages are often coupled asynchronously, meaning that the game loop can turn as soon as the command buffer is handed off to the GPU. Making the pipeline asynchronous can help increase throughput, exploiting parallelism between CPU and GPU, and potentially across cores as well. When it works, it's also a nice architectural simplification - it's easier to reason about a single stage in the pipeline.
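The asynchronous hand-off can be sketched with an ordinary channel: the game loop records a command buffer and sends it to a consumer thread standing in for the GPU, then immediately starts the next iteration. The `CommandBuffer` type and the stages are illustrative, not a real graphics API.

```rust
// Sketch of an asynchronously coupled pipeline: the game thread builds
// command buffers and hands them off over a channel, turning again
// while the "GPU" thread consumes the previous buffer.

use std::sync::mpsc;
use std::thread;

struct CommandBuffer {
    frame: u32,
}

// CPU-bound stage: record draw commands for this frame (stand-in).
fn build_commands(frame: u32) -> CommandBuffer {
    CommandBuffer { frame }
}

fn main() {
    let (tx, rx) = mpsc::channel::<CommandBuffer>();

    // Stand-in for the GPU: drains command buffers as they arrive.
    let gpu = thread::spawn(move || {
        let mut presented = 0;
        while let Ok(cmd) = rx.recv() {
            // "Submit" the buffer; here we just record the frame number.
            presented = cmd.frame;
        }
        presented
    });

    // Game loop: input, update, render (CPU part), hand off, repeat.
    for frame in 1..=3 {
        let cmd = build_commands(frame);
        tx.send(cmd).unwrap(); // returns immediately; the loop can turn
    }
    drop(tx); // close the channel so the GPU thread exits

    let last = gpu.join().unwrap();
    assert_eq!(last, 3);
    println!("last presented frame: {}", last);
}
```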
The "swap chain", a mainstay of high performance graphics today, continues the pipeline concept further. Instead of drawing into a single buffer, or even having two (double buffering), a swapchain has a somewhat arbitrary number of buffers, and on each "presentation" swaps a new one to the front, freeing the last one for rendering a new frame. A long swapchain smooths out framerate when there are variations in render time, but at the expense of more latency.
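A toy model of the bookkeeping may help: a ring of N buffers, where each "present" swaps the just-rendered buffer to the front and frees the oldest slot for reuse. This models the ring structure only, not any real graphics API.

```rust
// Toy swap chain: a ring of buffers; rendering fills the next free
// slot, and presenting it advances the ring. Not a real API.

struct SwapChain {
    buffers: Vec<u32>, // each entry holds the frame last rendered into it
    next: usize,       // index of the buffer to render into next
}

impl SwapChain {
    fn new(len: usize) -> Self {
        SwapChain { buffers: vec![0; len], next: 0 }
    }

    // Render a frame into the next buffer and present it;
    // returns the index of the buffer that went to the front.
    fn render_and_present(&mut self, frame: u32) -> usize {
        let i = self.next;
        self.buffers[i] = frame;
        // Presentation frees the following slot for the next frame.
        self.next = (i + 1) % self.buffers.len();
        i
    }
}

fn main() {
    let mut chain = SwapChain::new(3);
    for frame in 1..=5 {
        let front = chain.render_and_present(frame);
        println!("frame {} presented from buffer {}", frame, front);
    }
    // With 3 buffers, frames 4 and 5 reused buffers 0 and 1.
    assert_eq!(chain.buffers, vec![4, 5, 3]);
}
```

The longer the ring, the more rendered-but-unpresented frames can be in flight at once — which is exactly the latency cost noted above.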
Today, the flip presentation model, in which swapchain buffers are shared directly with the compositor rather than copied, is recommended on Windows for performance, but older models, involving copying of buffers to the "redirection surface", are still supported for compatibility.
All GUI frameworks are based on an "event loop," which is quite similar to the game loop. Unfortunately, the event loop often has complex, messy requirements, especially around threading and reentrancy. I think this is mostly for legacy reasons, as the foundations of UI were laid well before threading entered its modern age.
In an attempt to simplify the programming model, [winit] ran its own event loop in one thread, and the app logic in another, with asynchronous coupling using channels between them. For the typical game pipeline, this was indeed a simplification, as the application thread could just be a normal Rust thread, without having to worry about which calls are thread-safe.
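The two-thread shape can be sketched with a standard channel: one thread stands in for the platform event loop and forwards events, while the app logic runs as an ordinary Rust thread. The `Event` enum and loop body are simplified stand-ins, not the actual winit API.

```rust
// Sketch of the two-thread model: a platform-side sender and an
// app-logic receiver, coupled asynchronously by a channel.

use std::sync::mpsc;
use std::thread;

#[derive(Debug)]
enum Event {
    Resized(u32, u32),
    CloseRequested,
}

fn main() {
    let (tx, rx) = mpsc::channel::<Event>();

    // App logic: an ordinary Rust thread, with no platform
    // thread-safety rules to worry about on this side.
    let app = thread::spawn(move || {
        let mut last_size = (0, 0);
        while let Ok(event) = rx.recv() {
            match event {
                Event::Resized(w, h) => last_size = (w, h),
                Event::CloseRequested => break,
            }
        }
        last_size
    });

    // Stand-in for the platform event loop delivering events.
    tx.send(Event::Resized(800, 600)).unwrap();
    tx.send(Event::Resized(810, 600)).unwrap();
    tx.send(Event::CloseRequested).unwrap();

    let size = app.join().unwrap();
    assert_eq!(size, (810, 600));
    println!("final size: {:?}", size);
}
```

The catch, as the smooth resize test reveals, is that resize events on some platforms must be answered synchronously on the event-loop thread, which this asynchronous coupling cannot do.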