A general collection of my thoughts on operating system design. Almost definitely horribly organized. I'll probably sort these into a blog post eventually.
http://www.catb.org/esr/writings/taoup/html/ch20s03.html
I'd like to add an addendum: binary file formats, or at least poorly documented binary file formats. The amount of time I've spent trying to get a lead on SketchUp's format...
Plan 9 is obviously nice.
My ideal for the future is to develop a file system remote interface (a la Plan 9) and then have it implemented across the Internet as the standard rather than HTML. That would be ultimate cool.
-- Ken Thompson
So there are ideals and implementation details: what we want, and how to make it with the least amount of effort. Let's go over some of the ideals.
mosh has proven the concept of a state synchronization protocol, and I think it's important for multi-user environments, like the type I hope will dominate VR/AR.
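Something in the spirit of mosh's approach, sketched below. The names and wire format here are my own invention, not mosh's actual protocol: each side keeps numbered state snapshots and sends diffs against the newest state the peer has acknowledged, so a single datagram can always bring a peer up to date.

```python
import json

class SyncedState:
    def __init__(self):
        self.num = 0                  # current state number
        self.state = {}               # the shared state itself
        self.history = {0: {}}        # snapshots the peer might still diff against
        self.peer_acked = 0           # newest state number the peer confirmed

    def make_update(self):
        """Diff current state against the last peer-acknowledged snapshot."""
        base = self.history[self.peer_acked]
        diff = {k: v for k, v in self.state.items() if base.get(k) != v}
        return json.dumps({"from": self.peer_acked, "to": self.num, "diff": diff})

    def apply_update(self, packet):
        """Apply a diff; safe to receive duplicates or stale packets."""
        msg = json.loads(packet)
        if msg["to"] <= self.num:     # stale or duplicate: ignore
            return
        self.state.update(msg["diff"])
        self.num = msg["to"]

    def mutate(self, key, value):
        self.state[key] = value
        self.num += 1
        self.history[self.num] = dict(self.state)

# server mutates twice; one packet carries the net diff, so lost
# intermediate packets don't matter
server, client = SyncedState(), SyncedState()
server.mutate("cursor", [3, 7])
server.mutate("cursor", [4, 7])
client.apply_update(server.make_update())
assert client.state == server.state
```

The real thing would also prune old snapshots and handle deleted keys; this just shows the shape of it.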
I've never met a GUI toolkit that wasn't shit. HTML/CSS seem like they might be one of the least painful, if you're good at them. But they're not exactly consistent across webpages. Android seems to let users consume stuff, similar to web pages, but with more consistency. It still sucks for creating content, although I'm sure part of that is that it's limited to touch devices.
Look at complicated UI-based software, such as Blender. It can't have many metaphors in common with other programs, because for the most part what it's trying to do is unique and hard.
Hell, SketchUp is predicated on using different UI paradigms to edit the same type of 3D data.
So yeah, UIs are fucked. We need standardization, and we need the potential to implement completely custom stuff. Small tools, that do one thing well, connected to common state-synchronized data? Tristan (and other power users, I'm sure) relies heavily on standardized metaphors for keyboard shortcuts in Blender. There's also mode switching in vim. We want something flexible enough that you could do either, or something we haven't thought of yet, ideally as a third-party module.
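To make that concrete, here's a rough sketch of the kind of third-party input module I mean (all names hypothetical): bindings are plain data mapping (mode, key) to an action, so a vim-style modal scheme and a Blender-style shortcut scheme are both just configurations of the same core.

```python
class InputMapper:
    def __init__(self):
        self.mode = "normal"
        self.bindings = {}            # (mode, key) -> callable

    def bind(self, mode, key, action):
        self.bindings[(mode, key)] = action

    def press(self, key):
        action = self.bindings.get((self.mode, key))
        if action:
            action(self)

mapper = InputMapper()
# a vim-flavoured module: 'i' enters insert mode, Esc leaves it
mapper.bind("normal", "i", lambda m: setattr(m, "mode", "insert"))
mapper.bind("insert", "Esc", lambda m: setattr(m, "mode", "normal"))
# a blender-flavoured module: 'g' grabs in normal mode
mapper.bind("normal", "g", lambda m: print("grab selected object"))

mapper.press("g")      # -> grab selected object
mapper.press("i")      # now in insert mode; 'g' does nothing here
mapper.press("g")
```

The point is that neither scheme is privileged: both ship as data plus callbacks, which is what would let a third party invent the thing we haven't thought of yet.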
One proposal, after discussing it with Tristan Trim and Traverse the Elder, is to just use a dead-simple OpenGL scene graph. 2D scenes would be generated by flattening an ortho view of 3D content. Other things we discussed were time-dependent animations/transformations, to keep the scene graph in sync between multiple devices and users. Whenever you make a change, you put time constraints on it: either null (it doesn't matter), an offset (take exactly this long), or a range dependent on the server's tick rate. So a timestamp plus a value, and if it takes longer than that to get the news that the server has changed, you simply skip smooth rendering and jump to that point. It looks less good, but that only really matters in the most realtime of situations.
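Roughly what I'm imagining for the timing constraints (hypothetical structures, not a worked-out protocol; I've collapsed the tick-rate range into an absolute deadline for brevity): if the update arrives with time left, the client animates toward the target; if the deadline already passed, it skips smoothing and snaps.

```python
import time

def apply_transform(node, target, constraint, now=None):
    now = time.time() if now is None else now
    kind, arg = constraint
    if kind is None:                       # null: no timing requirement, snap
        node["pos"], node["anim"] = target, None
    elif kind == "duration":               # offset: take exactly `arg` seconds
        node["anim"] = {"to": target, "ends": now + arg}
    elif kind == "deadline":               # timestamp: change must land by `arg`
        if now >= arg:
            node["pos"], node["anim"] = target, None   # too late: jump
        else:
            node["anim"] = {"to": target, "ends": arg}

node = {"pos": (0, 0, 0), "anim": None}
apply_transform(node, (1, 0, 0), ("duration", 0.25))             # smooth, 250 ms
apply_transform(node, (2, 0, 0), ("deadline", time.time() - 1))  # late: snap
print(node["pos"])    # (2, 0, 0)
```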
One reason why we really want to separate the presentation from the logic is latency. Latency generally isn't that important, but for VR headsets it can be. If you're serving content over the internet, you really want all the tiny changes to head tracking to happen instantly, not waiting on the server.
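Concretely, something like the loop below (hypothetical API, just illustrating the split): the render loop reads the headset pose locally every frame and drains whatever scene updates the network has managed to deliver, so a slow server stalls content but never head tracking.

```python
import queue

network_updates = queue.Queue()    # filled by a network thread in a real system

def read_head_pose():
    return (0.0, 1.6, 0.0)         # stand-in for a local sensor read

def render_frame(scene, pose):
    pass                           # stand-in for the actual draw call

scene = {}

def frame():
    pose = read_head_pose()        # local, sub-millisecond: never blocks
    while True:                    # apply whatever the server has sent so far
        try:
            scene.update(network_updates.get_nowait())
        except queue.Empty:
            break
    render_frame(scene, pose)      # camera always reflects the newest pose

frame()
```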