Here are some things that seem potentially valuable that we could consider producing next year, roughly ordered by how useful I think they would be, with the most useful at the top. It's possible I forgot some things.
Right now, I observe that not many people are making any XR thing on the web. Part of that might be that WebXR is new, but part is that the experience of building for it is relatively bad. Naively, my guess is that three.js and A-Frame tooling and performance (along with WebXR browser features) are where most of the gap is, and making them more competitive with Unreal or Unity would be attacking that problem. Something like the Unity editor, for example, would go a long way. Improving networked-aframe could be useful. Adding browser features to make the experience of using WebXR smoother could be useful. I defer to Robert/John/Dom/Brian's judgement on specifics.
Someone at all-hands mentioned that Valve and Oculus are kind of waiting to see whether anyone produces actually first-class WebVR content in order to decide whether to do a good job supporting it. We could create and improve WebXR prototypes in A-Frame with the goal of trying to make the best possible WebXR tech demos and the best possible instructional material for how to make good A-Frame experiences, and try to raise the quality bar and lower the barrier to entry.
We can continue to improve and productionize Janus until it is a packaged, free solution for high-performance web game networking, and then try to evangelize it among the game development and VR development communities. We have strong evidence that this kind of product is useful: Photon was useful to Unity programmers, and we are basically building a version of Photon that is better in most ways.
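To make the comparison concrete, the core thing Photon-style products sell is a room abstraction: peers join a named room and messages fan out to everyone else in it. The sketch below is purely illustrative (all names are mine, not Janus's API), and it keeps everything in memory; a real server would carry this over WebSockets or WebRTC data channels.

```typescript
// Hypothetical sketch of the room/broadcast abstraction a packaged
// networking solution exposes. In-memory only, for illustration.

type Handler = (senderId: string, message: string) => void;

class RoomRelay {
  private rooms = new Map<string, Map<string, Handler>>();

  // Add a peer to a room, creating the room on first join.
  join(room: string, peerId: string, onMessage: Handler): void {
    if (!this.rooms.has(room)) this.rooms.set(room, new Map());
    this.rooms.get(room)!.set(peerId, onMessage);
  }

  leave(room: string, peerId: string): void {
    this.rooms.get(room)?.delete(peerId);
  }

  // Deliver a message to every peer in the room except the sender.
  broadcast(room: string, senderId: string, message: string): void {
    const peers = this.rooms.get(room);
    if (!peers) return;
    for (const [id, handler] of peers) {
      if (id !== senderId) handler(senderId, message);
    }
  }

  peerCount(room: string): number {
    return this.rooms.get(room)?.size ?? 0;
  }
}
```

In practice the messages would be serialized entity state (transforms, component updates), which is roughly what networked-aframe layers on top of a transport like this.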
This would basically be state storage, auth, and maybe recording/mirroring. It's a prerequisite for making it easy for people to build Altspace-like things in WebXR, but the above three might also be practical prerequisites for that and are more general, so I care more about them than I care about doing this.
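For a sense of what "state storage plus auth" means at its smallest: per-room state that outlives any individual peer's connection (so late joiners can fetch the current world), with writes gated by an auth check. Everything in this sketch is a hypothetical shape I'm assuming, not an existing service; a real version would sit on a database and a proper identity provider.

```typescript
// Hypothetical sketch of a persistent world-state service with a
// token-gated write path. Names and API shape are illustrative only.

class WorldStateStore {
  private state = new Map<string, Map<string, unknown>>();
  private tokens = new Set<string>();

  // Register a token representing an authenticated client.
  authorize(token: string): void {
    this.tokens.add(token);
  }

  // Write a value into a room's state; rejected without a valid token.
  set(token: string, room: string, key: string, value: unknown): boolean {
    if (!this.tokens.has(token)) return false;
    if (!this.state.has(room)) this.state.set(room, new Map());
    this.state.get(room)!.set(key, value);
    return true;
  }

  // Reads are open so that late joiners can sync the current room state.
  get(room: string, key: string): unknown {
    return this.state.get(room)?.get(key);
  }
}
```

Recording/mirroring would then just be a timestamped log of the accepted writes, replayable into a fresh store.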
We believe that having a way for a VR world to be accessible outside of a VR headset will dramatically increase its utility; as evidence, consider the relative success of Tau and of desktop mode in Altspace. But we know little about the best way to make a VR world simultaneously and usefully accessible via AR, touchscreen, or text. We could start building prototypes to try to figure out what this would look like.
This is tempting since it's something that Mozillians might enjoy using, so there is a natural audience and a natural way to dogfood it. We could spend arbitrary amounts of time trying to improve this until we actually thought it was worth using regularly for meetings, and doing so would teach us more about the limits of our networking and client stacks.
This was on our original roadmap, but I wasn't as excited about it then as I was about most of the above, and I'm no more excited now. We could do it if we liked, though. It just doesn't seem like low-hanging fruit for making people's WebVR experiences better right now.
This seems valuable but hard. I don't know how to attack this.
This is down here because I personally concluded that I don't know how to make any useful VR-web-browser-like thing on current platforms. The things people suggested in the planning meeting at all-hands did not sound like useful products to me. 90% of my use of the web is about reading and writing text, and VR headsets are bad at reading and writing text, so I don't know how I can make a good web experience inside a VR headset. I am also confused about the distinction between a hypothetical VR web browser we would make and the first-party, privileged "VR window managers" that Steam, Oculus, and Microsoft are packaging for their platforms; are those just better versions of anything we could do? Anyway, this task is shrouded in mystery as far as I'm concerned.