This is a light proposal for Shadow DOM (no pun intended). The purpose of this proposal is to flesh out how we can take the current state of Shadow DOM and allow certain aspects of it to be used separately and composed together. The goals are:
- Being able to use CSS encapsulation separately from DOM encapsulation (SSR is one beneficiary of this)
- Being able to enable DOM encapsulation on a previously CSS-only encapsulated node (rehydration)
- Maintaining backward compatibility
The idea of CSS-only encapsulation has been tried before and abandoned with `<style scoped />`. I've been told it was abandoned because it was slow. I'm only speculating, but the main difference between the encapsulation models is that `<style scoped />` applied to its parent and the entire tree below it, and it also had to factor in descendant `<style scoped />` elements.
The way I'm proposing CSS encapsulation works is the same way it does now; it can simply be used without the DOM-encapsulation aspect. For this to work, you need to know the outer boundary (the host) and the inner boundary (the slot). Given that there's already a way to flag the inner boundary (via `<slot>`), we only need a way to signify the outer boundary. I propose using a `composed` attribute on the host.
```html
<div composed>
  <style>
    p { border: 1px solid blue; }
  </style>
  <p>
    <slot></slot>
  </p>
</div>
```
There are probably other ways to do this, but the important point is that you can declaratively enable encapsulation for CSS.
If you could enable CSS-only encapsulation, it would be pretty trivial to serialise a DOM / shadow tree on the server. This has a few benefits.
You might be using custom elements / shadow DOM to template out your layout, but there may be no need to actually upgrade it if it's static and all it does is render once. This means that you don't need to deliver the custom element definitions, the template engine, and your templates for a subset of your components because they're display-only.
If you're upgrading components, you may want to defer their upgrades, or optimise them. CSS-only encapsulation would enable you to deliver HTML that looks like it would on initial upgrade so there's no jank.
Many bots that don't execute JavaScript, or that may parse content differently, can still have access to the content because it's all accessible via the HTML string.
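As a rough sketch of what that server-side step could look like (the function and parameter names here are hypothetical, not part of the proposal), the server just nests the light DOM inside the shadow markup's slot and marks the host with `composed`:

```javascript
// Hypothetical server-side helper: render a component with CSS-only
// encapsulation by inlining the shadow markup into the host element.
// Assumes a single empty <slot></slot> placeholder in the template.
function renderComposed(tagName, shadowTemplate, lightHTML) {
  // Nest the light DOM inside the slot so the content is part of the
  // HTML string; the `composed` attribute marks the CSS boundary.
  const filled = shadowTemplate.replace('<slot></slot>', `<slot>${lightHTML}</slot>`);
  return `<${tagName} composed>${filled}</${tagName}>`;
}

const html = renderComposed(
  'x-app',
  '<style>p { border: 1px solid blue; }</style><p><slot></slot></p>',
  'can see this'
);
// → '<x-app composed><style>p { border: 1px solid blue; }</style><p><slot>can see this</slot></p></x-app>'
```

Because the output is plain markup, anything that reads the HTML string (a bot, a scraper, view-source) sees the content directly.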
Currently, if you have an `<x-app />` component that renders your page and you want it scraped, bots other than GoogleBot won't read the content.
```html
<x-app>
  #shadow-root
    can't see this
</x-app>
```
To me, this is unacceptable because it breaks the web. Sure, some bots might catch up, but not all, and should they? This also makes shadow DOM not viable until they do. Do we want to hamstring web components in such a way?
This is what it'd look like with CSS-only encapsulation.
```html
<x-app composed>
  can see this
</x-app>
```
Once your custom element is delivered to the page, it can be upgraded. The next section describes how this occurs.
Given a CSS-only encapsulated element, we can quite easily apply DOM encapsulation. Let's take the following example.
```html
<div composed>
  <style></style>
  <p><slot>slotted content</slot></p>
</div>
```
To enable DOM encapsulation, we could follow the current model and use `attachShadow()`. When this is called, the following steps take place to perform what we're calling rehydration:
- Remove the content.
- Attach a shadow root.
- Add the previous light DOM as the shadow root content.
- For each slot, append its content as light DOM to the host if the slot doesn't have a `default` attribute.
The `default` attribute on `<slot>` is a way to tell the rehydration algorithm that it should not re-parent the slot's content, because that content represents the slot's default content.
The above tree would end up looking something like:
```html
<div composed>
  #shadow-root
    <style></style>
    <p><slot></slot></p>
  slotted content
</div>
```
- Unslotted content isn't taken into account yet.
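The re-parenting steps above can be sketched over a toy node structure (the `el` helper and the node shape are made up for illustration; a real implementation would call `attachShadow()` on live DOM nodes):

```javascript
// Toy node: { tag, attrs, children }, with strings as text nodes.
function el(tag, attrs = {}, ...children) {
  return { tag, attrs, children };
}

function rehydrate(host) {
  // 1. Remove the composed content from the host.
  const content = host.children;
  host.children = [];
  // 2. Attach a shadow root and 3. move the content into it.
  host.shadowRoot = { children: content };
  // 4. Re-parent each slot's content to the host as light DOM,
  //    unless the slot is marked `default`.
  const visit = (node) => {
    if (typeof node === 'string') return;
    if (node.tag === 'slot' && !node.attrs.default) {
      host.children.push(...node.children);
      node.children = [];
    }
    node.children.forEach(visit);
  };
  content.forEach(visit);
  return host;
}

const host = el('div', { composed: true },
  el('style', {}, 'p { border: 1px solid blue; }'),
  el('p', {}, el('slot', {}, 'slotted content')));

rehydrate(host);
// host.children is now ['slotted content'], and the slot inside
// host.shadowRoot is empty, matching the tree shown above.
```

Step 4 is what turns the inlined content back into the host's light DOM, ready to be assigned to the real slot.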
Since `attachShadow()` already exists and couples both DOM and CSS encapsulation, nothing changes here. This is also why there's no separate way to do DOM-only (without CSS) encapsulation. While it makes sense to have CSS-only encapsulation, I don't think DOM-only makes sense: it would be confusing to have something hidden (in the shadow) in a node tree that is still affected by global CSS.
Posting some quick thoughts.
I'll see if I can track down someone who understands the performance of `scoped` and can give me more info on whether this approach is a viable alternative. I'll also ask them how to measure this kind of stuff :)

I don't think scroll jank is as much of a concern as the uncanny valley. I get this a lot in Inbox, where I can see my email message headers but clicking on them does nothing because the JS bundle hasn't finished loading/parsing/executing.
Anyway, this idea still seems interesting, because the current state of trying to use `:not(:defined)` is not awesome and likely leads to style duplication.

Even though I've advocated for this approach in the past, I think it's actually a mistake to design your app this way. An alternative I would offer is putting all of the content in the light DOM:
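For illustration only (the markup below is a made-up example, not from the thread), "content in the light DOM" means the content ships as plain children of the element, so it's readable before any definition loads:

```javascript
// Hypothetical server-rendered page: the element's content is ordinary
// light DOM, not hidden behind a shadow root.
const page = `
<x-app>
  <h1>Messages</h1>
  <ul>
    <li>can see this</li>
  </ul>
</x-app>`;

// Anything that only reads the HTML string still finds the content;
// the x-app definition, loaded later, just upgrades in place.
const visibleToBots = page.includes('can see this');
```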
It's a lot harder to structure your app this way but I don't think it's impossible. The Polymer team tells me this is what EA is doing (view source on that page). Another alternative would be to just use Rendertron to handle bots.
Personally, I think the most compelling story for declarative/composed shadow dom would be as a performance primitive. It can produce an experience that's "faster" than shadow dom. I'm using quotes because "fast" is so subjective (fast paint vs fast interactive).
Is this use case covered by the declarative shadow root proposal Safari is batting around? I need to catch up on that thread... :\