Discussion:
Imperative API for Node Distribution in Shadow DOM (Revisited)
Ryosuke Niwa
2015-04-25 07:14:24 UTC
Hi all,

In today's F2F, I've got an action item to come up with a concrete workable proposal for imperative API. I had a great chat about this afterwards with various people who attended F2F and here's a summary. I'll continue to work with Dimitri & Erik to work out details in the coming months (our deadline is July 13th).

https://gist.github.com/rniwa/2f14588926e1a11c65d3

Imperative API for Node Distribution in Shadow DOM

There are two approaches to the problem depending on whether we want to natively support redistribution or not.

To recap, a redistribution of a node (N_1) happens when it's distributed to an insertion point (I_1) inside a shadow root (S_1), and I_1's parent also has a shadow root which contains an insertion point which ends up picking up N_1. e.g. the original tree may look like:

(host of S_1) - S_1
  + N_1           + (host of S_2) - S_2
                      + I_1            + I_2
Here, (host of S_1) has N_1 as a child, and (host of S_2) is a child of S_1 and has I_1 as a child. S_2 has I_2 as a child. The composed tree, then, may look like:

(host of S_1)
  + (host of S_2)
    + I_2
      + N_1
Redistribution is implemented by authors

In this model, we can add insertAt and remove on the content element and expose distributedNodes, defined as follows:

insertAt(Node nodeToDistribute, long index) - Inserts nodeToDistribute into the list of distributed nodes at index. It throws if nodeToDistribute is not a descendant (or a direct child, if we wanted to keep this constraint) of the shadow host of the ancestor shadow root of the content element, or if index is larger than the length of distributedNodes.
remove(Node distributedNode) - Removes distributedNode from the list of distributed nodes. Throws if distributedNodes doesn't contain this node.
distributedNodes - Returns an array of nodes that are distributed into this insertion point in the order they appear.
In addition, content fires a synchronous distributionchanged event when distributedNodes changes (in response to calls to insertAt or remove).
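
For illustration, a minimal sketch of how a component might use this (insertAt, remove, distributedNodes and distributionchanged are the proposed names above, not a shipped API; shadowRoot is the component's own shadow root):

var content = shadowRoot.querySelector('content');

// Distribute every direct child of the host, in document order.
Array.prototype.forEach.call(shadowRoot.host.children, function (child, index) {
  content.insertAt(child, index);
});

// The proposed distributionchanged event fires synchronously whenever
// insertAt or remove changes distributedNodes.
content.addEventListener('distributionchanged', function () {
  console.log(content.distributedNodes.length + ' nodes are now distributed');
});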

Pros

Very simple / very primitive looking.
Defers the exact mechanism/algorithm of re-distributions to component authors.
We can support distributing any descendant, not just direct children, to any insertion point. This was not possible with the select attribute, especially in the presence of multiple generations of shadow DOM, due to performance problems.
Allows distributed nodes to be re-ordered (select doesn't allow this).
Cons

Each component needs to manually implement re-distribution by recursively traversing the distributedNodes of any content elements that appear inside the distributedNodes of its own content element, if it didn't want to re-distribute everything. This is particularly challenging because you need to listen to the distributionchanged event on every such content element. We might need something akin to MutationObserver's subtree option to monitor this if we're going this route (see the sketch after this list).
It seems hard to support re-distribution natively in v2.
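
As a rough sketch of what that manual monitoring might look like (again using only the proposed names above; the helper and its recursion are illustrative):

// Hypothetical helper: observe an insertion point and, recursively, any
// content elements that have already been distributed into it from an outer tree.
function watchDistribution(content, onChange) {
  content.addEventListener('distributionchanged', onChange);
  content.distributedNodes.forEach(function (node) {
    if (node.localName === 'content')
      watchDistribution(node, onChange);
  });
  // Content elements that get distributed here *later* still need to be
  // discovered somehow, which is exactly the difficulty described above.
}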
Redistribution is implemented by UAs

In this model, the browser is responsible for taking care of redistributions. Namely, we would like to expose distributionPool on the shadow root, which contains the ordered list of nodes that could be distributed (because they're direct children of the host) or re-distributed. Conceptually, you could think of it as a depth-first traversal of the distributedNodes of every content element. Because this list contains every candidate for (re)distribution, it's impractical to include every descendant node, especially if we wanted to do synchronous updates, so we're back to supporting only direct children for distribution.

In this proposal, we add a new callback distributedCallback(NodeList distributionPool) as an argument (probably inside a dictionary) to createShadowRoot, e.g.:

var shadowRoot = element.createShadowRoot({
  distributedCallback: function (distributionPool) {
    ... // code to distribute nodes
  }
});
Unfortunately, we can't really use insertAt and remove in this model because distributionPool may be changed under our feet by (outer) insertion points in the light DOM if this shadow root is attached to a host inside another shadow DOM, unless we manually listen to the distributionchanged event on every content element (which may recursively appear in the distributedNodes of those content elements).

One way to work around this problem is to let the UA also propagate changes to distributionPool to each nested shadow DOM. That is, when the distributionPool of a shadow root gets modified due to changes to the distributionPools of direct children (of the shadow host) that are content elements themselves, the UA will automatically invoke distributedCallback to trigger a distribution.

We also expose distribute() on ShadowRoot to allow arbitrary execution (e.g. when its internal state changes) of this distribution propagation mechanism. Components will call this function when they observe changes in the DOM that should affect distribution.

We could also trigger this propagation mechanism at the end of a micro task (via MutationObserver) when direct children of a shadow host are mutated.
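
Until the UA does this automatically, a component could wire it up itself; a rough sketch assuming the proposed distribute() method:

// Re-run distribution whenever the host's direct children change.
// MutationObserver callbacks run at the end of the current microtask.
new MutationObserver(function () {
  shadowRoot.distribute(); // proposed method; re-invokes distributedCallback
}).observe(shadowRoot.host, { childList: true });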

In terms of actual distribution, we only need to expose add(Node) on the content element. Because all candidates are distributed each time, we can clear the distributed nodes from every insertion point in the shadow DOM. (Leaving them intact doesn't make sense because some of the nodes that have been distributed in the past may no longer be available.)
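
Putting the pieces together, a sketch (illustrative only; distributedCallback, distribute() and add() are the proposed names above) of a component that re-distributes its whole pool into a single insertion point:

var shadowRoot = element.createShadowRoot({
  distributedCallback: function (distributionPool) {
    // Insertion points have been cleared before this runs, so just
    // re-distribute every candidate into the single content element.
    var content = shadowRoot.querySelector('content');
    for (var i = 0; i < distributionPool.length; i++)
      content.add(distributionPool[i]);
  }
});
shadowRoot.innerHTML = '<content></content>';
shadowRoot.distribute(); // run the initial distribution explicitly
                         // (assumes the callback isn't invoked before this point)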

There is an alternative approach: add something like done() or redistribute() to specifically trigger redistribution, but some authors may forget to make this extra function call because it's not required in normal cases.

Pros

Components don't have to implement complicated redistribution algorithms themselves.
Allows distributed nodes to be re-ordered (select doesn't allow this).
Cons

Redistribution algorithm is not simple
At a slightly higher abstraction level (less primitive than the first approach)


- R. Niwa
Ryosuke Niwa
2015-04-25 07:17:08 UTC
Just to clarify, I obviously haven't had time to discuss this with my colleagues, so I don't know which one (or something else entirely) we (Apple) will end up endorsing/opposing in the end.
Anne van Kesteren
2015-04-25 16:28:57 UTC
Post by Ryosuke Niwa
In today's F2F, I've got an action item to come up with a concrete workable
proposal for imperative API. I had a great chat about this afterwards with
various people who attended F2F and here's a summary. I'll continue to work
with Dimitri & Erik to work out details in the coming months (our deadline
is July 13th).
https://gist.github.com/rniwa/2f14588926e1a11c65d3
I thought we came up with something somewhat simpler that didn't
require adding an event or adding remove() for that matter:

https://gist.github.com/annevk/e9e61801fcfb251389ef

I added an example there that shows how you could implement <content
select>; it's rather trivial with the matches() API. I think you can
derive any other use case easily from that example, though I'm willing
to help guide people through others if it is unclear. I guess we might
still want positional insertion as a convenience though the above
seems to be all you need primitive-wise.
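
Roughly, the idea looks something like the following sketch (illustrative only; the add() call follows the imperative proposals in this thread rather than the gist's exact API):

// For each candidate node, put it into the first insertion point whose
// 'select' selector it matches; an insertion point without 'select' takes anything.
function distributeBySelect(pool, insertionPoints) {
  for (var i = 0; i < pool.length; i++) {
    for (var j = 0; j < insertionPoints.length; j++) {
      var selector = insertionPoints[j].getAttribute('select') || '*';
      if (pool[i].matches && pool[i].matches(selector)) {
        insertionPoints[j].add(pool[i]); // first match wins
        break;
      }
    }
  }
}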
--
https://annevankesteren.nl/
Hayato Ito
2015-04-25 17:00:46 UTC
Thank you, guys.
I would really appreciate it if you guys could use the W3C bug, 18429, to discuss
this kind of specific topic about Shadow DOM so that we can track the
progress easily in one place. I'm not a fan of the discussion being scattered.
:)

https://www.w3.org/Bugs/Public/show_bug.cgi?id=18429
Ryosuke Niwa
2015-04-25 17:19:56 UTC
Sure, I'll put the summary of discussion there later.

- R. Niwa
Hayato Ito
2015-04-25 17:53:39 UTC
Thanks. I am really glad to see more and more guys are thinking about
Shadow DOM.
I know distribution/re-distribution is a tough issue. A lot of exciting
things are waiting for you. :)
Ryosuke Niwa
2015-04-25 17:12:44 UTC
Post by Anne van Kesteren
I thought we came up with something somewhat simpler that didn't
require adding an event or adding remove() for that matter:
https://gist.github.com/annevk/e9e61801fcfb251389ef
That's the second approach I mentioned. Like I mentioned in the gist, this model assumes that redistribution is done by UA and only direct children can be distributed. I realized that those constraints are no longer necessary given we don't have content select or multiple generations of shadow DOM.
Travis Leithead
2015-04-25 17:49:00 UTC
Nice work folks, and thanks for writing this up so quickly! Anne's Gist captured exactly what I was thinking this would look like.

One nit: it would be nice if the callback could be registered from _inside_ the shadowRoot, but I couldn't come up with a satisfactory way to do that without adding more complexity. :)

Olli Pettay
2015-04-25 20:17:34 UTC
Post by Anne van Kesteren
I thought we came up with something somewhat simpler that didn't
require adding an event or adding remove() for that matter:
https://gist.github.com/annevk/e9e61801fcfb251389ef
That is pretty much exactly how I was thinking the imperative API would work
(well, assuming the errors in the example are fixed).

An example explaining how this all works in case of nested shadow trees would be good.
I assume the more nested shadow tree just may get some nodes, which were already distributed, in the distributionList.

How does the distribute() behave? Does it end up invoking distribution in all the nested shadow roots or only in the callee?

Should the distribute callback be called automatically at the end of the microtask if there have been relevant[1] DOM mutations since the last
manual call to distribute()? That would make the API a bit simpler to use, if one wouldn't have to use MutationObservers.
(Even then one could skip distribution, say during page load time, and do a page-level "distribute all the stuff" once all the data is ready,
etc., if wanted.)




-Olli

[1] Assuming we want to distribute only direct children, then any child list change or any attribute change in the children
might trigger distribution automatically.
Ryosuke Niwa
2015-04-25 20:58:13 UTC
Post by Olli Pettay
That is pretty much exactly how I was thinking the imperative API to work.
(well, assuming errors in the example fixed)
An example explaining how this all works in case of nested shadow trees would be good.
I assume the more nested shadow tree just may get some nodes, which were already distributed, in the distributionList.
Right, that was the design we discussed.
Post by Olli Pettay
How does the distribute() behave? Does it end up invoking distribution in all the nested shadow roots or only in the callee?
Yes, that's the only reason we need distribute() in the first place. If we didn't have to care about redistribution, simply exposing methods to insert/remove distributed nodes on the content element would be sufficient.
Post by Olli Pettay
Should distribute callback be called automatically at the end of the microtask if there has been relevant[1] DOM mutations since the last
manual call to distribute()? That would make the API a bit simpler to use, if one wouldn't have to use MutationObservers.
That's a possibility. It could be an option to specify as well. But there might be components that are not interested in updating distributed nodes for the sake of performance for example. I'm not certain forcing everyone to always update distributed nodes is necessarily desirable given the lack of experience with an imperative API for distributing nodes.
Post by Olli Pettay
[1] Assuming we want to distribute only direct children, then any child list change or any attribute change in the children
might cause distribution() automatically.
I think that's a big if now that we've gotten rid of the "select" attribute and multiple generations of shadow DOM. As far as I can recall, one of the reasons we only supported distributing direct children was so that we could implement the "select" attribute and multiple generations of shadow DOM. If we wanted, we could always impose such a restriction in a declarative syntax and inheritance mechanism we add in v2, since those v2 APIs are supposed to build on top of this imperative API.

Another big if is whether we even need to let each shadow DOM select nodes to redistribute. If we don't need to support filtering distributed nodes in insertion points for re-distribution (i.e. we either distribute everything under a given content element or nothing), then we don't need all of this redistribution mechanism baked into the browser and the model where we just have insert/remove on content element will work.

- R. Niwa
Olli Pettay
2015-04-26 03:19:30 UTC
Post by Ryosuke Niwa
I think that's a big if now that we've gotten rid of "select" attribute and multiple generations of shadow DOM.
It is not clear to me at all how you would handle the case when a node has
several ancestors with shadow trees, and each of those wants to distribute
Also, what is the use case to distribute non-direct descendants?
Hayato Ito
2015-04-27 01:11:15 UTC
I think Polymer folks will answer the use case of re-distribution.

So let me just show a good analogy so that everyone can understand
intuitively what re-distribution *means*.
Let me use a pseudo language and define XComponent's constructor as follows:

XComponents::XComponents(Title text, Icon icon) {
  this.text = text;
  this.button = new XButton(icon);
  ...
}

Here, |icon| is *re-distributed*.

In the HTML world, this corresponds to the following:

The usage of the <x-component> element:

<x-component>
  <x-text>Hello World</x-text>
  <x-icon>My Icon</x-icon>
</x-component>

XComponent's shadow tree is:

<shadow-root>
  <h1><content select="x-text"></content></h1>
  <x-button><content select="x-icon"></content></x-button>
</shadow-root>

Re-distribution enables the constructor of XComponent to pass the given
parameter to another component's constructor, XButton's constructor.
If we don't have re-distribution, XComponent can't create the XButton using
the dynamic information.

XComponents::XComponents(Title text, Icon icon) {
  this.text = text;
  // this.button = new XButton(icon); // We can't! We don't have redistribution!
  this.button = new XButton("icon.png"); // XComponent has to do "hard-coding".
                                          // Please allow me to pass |icon| to XButton!
  ...
}
Olli Pettay
2015-04-27 16:42:41 UTC
Post by Hayato Ito
I think Polymer folks will answer the use case of re-distribution.
I wasn't questioning the need for re-distribution. I was questioning the need to distribute grandchildren etc. -
and even more, I was wondering what kind of algorithm would be sane in that case.

And passing in random elements that are neither in the document nor in a shadow DOM to be distributed would be hard too.
Ryosuke Niwa
2015-04-27 21:18:24 UTC
Post by Hayato Ito
I think Polymer folks will answer the use case of re-distribution.
So let me just show a good analogy so that every one can understand intuitively what re-distribution *means*.
XComponents::XComponents(Title text, Icon icon) {
this.text = text;
this.button = new XButton(icon);
...
}
Here, |icon| is *re-distributed*.
<x-component>
  <x-text>Hello World</x-text>
  <x-icon>My Icon</x-icon>
</x-component>
<shadow-root>
  <h1><content select="x-text"></content></h1><!-- (1) -->
  <x-button><content select="x-icon"></content></x-button><!-- (2) -->
</shadow-root>
I have a question as to whether x-button then has to select which nodes to use or not. In this particular example at least, x-button will put every node distributed into (2) into a single insertion point in its shadow DOM.

If we don't have to support filtering of nodes at re-distribution time, then the whole discussion of re-distribution is almost moot because we can just treat a content element like any other element that gets distributed along with its distributed nodes.

- R. Niwa
Hayato Ito
2015-04-27 21:38:40 UTC
Post by Ryosuke Niwa
I have a question as to whether x-button then has to select which nodes to use or not. In this particular example at least, x-button will put every node distributed into (2) into a single insertion point in its shadow DOM.
If we don't have to support filtering of nodes at re-distribution time, then the whole discussion of re-distribution is almost moot because we can just treat a content element like any other element that gets distributed along with its distributed nodes.
x-button can select.
You might want to take a look at the distribution algorithm [1], where the
behavior is well defined.

[1]: http://w3c.github.io/webcomponents/spec/shadow/#distribution-algorithms

In short, the distributed nodes of <content select="x-icon"> become the
next candidates from which the insertion points in the shadow tree that
<x-button> hosts can select.
Ryosuke Niwa
2015-04-27 21:58:37 UTC
Post by Hayato Ito
x-button can select.
You might want to take a look at the distribution algorithm [1], where the behavior is well defined.
I know we can in the current spec, but should we support it? What are concrete use cases in which x-button or other components need to select nodes in the nested shadow DOM case?

- R. Niwa
Hayato Ito
2015-04-27 22:16:42 UTC
Could you clarify what you are trying to achieve? If we don't support it,
everything would be weird.

I guess you are proposing an alternative to the current pool population
algorithm and pool distribution algorithm.
I would appreciate it if you could explain what the expected results are, using algorithms.
Steve Orvell
2015-04-27 18:47:14 UTC
Here's a minimal and hopefully simple proposal that we can flesh out if
this seems like an interesting api direction:

https://gist.github.com/sorvell/e201c25ec39480be66aa

We keep the currently spec'd distribution algorithm/timing but remove
`select` in favor of an explicit selection callback. The user simply
returns true if the node should be distributed to the given insertion point.

Advantages:
* the callback can be synchronous-ish because it acts only on a specific
node when possible. Distribution then won't break existing expectations
since `offsetHeight` is always correct.
* can implement either the currently spec'd `select` mechanism or the
proposed `slot` mechanism
* can easily evolve to support distribution to isolated roots by using a
pure function that gets read only node 'proxies' as arguments.

Disadvantages:
* cannot re-order the distribution
* cannot distribute sub-elements
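
Concretely, registering the callback might look something like this sketch (the option name shouldDistributeToInsertionPoint is illustrative and the exact shape isn't settled):

var shadowRoot = host.createShadowRoot({
  // Called per (candidate node, insertion point) pair; return true to
  // distribute the node to that insertion point.
  shouldDistributeToInsertionPoint: function (node, insertionPoint) {
    // Re-implementing <content select> on top of the standard matches() API:
    var selector = insertionPoint.getAttribute('select');
    return !selector || (node.matches && node.matches(selector));
  }
});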
Ryosuke Niwa
2015-04-27 20:45:25 UTC
https://gist.github.com/sorvell/e201c25ec39480be66aa
It seems like with this API, we’d have to make O(n^k) calls where n is the number of distribution candidates and k is the number of insertion points, and that’s bad. Or am I misunderstanding your design?
We keep the currently spec'd distribution algorithm/timing but remove `select` in favor of an explicit selection callback.
What do you mean by keeping the currently spec’ed timing? We certainly can’t do it at “style resolution time” because style resolution is an implementation detail that we shouldn’t expose to the Web, just like GC and its timing is an implementation detail in JS. Besides that, avoiding style resolution is a very important optimization, and spec’ing when it happens will prevent us from optimizing it away in the future.

Do you mean instead that we synchronously invoke this algorithm when a child node is inserted or removed from the host? If so, that’ll impose unacceptable runtime cost for DOM mutations.

I think the only timing the UA can support by default will be at the end of a micro task or at the UA-code / user-code boundary, as done for custom element lifecycle callbacks at the moment.
The user simply returns true if the node should be distributed to the given insertion point.
* the callback can be synchronous-ish because it acts only on a specific node when possible. Distribution then won't break existing expectations since `offsetHeight` is always correct.
“always correct” is a somewhat stronger statement than I would state here, since while the UA is calling these shouldDistributeToInsertionPoint callbacks, we'll certainly see transient offsetHeight values.

- R. Niwa
Ryosuke Niwa
2015-04-27 21:43:34 UTC
Post by Ryosuke Niwa
https://gist.github.com/sorvell/e201c25ec39480be66aa
It seems like with this API, we’d have to make O(n^k)
I meant to say O(nk). Sorry, I'm still waking up :(
Steve Orvell
2015-04-27 22:15:58 UTC
IMO, the appeal of this proposal is that it's a small change to the current
spec and avoids changing user expectations about the state of the dom and
can explain the two declarative proposals for distribution.
Post by Ryosuke Niwa
It seems like with this API, we’d have to make O(n^k) calls where n is the
number of distribution candidates and k is the number of insertion points,
and that’s bad. Or am I misunderstanding your design?
I think you've understood the proposed design. As you noted, the cost is
actually O(n*k). In our use cases, k is generally very small.

Post by Ryosuke Niwa
Do you mean instead that we synchronously invoke this algorithm when a
child node is inserted or removed from the host? If so, that’ll impose
unacceptable runtime cost for DOM mutations.
I think the only timing the UA can support by default will be at the end of
a micro task or at the UA-code / user-code boundary, as done for custom element
lifecycle callbacks at the moment.
Running this callback at the UA-code/user-code boundary seems like it would
be fine. Running the more complicated "distribute all the nodes" proposals
at this time would obviously not be feasible. The notion here is that since
we're processing only a single node at a time, this can be done after an
atomic dom action.

Post by Ryosuke Niwa
“always correct” is a somewhat stronger statement than I would state here,
since while the UA is calling these shouldDistributeToInsertionPoint callbacks,
we'll certainly see transient offsetHeight values.
Yes, you're right about that. Specifically it would be bad to try to read
`offsetHeight` in this callback and this would be an anti-pattern. If
that's not good enough, perhaps we can explore actually not working
directly with the node but instead the subset of information necessary to
be able to decide on distribution.

Can you explain, under the initial proposal, how a user can ask an
element's dimensions and get the post-distribution answer? With current dom
api's I can be sure that if I do parent.appendChild(child) and then
parent.offsetWidth, the answer takes child into account. I'm looking to
understand how we don't violate this expectation when parent distributes.
Or if we violate this expectation, what is the proposed right way to ask
this question?

In addition to rendering information about a node, distribution also
affects the flow of events. So a similar question: when is it safe to call
child.dispatchEvent such that if parent distributes elements to its
shadowRoot, elements in the shadowRoot will see the event?
Post by Ryosuke Niwa
Here's a minimal and hopefully simple proposal that we can flesh out if
https://gist.github.com/sorvell/e201c25ec39480be66aa
It seems like with this API, we’d have to make O(n^k) calls where n is the
number of distribution candidates and k is the number of insertion points,
and that’s bad. Or am I misunderstanding your design?
We keep the currently spec'd distribution algorithm/timing but remove
`select` in favor of an explicit selection callback.
What do you mean by keeping the currently spec’ed timing? We certainly
can’t do it at “style resolution time” because style resolution is an
implementation detail that we shouldn’t expose to the Web just like GC and
its timing is an implementation detail in JS. Besides that, avoiding style
resolution is a very important optimizations and spec’ing when it happens
will prevent us from optimizing it away in the future/
Do you mean instead that we synchronously invoke this algorithm when a
child node is inserted or removed from the host? If so, that’ll impose
unacceptable runtime cost for DOM mutations.
I think the only timing UA can support by default will be at the end of
micro task or at UA-code / user-code boundary as done for custom element
lifestyle callbacks at the moment.
The user simply returns true if the node should be distributed to the
given insertion point.
* the callback can be synchronous-ish because it acts only on a specific
node when possible. Distribution then won't break existing expectations
since `offsetHeight` is always correct.
“always correct” is a somewhat stronger statement than I would make here,
since while the UA calls these shouldDistributeToInsertionPoint callbacks
we'll certainly see transient offsetHeight values.
- R. Niwa
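
For concreteness, here is a rough sketch of what such a per-node selection callback could look like. The option name and signature below are assumptions for illustration, not the actual API from the gist:

```js
// Hypothetical shape of the per-node selection callback discussed above.
// The UA would ask the component, for each (candidate, insertion point) pair,
// whether the node should be distributed there; returning true distributes it.
var shadow = host.createShadowRoot({
  shouldDistributeToInsertionPoint: function(node, insertionPoint) {
    // For example, reproduce <content select="..."> behavior imperatively:
    // a select-less insertion point acts as a catch-all.
    var select = insertionPoint.getAttribute('select');
    return select === null || node.matches(select);
  }
});
```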
Hayato Ito
2015-04-27 22:31:58 UTC
Permalink
I think there are a lot of user operations where distribution must be
updated before returning a meaningful result synchronously.
Unless the distribution result is correctly updated, users would get the
stale result.

For example:
- element.offsetWidth: Style resolution requires distribution. We must
update distribution, if it's dirty, before calculating offsetWidth
synchronously.
- event dispatching: event path requires distribution because it needs a
composed tree.

Are the current HTML/DOM specs rich enough to explain the timing at which
the imperative APIs should be run in these cases?

For me, the imperative APIs for distribution sounds very similar to the
imperative APIs for style resolution. The difficulties of both problems
might be similar.
Post by Steve Orvell
IMO, the appeal of this proposal is that it's a small change to the
current spec and avoids changing user expectations about the state of the
dom and can explain the two declarative proposals for distribution.
Post by Ryosuke Niwa
It seems like with this API, we’d have to make O(n^k) calls where n is
the number of distribution candidates and k is the number of insertion
points, and that’s bad. Or am I misunderstanding your design?
I think you've understood the proposed design. As you noted, the cost is
actually O(n*k). In our use cases, k is generally very small.
Post by Ryosuke Niwa
Do you mean instead that we synchronously invoke this algorithm when a
child node is inserted or removed from the host? If so, that’ll impose
unacceptable runtime cost for DOM mutations.
I think the only timing the UA can support by default will be at the end of a
microtask or at the UA-code / user-code boundary, as done for custom element
lifecycle callbacks at the moment.
Running this callback at the UA-code/user-code boundary seems like it
would be fine. Running the more complicated "distribute all the nodes"
proposals at this time would obviously not be feasible. The notion here is
that since we're processing only a single node at a time, this can be done
after an atomic dom action.
Post by Ryosuke Niwa
“always correct” is a somewhat stronger statement than I would make here,
since while the UA calls these shouldDistributeToInsertionPoint callbacks
we'll certainly see transient offsetHeight values.
Yes, you're right about that. Specifically it would be bad to try to read
`offsetHeight` in this callback and this would be an anti-pattern. If
that's not good enough, perhaps we can explore actually not working
directly with the node but instead the subset of information necessary to
be able to decide on distribution.
Can you explain, under the initial proposal, how a user can ask an
element's dimensions and get the post-distribution answer? With current
dom api's I can be sure that if I do parent.appendChild(child) and then
parent.offsetWidth, the answer takes child into account. I'm looking to
understand how we don't violate this expectation when parent distributes.
Or if we violate this expectation, what is the proposed right way to ask
this question?
In addition to rendering information about a node, distribution also
affects the flow of events. So a similar question: when is it safe to call
child.dispatchEvent such that if parent distributes elements to its
shadowRoot, elements in the shadowRoot will see the event?
Post by Ryosuke Niwa
Here's a minimal and hopefully simple proposal that we can flesh out if
https://gist.github.com/sorvell/e201c25ec39480be66aa
It seems like with this API, we’d have to make O(n^k) calls where n is
the number of distribution candidates and k is the number of insertion
points, and that’s bad. Or am I misunderstanding your design?
We keep the currently spec'd distribution algorithm/timing but remove
`select` in favor of an explicit selection callback.
What do you mean by keeping the currently spec’ed timing? We certainly
can’t do it at “style resolution time” because style resolution is an
implementation detail that we shouldn’t expose to the Web just like GC and
its timing is an implementation detail in JS. Besides that, avoiding style
resolution is a very important optimization and spec’ing when it happens
will prevent us from optimizing it away in the future.
Do you mean instead that we synchronously invoke this algorithm when a
child node is inserted or removed from the host? If so, that’ll impose
unacceptable runtime cost for DOM mutations.
I think the only timing the UA can support by default will be at the end of a
microtask or at the UA-code / user-code boundary, as done for custom element
lifecycle callbacks at the moment.
The user simply returns true if the node should be distributed to the
given insertion point.
* the callback can be synchronous-ish because it acts only on a specific
node when possible. Distribution then won't break existing expectations
since `offsetHeight` is always correct.
“always correct” is a somewhat stronger statement than I would make here,
since while the UA calls these shouldDistributeToInsertionPoint callbacks
we'll certainly see transient offsetHeight values.
- R. Niwa
Ryosuke Niwa
2015-04-27 22:54:03 UTC
Permalink
I think there are a lot of user operations where distribution must be updated before returning a meaningful result synchronously.
Unless the distribution result is correctly updated, users would get the stale result.
Indeed.
- element.offsetWidth: Style resolution requires distribution. We must update distribution, if it's dirty, before calculating offsetWidth synchronously.
- event dispatching: event path requires distribution because it needs a composed tree.
Are the current HTML/DOM specs rich enough to explain the timing at which the imperative APIs should be run in these cases?
It certainly doesn't tell us when style resolution happens. In the case of event dispatching, it's impossible even in theory unless we somehow disallow event dispatching within our `distribute` callbacks, since we can dispatch new events within the callbacks to decide where a given node gets distributed. Given that, I don't think we should even try to make such a guarantee.

We could, however, make a slightly weaker guarantee: that some level of consistency holds for user code outside of `distribute` callbacks. For example, I can think of three levels (weakest to strongest) of self-consistent invariants:
1. Every node is distributed to at most one insertion point.
2. All first-order distributions are up-to-date (redistribution may happen later).
3. All distributions are up-to-date.
For me, the imperative APIs for distribution sounds very similar to the imperative APIs for style resolution. The difficulties of both problems might be similar.
We certainly don't want to (in fact, we'll object to) spec the timing for style resolution or even what style resolution means.

- R. Niwa
Anne van Kesteren
2015-04-30 12:17:50 UTC
Permalink
Post by Hayato Ito
I think there are a lot of user operations where distribution must be
updated before returning a meaningful result synchronously.
Unless the distribution result is correctly updated, users would get the
stale result.
- element.offsetWidth: Style resolution requires distribution. We must
update distribution, if it's dirty, before calculating offsetWidth
synchronously.
- event dispatching: event path requires distribution because it needs a
composed tree.
Are the current HTML/DOM specs rich enough to explain the timing at which
the imperative APIs should be run in these cases?
The imperative API I proposed leaves the timing up to whenever
distribute() is invoked by the developer. Currently at best that can
be done from mutation observers. And I think that's fine for v1.
element.offsetWidth et al. are bad APIs that we should not accommodate.
The results they return will be deterministic, but they should
not cause further side effects such as distribution, and therefore the
results might appear incorrect, I suppose, depending on what point of
view you have.

We discussed this point at the meeting.
Post by Hayato Ito
For me, the imperative APIs for distribution sounds very similar to the
imperative APIs for style resolution. The difficulties of both problems
might be similar.
Only if you insist on coupling them are they similar. And only if you
insist on semantics that are identical to <content select>. This is
the very reason why <content select> is not acceptable as it would
require solving that problem. Whereas an imperative API free of the
warts of element.offsetWidth would not have to.
--
https://annevankesteren.nl/
Ryosuke Niwa
2015-04-27 22:42:38 UTC
Permalink
IMO, the appeal of this proposal is that it's a small change to the current spec and avoids changing user expectations about the state of the dom and can explain the two declarative proposals for distribution.
It seems like with this API, we’d have to make O(n^k) calls where n is the number of distribution candidates and k is the number of insertion points, and that’s bad. Or am I misunderstanding your design?
I think you've understood the proposed design. As you noted, the cost is actually O(n*k). In our use cases, k is generally very small.
I don't think we want to introduce an O(nk) algorithm. Pretty much every browser optimization we implement these days is about removing O(n^2) algorithms in favor of O(n) algorithms. Hard-baking O(nk) behavior is bad because we can't even theoretically optimize it away.
Do you mean instead that we synchronously invoke this algorithm when a child node is inserted or removed from the host? If so, that’ll impose unacceptable runtime cost for DOM mutations.
I think the only timing the UA can support by default will be at the end of a microtask or at the UA-code / user-code boundary, as done for custom element lifecycle callbacks at the moment.
Running this callback at the UA-code/user-code boundary seems like it would be fine. Running the more complicated "distribute all the nodes" proposals at this time would obviously not be feasible. The notion here is that since we're processing only a single node at a time, this can be done after an atomic dom action.
Indeed, running such an algorithm each time a node is inserted or removed will be quite expensive.
“always correct” is a somewhat stronger statement than I would make here, since while the UA calls these shouldDistributeToInsertionPoint callbacks we'll certainly see transient offsetHeight values.
Yes, you're right about that. Specifically it would be bad to try to read `offsetHeight` in this callback and this would be an anti-pattern. If that's not good enough, perhaps we can explore actually not working directly with the node but instead the subset of information necessary to be able to decide on distribution.
I'm not necessarily saying that it's not good enough. I'm just saying that it is possible to observe such a state even with this API.
Can you explain, under the initial proposal, how a user can ask an element's dimensions and get the post-distribution answer? With current dom api's I can be sure that if I do parent.appendChild(child) and then parent.offsetWidth, the answer takes child into account. I'm looking to understand how we don't violate this expectation when parent distributes. Or if we violate this expectation, what is the proposed right way to ask this question?
You don't get that guarantee in the design we discussed on Friday [1] [2]. In fact, we basically deferred the timing issue to other APIs that observe DOM changes, namely mutation observers and custom elements lifecycle callbacks. Each component uses those APIs to call distribute().
In addition to rendering information about a node, distribution also affects the flow of events. So a similar question: when is it safe to call child.dispatchEvent such that if parent distributes elements to its shadowRoot, elements in the shadowRoot will see the event?
Again, the timing was deferred in [1] and [2] so it really depends on when each component decides to distribute.

- R. Niwa

[1] https://gist.github.com/rniwa/2f14588926e1a11c65d3
[2] https://gist.github.com/annevk/e9e61801fcfb251389ef
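
A minimal sketch of this deferred model, assuming the distribute callback option and the distribute() method proposed in [1] and [2] (neither is a shipping API): the component observes its own light DOM with a MutationObserver and re-distributes when it changes.

```js
// Sketch only: assumes the proposed createShadowRoot({distribute}) option and
// shadowRoot.distribute() method from [1]/[2].
var shadow = host.createShadowRoot({
  distribute: function(distributionList, insertionList) {
    /* component-specific distribution logic */
  }
});
new MutationObserver(function() {
  shadow.distribute();   // proposed API, not currently specced
}).observe(host, { childList: true });
shadow.distribute();     // initial distribution
```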
Tab Atkins Jr.
2015-04-27 23:06:50 UTC
Permalink
Post by Ryosuke Niwa
IMO, the appeal of this proposal is that it's a small change to the current spec and avoids changing user expectations about the state of the dom and can explain the two declarative proposals for distribution.
It seems like with this API, we’d have to make O(n^k) calls where n is the number of distribution candidates and k is the number of insertion points, and that’s bad. Or am I misunderstanding your design?
I think you've understood the proposed design. As you noted, the cost is actually O(n*k). In our use cases, k is generally very small.
I don't think we want to introduce O(nk) algorithm. Pretty much every browser optimization we implement these days are removing O(n^2) algorithms in the favor of O(n) algorithms. Hard-baking O(nk) behavior is bad because we can't even theoretically optimize it away.
You're aware, obviously, that O(n^2) is a far different beast than
O(nk). If k is generally small, which it is, O(nk) is basically just
O(n) with a constant factor applied.

~TJ
Tab Atkins Jr.
2015-04-27 23:23:34 UTC
Permalink
Post by Tab Atkins Jr.
Post by Ryosuke Niwa
IMO, the appeal of this proposal is that it's a small change to the current spec and avoids changing user expectations about the state of the dom and can explain the two declarative proposals for distribution.
It seems like with this API, we’d have to make O(n^k) calls where n is the number of distribution candidates and k is the number of insertion points, and that’s bad. Or am I misunderstanding your design?
I think you've understood the proposed design. As you noted, the cost is actually O(n*k). In our use cases, k is generally very small.
I don't think we want to introduce O(nk) algorithm. Pretty much every browser optimization we implement these days are removing O(n^2) algorithms in the favor of O(n) algorithms. Hard-baking O(nk) behavior is bad because we can't even theoretically optimize it away.
You're aware, obviously, that O(n^2) is a far different beast than
O(nk). If k is generally small, which it is, O(nk) is basically just
O(n) with a constant factor applied.
To make it clear: I'm not trying to troll Ryosuke here.

He argued that we don't want to add new O(n^2) algorithms if we can
help it, and that we prefer O(n). (Uncontroversial.)

He then further said that an O(nk) algorithm is sufficiently close to
O(n^2) that he'd similarly like to avoid it. I'm trying to
reiterate/expand on Steve's message here, that the k value in question
is usually very small, relative to the value of n, so in practice this
O(nk) is more similar to O(n) than O(n^2), and Ryosuke's aversion to
new O(n^2) algorithms may be mistargeted here.

~TJ
Ryosuke Niwa
2015-04-28 23:32:58 UTC
Permalink
Post by Tab Atkins Jr.
Post by Tab Atkins Jr.
Post by Ryosuke Niwa
IMO, the appeal of this proposal is that it's a small change to the current spec and avoids changing user expectations about the state of the dom and can explain the two declarative proposals for distribution.
Post by Ryosuke Niwa
It seems like with this API, we’d have to make O(n^k) calls where n is the number of distribution candidates and k is the number of insertion points, and that’s bad. Or am I misunderstanding your design?
I think you've understood the proposed design. As you noted, the cost is actually O(n*k). In our use cases, k is generally very small.
I don't think we want to introduce O(nk) algorithm. Pretty much every browser optimization we implement these days are removing O(n^2) algorithms in the favor of O(n) algorithms. Hard-baking O(nk) behavior is bad because we can't even theoretically optimize it away.
You're aware, obviously, that O(n^2) is a far different beast than
O(nk). If k is generally small, which it is, O(nk) is basically just
O(n) with a constant factor applied.
To make it clear: I'm not trying to troll Ryosuke here.
He argued that we don't want to add new O(n^2) algorithms if we can
help it, and that we prefer O(n). (Uncontroversial.)
He then further said that an O(nk) algorithm is sufficiently close to
O(n^2) that he'd similarly like to avoid it. I'm trying to
reiterate/expand on Steve's message here, that the k value in question
is usually very small, relative to the value of n, so in practice this
O(nk) is more similar to O(n) than O(n^2), and Ryosuke's aversion to
new O(n^2) algorithms may be mistargeted here.
Thanks for the clarification. Just as Justin pointed out [1], one of the most important use cases of the imperative API is to dynamically insert as many insertion points as needed to wrap each distributed node. In such a use case, this algorithm DOES result in O(n^2).

In fact, it could even result in O(n^3) behavior depending on how we spec it: if the user code dynamically inserted insertion points one by one and the UA invoked this callback function for each insertion point and each node. If we didn't, then the author would need a mechanism to let the UA know that the condition by which insertion points select a node has changed and that it needs to re-distribute all the nodes again.

- R. Niwa

[1] https://lists.w3.org/Archives/Public/public-webapps/2015AprJun/0325.html
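
To make the use case in [1] concrete with today's declarative syntax: a component that wants to wrap every distributed child ends up generating one insertion point per child, so the number of insertion points k grows with the number of candidates n, and a per-(candidate, insertion point) callback does O(n^2) work. A sketch; the data-index attribute is purely an illustrative way to target each child:

```js
// Illustration of the "one insertion point per distributed node" pattern:
// here k (insertion points) equals n (children), so a callback invoked for
// every (candidate, insertion point) pair does O(n^2) work.
var shadowRoot = host.createShadowRoot();
Array.prototype.forEach.call(host.children, function(child, i) {
  child.setAttribute('data-index', i);           // illustrative marker only
  var wrapper = document.createElement('div');   // the per-child decoration
  wrapper.className = 'decorated';
  var content = document.createElement('content');
  content.setAttribute('select', '[data-index="' + i + '"]');
  wrapper.appendChild(content);
  shadowRoot.appendChild(wrapper);
});
```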
Justin Fagnani
2015-04-28 23:54:51 UTC
Permalink
Post by Steve Orvell
IMO, the appeal of this proposal is that it's a small change to the
current spec and avoids changing user expectations about the state of the
dom and can explain the two declarative proposals for distribution.
It seems like with this API, we’d have to make O(n^k) calls where n is the
number of distribution candidates and k is the number of insertion points,
and that’s bad. Or am I misunderstanding your design?
I think you've understood the proposed design. As you noted, the cost is
actually O(n*k). In our use cases, k is generally very small.
I don't think we want to introduce O(nk) algorithm. Pretty much every
browser optimization we implement these days are removing O(n^2) algorithms
in the favor of O(n) algorithms. Hard-baking O(nk) behavior is bad because
we can't even theoretically optimize it away.
You're aware, obviously, that O(n^2) is a far different beast than
O(nk). If k is generally small, which it is, O(nk) is basically just
O(n) with a constant factor applied.
To make it clear: I'm not trying to troll Ryosuke here.
He argued that we don't want to add new O(n^2) algorithms if we can
help it, and that we prefer O(n). (Uncontroversial.)
He then further said that an O(nk) algorithm is sufficiently close to
O(n^2) that he'd similarly like to avoid it. I'm trying to
reiterate/expand on Steve's message here, that the k value in question
is usually very small, relative to the value of n, so in practice this
O(nk) is more similar to O(n) than O(n^2), and Ryosuke's aversion to
new O(n^2) algorithms may be mistargeted here.
Thanks for clarification. Just as Justin pointed out [1], one of the most
important use case of imperative API is to dynamically insert as many
insertion points as needed to wrap each distributed node. In such a use
case, this algorithm DOES result in O(n^2).
I think I said it was a possibility opened by an imperative API, but I
thought it would be very rare (as will be any modification of the shadow
root in the distribution callback). I think that accomplishing decoration
by inserting an insertion point per distributed node is probably a
degenerate case and it would be better if we supported decoration, but that
seems like a v2+ type feature.

-Justin
Post by Steve Orvell
In fact, it could even result in O(n^3) behavior depending on how we spec
it. If the user code had dynamically inserted insertion points one by one
and UA invoked this callback function for each insertion point and each
node. If we didn't, then author needs a mechanism to let UA know that the
condition by which insertion points select a node has changed and it needs
to re-distribute all the nodes again.
- R. Niwa
[1]
https://lists.w3.org/Archives/Public/public-webapps/2015AprJun/0325.html
Steve Orvell
2015-04-27 23:41:46 UTC
Permalink
Post by Ryosuke Niwa
Again, the timing was deferred in [1] and [2] so it really depends on when
each component decides to distribute.
I want to be able to create an element <x-foo> that acts like other dom
elements. This element uses Shadow DOM and distribution to encapsulate its
details.

Let's imagine a 3rd party user named Bob that uses <div> and <x-foo>. Bob
knows he can call div.appendChild(element) and then immediately ask
div.offsetHeight and know that this height includes whatever the added
element should contribute to the div's height. Bob expects to be able to do
this with the <x-foo> element also since it is just another element from
his perspective.

How can I, the author of <x-foo>, craft my element such that I don't
violate Bob's expectations? Does your proposal support this?
Post by Ryosuke Niwa
Post by Steve Orvell
IMO, the appeal of this proposal is that it's a small change to the
current spec and avoids changing user expectations about the state of the
dom and can explain the two declarative proposals for distribution.
Post by Steve Orvell
Post by Ryosuke Niwa
It seems like with this API, we’d have to make O(n^k) calls where n is
the number of distribution candidates and k is the number of insertion
points, and that’s bad. Or am I misunderstanding your design?
Post by Steve Orvell
I think you've understood the proposed design. As you noted, the cost is
actually O(n*k). In our use cases, k is generally very small.
I don't think we want to introduce O(nk) algorithm. Pretty much every
browser optimization we implement these days are removing O(n^2) algorithms
in the favor of O(n) algorithms. Hard-baking O(nk) behavior is bad because
we can't even theoretically optimize it away.
Post by Steve Orvell
Post by Ryosuke Niwa
Do you mean instead that we synchronously invoke this algorithm when a
child node is inserted or removed from the host? If so, that’ll impose
unacceptable runtime cost for DOM mutations.
Post by Steve Orvell
Post by Ryosuke Niwa
I think the only timing the UA can support by default will be at the end of a
microtask or at the UA-code / user-code boundary, as done for custom element
lifecycle callbacks at the moment.
Post by Steve Orvell
Running this callback at the UA-code/user-code boundary seems like it
would be fine. Running the more complicated "distribute all the nodes"
proposals at this time would obviously not be feasible. The notion here is
that since we're processing only a single node at a time, this can be done
after an atomic dom action.
Indeed, running such an algorithm each time node is inserted or removed
will be quite expensive.
Post by Steve Orvell
Post by Ryosuke Niwa
“always correct” is a somewhat stronger statement than I would make here,
since while the UA calls these shouldDistributeToInsertionPoint callbacks
we'll certainly see transient offsetHeight values.
Post by Steve Orvell
Yes, you're right about that. Specifically it would be bad to try to
read `offsetHeight` in this callback and this would be an anti-pattern. If
that's not good enough, perhaps we can explore actually not working
directly with the node but instead the subset of information necessary to
be able to decide on distribution.
I'm not necessarily saying that it's not good enough. I'm just saying
that it is possible to observe such a state even with this API.
Post by Steve Orvell
Can you explain, under the initial proposal, how a user can ask an
element's dimensions and get the post-distribution answer? With current dom
api's I can be sure that if I do parent.appendChild(child) and then
parent.offsetWidth, the answer takes child into account. I'm looking to
understand how we don't violate this expectation when parent distributes.
Or if we violate this expectation, what is the proposed right way to ask
this question?
You don't get that guarantee in the design we discussed on Friday [1] [2].
In fact, we basically deferred the timing issue to other APIs that observe
DOM changes, namely mutation observers and custom elements lifecycle
callbacks. Each component uses those APIs to call distribute().
Post by Steve Orvell
In addition to rendering information about a node, distribution also
affects the flow of events. So a similar question: when is it safe to call
child.dispatchEvent such that if parent distributes elements to its
shadowRoot, elements in the shadowRoot will see the event?
Again, the timing was deferred in [1] and [2] so it really depends on when
each component decides to distribute.
- R. Niwa
[1] https://gist.github.com/rniwa/2f14588926e1a11c65d3
[2] https://gist.github.com/annevk/e9e61801fcfb251389ef
Ryosuke Niwa
2015-04-27 23:55:54 UTC
Permalink
Post by Ryosuke Niwa
Again, the timing was deferred in [1] and [2] so it really depends on when each component decides to distribute.
I want to be able to create an element <x-foo> that acts like other dom elements. This element uses Shadow DOM and distribution to encapsulate its details.
Let's imagine a 3rd party user named Bob that uses <div> and <x-foo>. Bob knows he can call div.appendChild(element) and then immediately ask div.offsetHeight and know that this height includes whatever the added element should contribute to the div's height. Bob expects to be able to do this with the <x-foo> element also since it is just another element from his perspective.
How can I, the author of <x-foo>, craft my element such that I don't violate Bob's expectations? Does your proposal support this?
In order to support this use case, the author of x-foo must use some mechanism to observe changes to x-foo's child nodes and invoke `distribute` synchronously. This will become possible, for example, if we added a childrenChanged lifecycle callback to custom elements.

That might be an acceptable mode of operation. If you want to synchronously update your insertion points, rely on custom elements' lifecycle callbacks, in which case you can only support direct children for distribution. Alternatively, if you want to support distributing a non-direct-child descendant, just use mutation observers to do it at the end of a microtask.

- R. Niwa
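
A sketch of this synchronous mode, assuming the hypothetical childrenChanged lifecycle callback together with the insertAt/remove/distributedNodes API from the first option of the proposal (none of these exist today), on top of the custom elements v0 registration API:

```js
// Sketch only: childrenChanged and content.insertAt()/remove()/distributedNodes
// are proposed/hypothetical APIs; document.registerElement is custom elements v0.
var XFooPrototype = Object.create(HTMLElement.prototype);

XFooPrototype.createdCallback = function() {
  var shadow = this.createShadowRoot();
  shadow.innerHTML = '<div class="wrapper"><content></content></div>';
  this._content = shadow.querySelector('content');
};

// Hypothetical lifecycle callback, run at the UA-code / user-code boundary,
// so distribution is already updated when a caller reads e.g. host.offsetHeight.
XFooPrototype.childrenChanged = function(addedNodes, removedNodes) {
  var content = this._content;
  removedNodes.forEach(function(node) { content.remove(node); });
  addedNodes.forEach(function(node) {
    content.insertAt(node, content.distributedNodes.length);
  });
};

document.registerElement('x-foo', { prototype: XFooPrototype });
```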
Steve Orvell
2015-04-28 00:43:32 UTC
Permalink
Post by Ryosuke Niwa
That might be an acceptable mode of operations. If you wanted to
synchronously update your insertion points, rely on custom element's
lifecycle callbacks and you can only support direct children for
distribution.
That's interesting, thanks for working through it. Given a
`childrenChanged` callback, I think your first proposal
`<content>.insertAt` and `<content>.remove` best supports a synchronous
mental model. As you note, re-distribution is then the element author's
responsibility. This would be done by listening to the synchronous
`distributionChanged` event. That seems straightforward.

Mutations that are not captured in childrenChanged that can affect
distribution would still be a problem, however. Given:

<div id="host">
<div id="child"></div>
</div>

child.setAttribute('slot', 'a');
host.offsetHeight;

Again, we are guaranteed that parent's offsetHeight includes any
contribution that adding the slot attribute caused (e.g. via a
#child[slot=a] rule)

If the `host` is a custom element that uses distribution, would it be
possible to have this same guarantee?

<x-foo id="host">
<div id="child"></div>
</x-foo>

child.setAttribute('slot', 'a');
host.offsetHeight;
Post by Ryosuke Niwa
Post by Steve Orvell
Post by Ryosuke Niwa
Again, the timing was deferred in [1] and [2] so it really depends on
when each component decides to distribute.
Post by Steve Orvell
I want to be able to create an element <x-foo> that acts like other dom
elements. This element uses Shadow DOM and distribution to encapsulate its
details.
Post by Steve Orvell
Let's imagine a 3rd party user named Bob that uses <div> and <x-foo>.
Bob knows he can call div.appendChild(element) and then immediately ask
div.offsetHeight and know that this height includes whatever the added
element should contribute to the div's height. Bob expects to be able to do
this with the <x-foo> element also since it is just another element from
his perspective.
Post by Steve Orvell
How can I, the author of <x-foo>, craft my element such that I don't
violate Bob's expectations? Does your proposal support this?
In order to support this use case, the author of x-foo must use some
mechanism to observe changes to x-foo's child nodes and involve
`distribute` synchronously. This will become possible, for example, if we
added childrenChanged lifecycle callback to custom elements.
That might be an acceptable mode of operations. If you wanted to
synchronously update your insertion points, rely on custom element's
lifecycle callbacks and you can only support direct children for
distribution. Alternatively, if you wanted to support to distribute a
non-direct-child descendent, just use mutation observers to do it at the
end of a micro task.
- R. Niwa
Ryosuke Niwa
2015-04-28 02:01:57 UTC
Permalink
Post by Ryosuke Niwa
That might be an acceptable mode of operations. If you wanted to synchronously update your insertion points, rely on custom element's lifecycle callbacks and you can only support direct children for distribution.
That's interesting, thanks for working through it. Given a `childrenChanged` callback, I think your first proposal `<content>.insertAt` and `<content>.remove` best supports a synchronous mental model. As you note, re-distribution is then the element author's responsibility. This would be done by listening to the synchronous `distributionChanged` event. That seems straightforward.
<div id="host">
<div id="child"></div>
</div>
child.setAttribute('slot', 'a');
host.offsetHeight;
Again, we are guaranteed that parent's offsetHeight includes any contribution that adding the slot attribute caused (e.g. via a #child[slot=a] rule)
If the `host` is a custom element that uses distribution, would it be possible to have this same guarantee?
<x-foo id="host">
<div id="child"></div>
</x-foo>
child.setAttribute('slot', 'a');
host.offsetHeight;
That's a good point. Perhaps we need to make childrenChanged optionally get called when attributes of child nodes are changed just like the way you can configure mutation observers to optionally monitor attribute changes.

- R. Niwa
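
For comparison, MutationObserver can already express this kind of opt-in today; a childrenChanged option could presumably look analogous. A sketch of the observer-based equivalent, where updateDistribution() stands in for whatever re-distribution the component does:

```js
// Today's MutationObserver analogue of "children plus their attributes":
// watch child list changes and attribute changes that can affect distribution.
new MutationObserver(function(records) {
  updateDistribution();     // assumed component-specific re-distribution helper
}).observe(host, {
  childList: true,          // child additions/removals
  subtree: true,            // observe descendants (there is no children-only mode)
  attributes: true,         // attribute changes...
  attributeFilter: ['slot'] // ...limited to ones that can affect distribution
});
```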
Steve Orvell
2015-04-28 02:32:56 UTC
Permalink
Post by Ryosuke Niwa
Perhaps we need to make childrenChanged optionally get called when
attributes of child nodes are changed just like the way you can configure
mutation observers to optionally monitor attribute changes.
Wow, let me summarize if I can. Let's say we have (a) a custom elements
synchronous callback `childrenChanged` that can see child adds/removes and
child attribute mutations, (b) the first option in the proposed api here
https://gist.github.com/rniwa/2f14588926e1a11c65d3, (c) user element code
that wires everything together correctly. Then, unless I am mistaken, we
have enough power to implement something like the currently spec'd
declarative `select` mechanism or the proposed `slot` mechanism without any
change to user's expectations about when information in the dom can be
queried.

Do the implementors think all of that is feasible?

Possible corner case: If a <content> is added to a shadowRoot, this should
probably invalidate the distribution and redo everything. To maintain a
synchronous mental model, the <content> mutation in the shadowRoot subtree
needs to be seen synchronously. This is not possible with the tools
mentioned above, but it seems like a reasonable requirement that the
shadowRoot author can be aware of this change since the author is causing
it to happen.
Post by Ryosuke Niwa
Post by Steve Orvell
Post by Ryosuke Niwa
That might be an acceptable mode of operations. If you wanted to
synchronously update your insertion points, rely on custom element's
lifecycle callbacks and you can only support direct children for
distribution.
Post by Steve Orvell
That's interesting, thanks for working through it. Given a
`childrenChanged` callback, I think your first proposal
`<content>.insertAt` and `<content>.remove` best supports a synchronous
mental model. As you note, re-distribution is then the element author's
responsibility. This would be done by listening to the synchronous
`distributionChanged` event. That seems straightforward.
Post by Steve Orvell
Mutations that are not captured in childrenChanged that can affect
<div id="host">
<div id="child"></div>
</div>
child.setAttribute('slot', 'a');
host.offsetHeight;
Again, we are guaranteed that parent's offsetHeight includes any
contribution that adding the slot attribute caused (e.g. via a
#child[slot=a] rule)
Post by Steve Orvell
If the `host` is a custom element that uses distribution, would it be
possible to have this same guarantee?
Post by Steve Orvell
<x-foo id="host">
<div id="child"></div>
</x-foo>
child.setAttribute('slot', 'a');
host.offsetHeight;
That's a good point. Perhaps we need to make childrenChanged optionally
get called when attributes of child nodes are changed just like the way you
can configure mutation observers to optionally monitor attribute changes.
- R. Niwa
Ryosuke Niwa
2015-04-28 03:18:21 UTC
Permalink
Post by Ryosuke Niwa
Perhaps we need to make childrenChanged optionally get called when attributes of child nodes are changed just like the way you can configure mutation observers to optionally monitor attribute changes.
Wow, let me summarize if I can. Let's say we have (a) a custom elements synchronous callback `childrenChanged` that can see child adds/removes and child attribute mutations, (b) the first option in the proposed api here https://gist.github.com/rniwa/2f14588926e1a11c65d3, (c) user element code that wires everything together correctly. Then, unless I am mistaken, we have enough power to implement something like the currently spec'd declarative `select` mechanism or the proposed `slot` mechanism without any change to user's expectations about when information in the dom can be queried.
Right. The sticking point is that it's like re-introducing mutation events all over again if we don't do it carefully.
Do the implementors think all of that is feasible?
I think something along this line should be feasible to implement, but the performance impact of firing so many events may warrant going back to microtask timing and thinking of an alternative solution for consistency.
Possible corner case: If a <content> is added to a shadowRoot, this should probably invalidate the distribution and redo everything. To maintain a synchronous mental model, the <content> mutation in the shadowRoot subtree needs to be seen synchronously. This is not possible with the tools mentioned above, but it seems like a reasonable requirement that the shadowRoot author can be aware of this change since the author is causing it to happen.
Alternatively, an insertion point could start empty, and the author could move stuff into it after running. We can also add `removeAll` on HTMLContentElement or 'resetDistribution' on ShadowRoot to remove all distributed nodes from a given insertion point or all insertion points associated with a shadow root.

- R. Niwa
Ryosuke Niwa
2015-04-25 20:49:52 UTC
Permalink
Post by Anne van Kesteren
Post by Ryosuke Niwa
In today's F2F, I've got an action item to come up with a concrete workable
proposal for imperative API. I had a great chat about this afterwards with
various people who attended F2F and here's a summary. I'll continue to work
with Dimitri & Erik to work out details in the coming months (our deadline
is July 13th).
https://gist.github.com/rniwa/2f14588926e1a11c65d3
I thought we came up with something somewhat simpler that didn't
https://gist.github.com/annevk/e9e61801fcfb251389ef
```js
var shadow = host.createShadowRoot({
  mode: "closed",
  distribute: (distributionList, insertionList) => {
    for (var i = 0; i < distributionList.length; i++) {
      for (var ii = 0; ii < insertionList.length; ii++) {
        var select = insertionList[ii].getAttribute("select")
        if (select != null && distributionList[i].matches(select)) {
          insertionList[ii].add(distributionList[i])
        } else if (select == null) {
          insertionList[ii].add(distributionList[i])
        }
      }
    }
  }
})
shadow.distribute()
```

One major drawback of this API is that computing insertionList is expensive, because we'd have to do one of the following (where n is the number of nodes in the shadow DOM):
1. Maintain an ordered list of insertion points, which requires running an O(n) algorithm whenever a content element is inserted or removed.
2. Lazily compute the ordered list of insertion points, in O(n), when the `distribute` callback is about to be called (a sketch follows below).
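
A polyfill-style sketch of that lazy computation, assuming a `<content>`-based shadow tree (illustrative only, not part of the proposal):

```js
// Lazily compute insertionList in tree order; O(n) in the number of nodes in
// the shadow tree, run right before each distribute() call.
function computeInsertionList(shadowRoot) {
  var list = [];
  var walker = document.createTreeWalker(shadowRoot, NodeFilter.SHOW_ELEMENT);
  for (var node = walker.nextNode(); node !== null; node = walker.nextNode()) {
    if (node.localName === 'content') {
      list.push(node);
    }
  }
  return list;
}
```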

If we wanted to allow a non-direct-child descendant (e.g. a grandchild node) of the host to be distributed, then we'd also need an O(m) algorithm, where m is the number of nodes under the host element. It might be okay to carry over the current restriction that only direct children of the shadow host can be distributed into insertion points, but I can't think of a good reason as to why such a restriction is desirable.

- R. Niwa
Anne van Kesteren
2015-04-27 06:05:50 UTC
Permalink
Post by Ryosuke Niwa
One major drawback of this API is computing insertionList is expensive
because we'd have to either (where n is the number of nodes in the shadow
Maintain an ordered list of insertion points, which results in O(n)
algorithm to run whenever a content element is inserted or removed.
Lazily compute the ordered list of insertion points when `distribute`
callback is about to get called in O(n).
The alternative is not exposing it and letting developers get hold of
the slots. The rationale for letting the browser do it is because you
need the slots either way and the browser should be able to optimize
better.
Post by Ryosuke Niwa
If we wanted to allow non-direct child descendent (e.g. grand child node) of
the host to be distributed, then we'd also need O(m) algorithm where m is
the number of under the host element. It might be okay to carry on the
current restraint that only direct child of shadow host can be distributed
into insertion points but I can't think of a good reason as to why such a
restriction is desirable.
So you mean that we'd turn distributionList into a subtree? I.e. you
can pass all descendants of a host element to add()? I remember Yehuda
making the point that this was desirable to him.

The other thing I would like to explore is what an API would look like
that does the subclassing as well. Even though we deferred that to v2
I got the impression talking to some folks after the meeting that
there might be more common ground than I thought.


As for the points before about mutation observers: I kind of like just
having distribute() for v1 since it allows maximum flexibility. I
would be okay with having an option, either opt-in or opt-out,
that does the observing automatically, though I guess if we move from
children to descendants that gets more expensive.
--
https://annevankesteren.nl/
Justin Fagnani
2015-04-27 07:25:10 UTC
Permalink
Post by Anne van Kesteren
Post by Ryosuke Niwa
One major drawback of this API is computing insertionList is expensive
because we'd have to either (where n is the number of nodes in the shadow
Maintain an ordered list of insertion points, which results in O(n)
algorithm to run whenever a content element is inserted or removed.
I don't expect shadow roots to be modified that much. We certainly don't
see it now, though the imperative API opens up some new possibilities like
calculating a grouping of child nodes and generating a <content> tag per
group, or even generating a <content> tag per child to perform decoration.
I still think those would be very rare cases.
Post by Anne van Kesteren
Post by Ryosuke Niwa
Lazily compute the ordered list of insertion points when `distribute`
callback is about to get called in O(n).
The alternative is not exposing it and letting developers get hold of
the slots. The rationale for letting the browser do it is because you
need the slots either way and the browser should be able to optimize
better.
Post by Ryosuke Niwa
If we wanted to allow non-direct child descendent (e.g. grand child
node) of
Post by Ryosuke Niwa
the host to be distributed, then we'd also need O(m) algorithm where m is
the number of under the host element. It might be okay to carry on the
current restraint that only direct child of shadow host can be
distributed
Post by Ryosuke Niwa
into insertion points but I can't think of a good reason as to why such a
restriction is desirable.
The main reason is that you know that only a direct parent of a node can
distribute it. Otherwise any ancestor could distribute a node, and in
addition to probably being confusing and fragile, you have to define who
wins when multiple ancestors try to.

There are cases where you really want to group element logically by one
tree structure and visually by another, like tabs. I think an alternative
approach to distributing arbitrary descendants would be to see if nodes can
cooperate on distribution so that a node could pass its direct children to
another node's insertion point. The direct child restriction would still be
there, so you always know who's responsible, but you can get the same
effect as distributing descendants for cooperating sets of elements.
Post by Anne van Kesteren
So you mean that we'd turn distributionList into a subtree? I.e. you
can pass all descendants of a host element to add()? I remember Yehuda
making the point that this was desirable to him.
The other thing I would like to explore is what an API would look like
that does the subclassing as well. Even though we deferred that to v2
I got the impression talking to some folks after the meeting that
there might be more common ground than I thought.
I really don't think the platform needs to do anything to support
subclassing since it can be done so easily at the library level now that
multiple generations of shadow roots are gone. As long as a subclass and
base class can cooperate to produce a single shadow root with insertion
points, the platform doesn't need to know how they did it.

Cheers,
Justin
Post by Anne van Kesteren
As for the points before about mutation observers. I kind of like just
having distribute() for v1 since it allows maximum flexibility. I
would be okay with having an option that is either optin or optout
that does the observing automatically, though I guess if we move from
children to descendants that gets more expensive.
--
https://annevankesteren.nl/
Anne van Kesteren
2015-04-27 08:01:59 UTC
Permalink
On Mon, Apr 27, 2015 at 9:25 AM, Justin Fagnani
Post by Justin Fagnani
I really don't think the platform needs to do anything to support
subclassing since it can be done so easily at the library level now that
multiple generations of shadow roots are gone. As long as a subclass and
base class can cooperate to produce a single shadow root with insertion
points, the platform doesn't need to know how they did it.
So a) this is only if they cooperate and the superclass does not want
to keep its tree and distribution logic hidden and b) if we want to
eventually add declarative functionality we'll need to explain it
somehow. Seems better that we know upfront how that will work.
--
https://annevankesteren.nl/
Justin Fagnani
2015-04-27 08:23:41 UTC
Permalink
Post by Anne van Kesteren
On Mon, Apr 27, 2015 at 9:25 AM, Justin Fagnani
Post by Justin Fagnani
I really don't think the platform needs to do anything to support
subclassing since it can be done so easily at the library level now that
multiple generations of shadow roots are gone. As long as a subclass and
base class can cooperate to produce a single shadow root with insertion
points, the platform doesn't need to know how they did it.
So a) this is only if they cooperate
In reality, base and subclass are going to have to cooperate. There's no
style or dom isolation between the two anymore, and lifecycle callbacks,
templating, and data binding already make them pretty entangled.
Post by Anne van Kesteren
and the superclass does not want
to keep its tree and distribution logic hidden
A separate hidden tree per class sounds very much like multiple generations
of shadow trees, and we just killed that... This is one of my concerns
about the inheritance part of the slots proposal: it appeared to give new
significance to <template> tags which essentially turn them into multiple
shadow roots, just without the style isolation.
Post by Anne van Kesteren
and b) if we want to
eventually add declarative functionality we'll need to explain it
somehow. Seems better that we know upfront how that will work.
I think this is a case where the frameworks would lead and the platform, if
it ever decided to, could integrate the best approach - much like data
binding.

I imagine that frameworks will create declarative forms of distribution and
template inheritance that work something like the current system, or the
slots proposal (or other template systems with inheritance like Jinja). I
don't think a platform-based solution would even be faster in the
common case, because the frameworks can pre-compute the concrete template
(including distribution points and bindings) from the entire inheritance
hierarchy up front, and stamp out the same thing per instance.

Cheers,
Justin
Post by Anne van Kesteren
--
https://annevankesteren.nl/
Anne van Kesteren
2015-04-27 08:34:22 UTC
Permalink
On Mon, Apr 27, 2015 at 10:23 AM, Justin Fagnani
Post by Justin Fagnani
A separate hidden tree per class sounds very much like multiple generations
of shadow trees, and we just killed that...
We "killed" it for v1, not indefinitely. As I already said, based on
my post-meeting conversations it might not have been as contentious as
I thought. It's mostly the specifics. I haven't quite wrapped my head
around those specifics, but the way Gecko implemented <shadow> (which
does not match the specification or Chrome) seemed to be very similar
to what Apple wanted.
--
https://annevankesteren.nl/
Matthew Robb
2015-04-27 13:41:17 UTC
Permalink
I know this isn't the biggest deal, but I think naming the function
distribute is highly suggestive; why not just expose this as
`childListChangedCallback`?


- Matthew Robb
Post by Anne van Kesteren
On Mon, Apr 27, 2015 at 10:23 AM, Justin Fagnani
Post by Justin Fagnani
A separate hidden tree per class sounds very much like multiple
generations
Post by Justin Fagnani
of shadow trees, and we just killed that...
We "killed" it for v1, not indefinitely. As I already said, based on
my post-meeting conversations it might not have been as contentious as
I thought. It's mostly the specifics. I haven't quite wrapped my head
around those specifics, but the way Gecko implemented <shadow> (which
does not match the specification or Chrome) seemed to be very similar
to what Apple wanted.
--
https://annevankesteren.nl/
Anne van Kesteren
2015-04-27 14:04:20 UTC
Permalink
Post by Matthew Robb
I know this isn't the biggest deal but I think naming the function
distribute is highly suggestive, why not just expose this as
`childListChangedCallback` ?
Because that doesn't match the actual semantics. The callback is
invoked once distribute() is invoked by the web developer or
distribute() has been invoked on a composed ancestor ShadowRoot and
all composed ancestor ShadowRoots have already had their callback
run. (Note that the distribute callback and the distribute method are
different things.)

Since the distribute callback is in charge of distribution it does in
fact make sense to call it such I think.
--
https://annevankesteren.nl/
Ryosuke Niwa
2015-04-28 03:48:30 UTC
Permalink
Post by Anne van Kesteren
Post by Ryosuke Niwa
If we wanted to allow non-direct child descendent (e.g. grand child node) of
the host to be distributed, then we'd also need O(m) algorithm where m is
the number of under the host element. It might be okay to carry on the
current restraint that only direct child of shadow host can be distributed
into insertion points but I can't think of a good reason as to why such a
restriction is desirable.
The main reason is that you know that only a direct parent of a node can distribute it. Otherwise any ancestor could distribute a node, and in addition to probably being confusing and fragile, you have to define who wins when multiple ancestors try to.
There are cases where you really want to group element logically by one tree structure and visually by another, like tabs. I think an alternative approach to distributing arbitrary descendants would be to see if nodes can cooperate on distribution so that a node could pass its direct children to another node's insertion point. The direct child restriction would still be there, so you always know who's responsible, but you can get the same effect as distributing descendants for a cooperating sets of elements.
That's an interesting approach. Ted and I discussed this design, and it seems workable with Anne's `distribute` callback approach (= the second approach in my proposal).

Conceptually, we ask each child of a shadow host for the list of distributable nodes under that child (including itself). For a normal node without a shadow root, that's simply the node itself along with all the distribution candidates returned by its children. For a node with a shadow root, we ask its implementation. The recursive algorithm can be written as follows in pseudo code:

```
NodeList distributionList(Node n):
    if n has shadowRoot:
        return <ask n for the list of distributable nodes under n (1)>
    else:
        list = [n]
        for each child in n:
            list += distributionList(child)
        return list
```

Now, if we adopted the `distribute` callback approach, one obvious mechanism to do (1) is to call `distribute` on n and return whatever it didn't distribute as a list. Another obvious approach is to simply return [n] to avoid the mess of n later deciding to distribute a new node.
Post by Anne van Kesteren
So you mean that we'd turn distributionList into a subtree? I.e. you
can pass all descendants of a host element to add()? I remember Yehuda
making the point that this was desirable to him.
The other thing I would like to explore is what an API would look like
that does the subclassing as well. Even though we deferred that to v2
I got the impression talking to some folks after the meeting that
there might be more common ground than I thought.
I really don't think the platform needs to do anything to support subclassing since it can be done so easily at the library level now that multiple generations of shadow roots are gone. As long as a subclass and base class can cooperate to produce a single shadow root with insertion points, the platform doesn't need to know how they did it.
I think we should eventually add native declarative inheritance support for all of this.

One thing that worries me about the `distribute` callback approach (a.k.a. Anne's approach) is that it bakes the distribution algorithm into the platform without us having thoroughly studied upfront how subclassing will be done.

Mozilla tried to solve this problem with XBL, and they seem to think what they have isn't really great. Google has spent multiple years working on this problem but has come around to saying their solution, multiple generations of shadow DOM, may not be as great as they thought it would be. Given that, I'm quite terrified of making the same mistake in spec'ing how distribution works and later regretting it.

In that regard, the first approach, without native redistribution support, has the advantage of letting Web developers experiment with the bare minimum and try out which distribution algorithms and mechanisms work best.

- R. Niwa
Hayato Ito
2015-04-28 04:09:18 UTC
Permalink
For the record, I, as a spec editor, still think "Shadow Root hosts yet
another Shadow Root" is the best idea among all ideas I've ever seen, with
a "<shadow> as function", because it can explain everything in a unified
way using a single tree of trees, without bringing yet another complexity
such as multiple templates.

Please see
https://github.com/w3c/webcomponents/wiki/Multiple-Shadow-Roots-as-%22a-Shadow-Root-hosts-another-Shadow-Root%22
Post by Ryosuke Niwa
Post by Justin Fagnani
Post by Anne van Kesteren
Post by Ryosuke Niwa
If we wanted to allow non-direct child descendent (e.g. grand child
node) of
Post by Justin Fagnani
Post by Anne van Kesteren
Post by Ryosuke Niwa
the host to be distributed, then we'd also need O(m) algorithm where
m is
Post by Justin Fagnani
Post by Anne van Kesteren
Post by Ryosuke Niwa
the number of under the host element. It might be okay to carry on
the
Post by Justin Fagnani
Post by Anne van Kesteren
Post by Ryosuke Niwa
current restraint that only direct child of shadow host can be
distributed
Post by Justin Fagnani
Post by Anne van Kesteren
Post by Ryosuke Niwa
into insertion points but I can't think of a good reason as to why
such a
Post by Justin Fagnani
Post by Anne van Kesteren
Post by Ryosuke Niwa
restriction is desirable.
The main reason is that you know that only a direct parent of a node can
distribute it. Otherwise any ancestor could distribute a node, and in
addition to probably being confusing and fragile, you have to define who
wins when multiple ancestors try to.
Post by Justin Fagnani
There are cases where you really want to group element logically by one
tree structure and visually by another, like tabs. I think an alternative
approach to distributing arbitrary descendants would be to see if nodes can
cooperate on distribution so that a node could pass its direct children to
another node's insertion point. The direct child restriction would still be
there, so you always know who's responsible, but you can get the same
effect as distributing descendants for a cooperating sets of elements.
That's an interesting approach. Ted and I discussed this design, and it
seems workable with Anne's `distribute` callback approach (= the second
approach in my proposal).
Conceptually, we ask each child of a shadow host the list of distributable
node for under that child (including itself). For normal node without a
shadow root, it'll simply itself along with all the distribution candidates
returned by its children. For a node with a shadow root, we ask its
implementation. The recursive algorithm can be written as follows in pseudo
```
return <ask n the list of distributable noes under n (1)>
list = [n]
list += distributionList(n)
return list
```
Now, if we adopted `distribute` callback approach, one obvious mechanism
to do (1) is to call `distribute` on n and return whatever it didn't
distribute as a list. Another obvious approach is to simply return [n] to
avoid the mess of n later deciding to distribute a new node.
Post by Justin Fagnani
Post by Anne van Kesteren
So you mean that we'd turn distributionList into a subtree? I.e. you
can pass all descendants of a host element to add()? I remember Yehuda
making the point that this was desirable to him.
The other thing I would like to explore is what an API would look like
that does the subclassing as well. Even though we deferred that to v2
I got the impression talking to some folks after the meeting that
there might be more common ground than I thought.
I really don't think the platform needs to do anything to support
subclassing since it can be done so easily at the library level now that
multiple generations of shadow roots are gone. As long as a subclass and
base class can cooperate to produce a single shadow root with insertion
points, the platform doesn't need to know how they did it.
I think we should eventually add native declarative inheritance support for all of this.
One thing that worries me about the `distribute` callback approach (a.k.a.
Anne's approach) is that it bakes distribution algorithm into the platform
without us having thoroughly studied how subclassing will be done upfront.
Mozilla tried to solve this problem with XBS, and they seem to think what
they have isn't really great. Google has spent multiple years working on
this problem but they come around to say their solution, multiple
generations of shadow DOM, may not be as great as they thought it would be.
Given that, I'm quite terrified of making the same mistake in spec'ing how
distribution works and later regretting it.
In that regard, the first approach, without a built-in distribution algorithm,
has the advantage of letting Web developers experiment with the bare minimum
and try out which distribution algorithms and mechanisms work best.
- R. Niwa
Ryosuke Niwa
2015-04-28 04:33:28 UTC
Permalink
Note: Our current consensus is to defer this until v2.
For the record, I, as a spec editor, still think "Shadow Root hosts yet another Shadow Root" is the best idea among all the ideas I've seen, combined with "<shadow> as function", because it can explain everything in a unified way using a single tree of trees, without introducing yet more complexity such as multiple templates.
Please see https://github.com/w3c/webcomponents/wiki/Multiple-Shadow-Roots-as-%22a-Shadow-Root-hosts-another-Shadow-Root%22
That's a great mental model for multiple generations of shadow DOM, but it doesn't solve any of the problems with the API itself. As I've repeatedly stated in the past, the problem is the order of transclusion. Quoting from [1],

The `<shadow>` element is optimized for wrapping a base class, not filling it in. In practice, no subclass ever wants to wrap their base class with additional user interface elements. A subclass is a specialization of a base class, and specialization of UI generally means adding specialized elements in the middle of a component, not wrapping new elements outside some inherited core.

In the three component libraries [2] described above, the only cases where a subclass uses `<shadow>` is if the subclass wants to add additional styling. That is, a subclass wants to override base class styling, and can do so via:

```
<template>
<style>subclass styles go here</style>
<shadow></shadow>
</template>
```

One rare exception is `core-menu` [3], which does add some components in a wrapper around a `<shadow>`. However, even in that case, the components in question are instances of `<core-a11y-keys>`, a component which defines keyboard shortcuts. That is, the component is not using this wrapper ability to add visible user interface elements, so the general point stands.

As with the above point, the fact that no practical component has need for this ability to wrap an older shadow tree suggests the design is solving a problem that does not, in fact, exist in practice.


[1] https://github.com/w3c/webcomponents/wiki/Proposal-for-changes-to-manage-Shadow-DOM-content-distribution
[2] Polymer’s core- elements, Polymer’s paper- elements, and the Basic Web Components’ collection of basic- elements
[3] https://www.google.com/url?q=https%3A%2F%2Fgithub.com%2FPolymer%2Fcore-menu%2Fblob%2Fmaster%2Fcore-menu.html&sa=D&sntz=1&usg=AFQjCNH0Rv14ENbplb6VYWFh8CsfVo9m_A

- R. Niwa
Hayato Ito
2015-04-28 04:50:52 UTC
Permalink
I'm aware that our consensus is to defer this until v2. Don't worry. :)

The feature of "<shadow> as function" supports *subclassing*. That's
exactly why I introduced it into the spec (and implemented it in Blink) at one
point. I think Jan Miksovsky, co-author of Apple's proposal, knows that well.

The reason I reverted it from the spec (and from Blink) [1] was the technical
difficulty of implementing it, though I've not proved that it's impossible to
implement.

[1] https://codereview.chromium.org/137993003
Ryosuke Niwa
2015-04-28 17:09:55 UTC
Permalink
The feature of "<shadow> as function" supports *subclassing*. That's exactly why I introduced it into the spec (and implemented it in Blink) at one point. I think Jan Miksovsky, co-author of Apple's proposal, knows that well.
We're (and consequently I'm) fully aware of that feature/proposal, and we still don't think it adequately addresses the needs of subclassing.

The problem with "<shadow> as function" is that the superclass implicitly selects nodes based on a CSS selector, so unless the nodes a subclass wants to insert exactly match what the author of the superclass considered, the subclass won't be able to override it. e.g. if the superclass had an insertion point with select="input.foo", then it's not possible for a subclass to then override it with, for example, an input element wrapped in a span.
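A minimal sketch of that mismatch (the custom element name is made up for illustration): the superclass' insertion point only matches direct children of the host that satisfy the selector, so a wrapped input is never distributed.
```html
<!-- Superclass shadow tree (Shadow DOM v0 syntax) -->
<template>
<content select="input.foo"></content>
</template>

<!-- Hypothetical subclass usage: the <span> child doesn't match input.foo,
     so nothing ends up in the superclass' insertion point. -->
<my-subclassed-widget>
<span><input class="foo"></span>
</my-subclassed-widget>
```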
The reason I reverted it from the spec (and the blink), [1], is a technical difficulty to implement, though I've not proved that it's impossible to implement.
I'm not even arguing about the implementation difficulty. I'm saying that the semantics is inadequate for subclassing.

- R. Niwa
Hayato Ito
2015-04-28 17:34:18 UTC
Permalink
Could you help me to understand what "implicitly" means here?

In this particular case, you might want to blame the super class's author
and tell the author, "Please use <content select=".input-foo"> so that a
subclass can override it with an arbitrary element with class="input-foo"."

Could you give me a concrete example which <content slot> can support, but
"<shadow> as function" can't support?
Ryosuke Niwa
2015-04-28 18:52:56 UTC
Permalink
Post by Hayato Ito
Could you help me to understand what "implicitly" means here?
I mean that the superclass’ insertion points use a CSS selector to select nodes to distribute. As a result, unless the subclass can supply exactly the kinds of nodes that match the CSS selector, it won’t be able to override the contents of the insertion point.
Post by Hayato Ito
In this particular case, you might want to blame the super class's author and tell the author, "Please use <content select=.input-foo> so that subclass can override it with arbitrary element with class="input-foo”.
The problem is that it may not be possible to coordinate across the class hierarchy like that if the superclass is defined in a third-party library. With the named slot approach, the superclass only specifies the name of a slot, so the subclass will be able to override it with whatever element it supplies as needed.
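A rough sketch of the contrast (the element and attribute names below are illustrative of the named-slot idea, not the proposal's exact syntax): the superclass identifies the insertion point by name only, so the subclass can fill it with whatever element it chooses, including a wrapped input.
```html
<!-- Superclass shadow tree: the insertion point is identified only by a name -->
<template>
<content slot="control"></content>
</template>

<!-- Hypothetical subclass usage: any element labeled with that slot name can
     fill the insertion point, regardless of its tag or structure. -->
<my-subclassed-widget>
<span slot="control"><input class="foo"></span>
</my-subclassed-widget>
```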
Post by Hayato Ito
Could you give me an concrete example which <content slot> can support, but "<shadow> as function" can't support?
The problem isn’t so much that slot can do something "<shadow> as function" can’t support. It’s that "<shadow> as function" promotes over-specification of which elements can get into its insertion points by virtue of using a CSS selector.

Now, it's possible that we can encourage authors to always use a class name in the select attribute to support this use case. But then why add a capability that we then discourage authors from using?

- R. Niwa
Hayato Ito
2015-04-30 04:17:55 UTC
Permalink
Thanks. If my understanding is correct, the conclusions so far are:

- There are no use cases which "<shadow> as function" can't support, but
"<content slot>" can support.
- There are use cases which "<shadow> as function" can support, but
"<content slot>" can't support.
- "<shadow> as function" is more expressive than "<content slot>"
- "<content slot>" is trying to achieve something by removing
expressiveness from web developers, instead of trusting them.

I still don't fully understand what the proposal is trying to achieve. I've
never heard such a complaint, "<content select> is too expressive and easy
to misuse. Please remove it", from web developers.

I think any good API could potentially be misused by a web developer.
But that shouldn't be a reason to remove an expressive API from web
developers who can use it correctly and benefit from its
expressiveness.
Ryosuke Niwa
2015-04-30 07:18:41 UTC
Permalink
- There is no use cases which "<shadow> as function" can't support, but "<content slot>" can support.
- there are use cases which "<shadow> as function" can support, but "<content slot>" can't support.
I disagree. What "<shadow> as function" provides is an extra syntax by which authors can choose elements. That's not a use case. A use case is a solution for a concrete user scenario such as building a social network button.
- "<shadow> as function" is more expressive than "<content slot>"
Again, I disagree.
- "<content slot>" is trying to achieve something by removing expressiveness from web developers, instead of trusting them.
I still don't understand fully what the proposal is trying to achieve. I've never heard such a complain, "<content select> is too expressive and easy to be misused. Please remove it", from web developers.
I think any good APIs could be potentially wrongly used by a web developer. But that shouldn't be a reason that we can remove a expressive API from web developers who can use it correctly and get benefits from the expressiveness.
Now let me make an analogous comparison between C++ and assembly language.

- There is no use cases which assembly can't support, but C++ can support.
- There are use cases which assembly can support, but C++ can't support.
- Assembly language is more expressive than C++.
- C++ is trying to achieve something by removing expressiveness from programmers, instead of trusting them.

Does that mean we should all be coding in assembly? Certainly not.

For a more relevant analogy, one could construct the entire document using JavaScript without using HTML at all, since the DOM API exposed to JavaScript can construct a set of trees that is a strict superset of what the HTML tree building algorithm can generate. Yet we don't see that happening even in top-tier Web apps just because the DOM API is more expressive. The vast majority of Web apps still use plenty of templates and declarative formats to construct the DOM for simplicity and clarity, even though the imperative alternatives are strictly more powerful.

Why did we abandon XHTML 2.0? It was certainly more expressive. Why not SGML? It's a lot more expressive than XML; you can re-define special characters as you'd like. Expressiveness is not necessarily the most desirable characteristic of anything by itself. The shape of the solution we need depends on the kind of problem we're solving.

- R. Niwa
Hayato Ito
2015-04-30 08:47:24 UTC
Permalink
Thanks, let me update my understanding:

- There are no use cases which "<shadow> as function" can't support, but
"<content slot>" can support.
- The purpose of the proposal is to remove an *extra* syntax. There are no
other goals.
- There is no reason to consider the "<content slot>" proposal if we have a use
case which this *extra* syntax can achieve.

I also feel that several topics are mixed in the proposal, "Imperative
APIs, Multiple Templates and <content slot>", which makes it hard for me to
understand the goal of each.
Can I assume that the proposal is trying to remove "<content select>", not
only from multiple templates, but from everywhere?
Ryosuke Niwa
2015-04-30 09:38:12 UTC
Permalink
- There is no use cases which "<shadow> as function" can't support, but "<content slot>" can support.
- The purpose of the proposal is to remove an *extra* syntax. There is no other goals.
- There is no reason to consider "<content slot>" proposal if we have a use case which this *extra* syntax can achieve.
That's not at all what I'm saying. As far as we (Apple) are concerned, "<shadow> as a function" is a mere proposal just as much as our "<content slot>" is, since you've never convinced us that "<shadow> as a function" is a good solution for shadow DOM inheritance. Both proposals should be evaluated based on concrete use cases.

And even if there are use cases which a given proposal (either "<shadow> as a function" or named slots) doesn't adequately address, there are multiple options to consider:
1. Reject the use case because it's not important
2. Defer the use case for future extensions
3. Modify the proposal as needed
4. Reject the proposal because the above options are not viable
I'm also feeling that several topic are mixed in the proposal, "Imperative APIs, Multiple Templates and <content slot>", which makes me hard to understand the goal of each.
Can I assume that the proposal is trying to remove "<content select>", not only from such a multiple templates, but also from everywhere?
As I understand the situation, the last F2F's resolution is to remove <content select> entirely. That's not a proposal but rather the tentative consensus of the working group. If you'd like, we can initiate a formal CfC process to reach a consensus on this matter although I highly doubt the outcome will be different given the attendees of the meeting.

- R. Niwa
Hayato Ito
2015-04-30 10:03:54 UTC
Permalink
Post by Hayato Ito
I'm also feeling that several topic are mixed in the proposal,
"Imperative APIs, Multiple Templates and <content slot>", which makes me
hard to understand the goal of each.
Post by Hayato Ito
Can I assume that the proposal is trying to remove "<content select>",
not only from such a multiple templates, but also from everywhere?
As I understand the situation, the last F2F's resolution is to remove
<content select> entirely. That's not a proposal but rather the tentative
consensus of the working group. If you'd like, we can initiate a formal CfC
process to reach a consensus on this matter although I highly doubt the
outcome will be different given the attendees of the meeting.
This is not true.
The resolution is: The decision is blocked on "The upcoming proposal of
Imperative APIs".
Hayato Ito
2015-04-30 10:26:29 UTC
Permalink
For reference, the discussion about "<shadow> as function" took place in the W3C
bugzilla, https://www.w3.org/Bugs/Public/show_bug.cgi?id=22344, where everyone
in the discussion agreed with the proposal.
Anne van Kesteren
2015-04-30 11:43:48 UTC
Permalink
Post by Ryosuke Niwa
The problem with "<shadow> as function" is that the superclass implicitly selects nodes based on a CSS selector so unless the nodes a subclass wants to insert matches exactly what the author of superclass considered, the subclass won't be able to override it. e.g. if the superclass had an insertion point with select="input.foo", then it's not possible for a subclass to then override it with, for example, an input element wrapped in a span.
So what if we flipped this as well and came up with an imperative API
for "<shadow> as a function". I.e. "<shadow> as an actual function"?
Would that give us agreement?

It'd be great to have something like this available.
--
https://annevankesteren.nl/
Hayato Ito
2015-04-30 16:16:37 UTC
Permalink
Thanks Anne, I agree that it would be great to have something like this.

I think it's too early for us to judge because we don't have a
well-defined imperative API as of now. Let's re-open this issue after we
can see how an imperative API goes.
I'll file a bug for the spec about this inheritance challenge so that we
can continue the discussion in the bugzilla.
Hayato Ito
2015-04-30 16:24:42 UTC
Permalink
Filed as https://www.w3.org/Bugs/Public/show_bug.cgi?id=28587.
Ryosuke Niwa
2015-04-30 18:00:24 UTC
Permalink
Post by Anne van Kesteren
So what if we flipped this as well and came up with an imperative API
for "<shadow> as a function". I.e. "<shadow> as an actual function"?
Would that give us agreement?
We object on the basis that "<shadow> as a function" is a fundamentally backwards way of doing inheritance. If you have a MyMapView and define a subclass MyScrollableMapView to make it scrollable, then MyScrollableMapView must be a MyMapView. It doesn't make any sense for MyScrollableMapView, for example, to be a ScrollView that then contains a MyMapView. That's a has-a relationship, which is appropriate for composition.
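A minimal sketch of that distinction, using plain classes with the hypothetical names from the example (no element API implied):
```js
// is-a: the subclass *is* a map view and specializes its rendering.
class MyMapView {
  render() { /* draw the map */ }
}
class MyScrollableMapView extends MyMapView {
  render() {
    super.render();
    // ...then layer scrolling behavior on top of the inherited rendering.
  }
}

// has-a: a scroll view that merely *contains* a map view. This is composition,
// which is roughly the relationship "<shadow> as a function" ends up modeling.
class ScrollView {
  constructor() {
    this.content = new MyMapView();
  }
}
```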

- R. Niwa
Brian Kardell
2015-04-30 21:29:32 UTC
Permalink
Post by Ryosuke Niwa
Post by Anne van Kesteren
So what if we flipped this as well and came up with an imperative API
for "<shadow> as a function". I.e. "<shadow> as an actual function"?
Would that give us agreement?
We object on the basis that "<shadow> as a function" is fundamentally backwards way of doing the inheritance. If you have a MyMapView and define a subclass MyScrollableMapView to make it scrollable, then MyScrollableMapView must be a MyMapView. It doesn't make any sense for MyScrollableMapView, for example, to be a ScrollView that then contains MyMapView. That's has-a relationship which is appropriate for composition.
- R. Niwa
Is there really a hard need for inheritance over composition? Won't
composition ability + an imperative API that allows you to properly
delegate to the stuff you contain be just fine for a v1?
--
Brian Kardell :: @briankardell :: hitchjs.com
Ryosuke Niwa
2015-04-30 21:44:09 UTC
Permalink
Post by Brian Kardell
Is there really a hard need for inheritance over composition? Won't
composition ability + an imperative API that allows you to properly
delegate to the stuff you contain be just fine for a v1?
Per resolutions in F2F last Friday, this is a discussion for v2 since we're definitely not adding multiple generations of shadow DOM in v1.

However, we should have a sound plan for inheritance in v2 and make sure our imperative API is forward compatible with it. So the goal here is to come up with some plan for inheritance so that we can be confident that our inheritance API is not completely busted.

- R. Niwa
Ryosuke Niwa
2015-04-30 21:45:03 UTC
Permalink
Post by Ryosuke Niwa
Per resolutions in F2F last Friday, this is a discussion for v2 since we're definitely not adding multiple generations of shadow DOM in v1.
However, we should have a sound plan for inheritance in v2 and make sure our imperative API is forward compatible with it. So the goal here is to come up with some plan for inheritance so that we can be confident that our inheritance API is not completely busted.
Sorry, I meant to say our *imperative* API is not completely busted.

- R. Niwa
Ryosuke Niwa
2015-04-30 21:35:29 UTC
Permalink
Post by Anne van Kesteren
So what if we flipped this as well and came up with an imperative API
for "<shadow> as a function". I.e. "<shadow> as an actual function"?
To start off, I can think of three major ways in which a subclass wants to interact with its superclass:
1. Replace what superclass shows entirely by its own content - e.g. grab the device context and draw everything by yourself.
2. Override parts of superclass' content - e.g. subclass overrides virtual functions superclass provided to draw parts of the component/view.
3. Fill "holes" superclass provided - e.g. subclass implements abstract virtual functions superclass defined to delegate the work.

- R. Niwa
Anne van Kesteren
2015-05-01 08:04:35 UTC
Permalink
Post by Ryosuke Niwa
1. Replace what superclass shows entirely by its own content - e.g. grab the device context and draw everything by yourself.
So this requires either replacing or removing superclass' ShadowRoot.
Post by Ryosuke Niwa
2. Override parts of superclass' content - e.g. subclass overrides virtual functions superclass provided to draw parts of the component/view.
This is where you directly access superclass' ShadowRoot I assume and
modify things?
Post by Ryosuke Niwa
3. Fill "holes" superclass provided - e.g. subclass implements abstract virtual functions superclass defined to delegate the work.
This is the part that looks like it might interact with distribution, no?
--
https://annevankesteren.nl/
Ryosuke Niwa
2015-05-01 08:36:00 UTC
Permalink
Post by Anne van Kesteren
Post by Ryosuke Niwa
1. Replace what superclass shows entirely by its own content - e.g. grab the device context and draw everything by yourself.
So this requires either replacing or removing superclass' ShadowRoot.
Post by Ryosuke Niwa
2. Override parts of superclass' content - e.g. subclass overrides virtual functions superclass provided to draw parts of the component/view.
This is where you directly access superclass' ShadowRoot I assume and
modify things?
In the named slot approach, these overridable parts will be exposed to subclasses as an overridable slot. In terms of an imperative API, it means that the superclass has a virtual method (probably with a symbol name) that can get overridden by a subclass. The default implementation of such a virtual method does nothing, and shows the fallback contents of the slot.
Post by Anne van Kesteren
Post by Ryosuke Niwa
3. Fill "holes" superclass provided - e.g. subclass implements abstract virtual functions superclass defined to delegate the work.
This is the part that looks like it might interact with distribution, no?
With the named slot approach, we can also model this as an abstract method on the superclass that a subclass must implement. The superclass' shadow DOM construction code then calls this function to "fill" the slot.
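A rough sketch of that idea (hypothetical method names only, not a concrete proposal): the overridable slot becomes a virtual method with fallback content, and the "hole" becomes an abstract method the subclass must implement.
```js
class BaseComponent {
  // Called by the superclass' shadow DOM construction code.
  buildShadowTree(shadowRoot) {
    shadowRoot.appendChild(this.renderBody());    // abstract "hole" to fill
    shadowRoot.appendChild(this.renderFooter());  // overridable slot
  }

  // Overridable slot: the default implementation shows the fallback contents.
  renderFooter() {
    const footer = document.createElement('footer');
    footer.textContent = 'default footer';
    return footer;
  }

  // Abstract slot: a subclass must implement this.
  renderBody() {
    throw new Error('subclass must implement renderBody()');
  }
}

class SubComponent extends BaseComponent {
  renderBody() {
    const body = document.createElement('div');
    body.textContent = 'subclass-provided body';
    return body;
  }
}
```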

- R. Niwa
Anne van Kesteren
2015-05-01 16:37:21 UTC
Permalink
Post by Ryosuke Niwa
Post by Anne van Kesteren
This is where you directly access superclass' ShadowRoot I assume and
modify things?
In the named slot approach, these overridable parts will be exposed to subclasses as an overridable slot. In terms of an imperative API, it means that the superclass has a virtual method (probably with a symbol name) that can get overridden by a subclass. The default implementation of such a virtual method does nothing, and shows the fallback contents of the slot.
Post by Anne van Kesteren
Post by Ryosuke Niwa
3. Fill "holes" superclass provided - e.g. subclass implements abstract virtual functions superclass defined to delegate the work.
This is the part that looks like it might interact with distribution, no?
With the named slot approach, we can also model this is an abstract method on the superclass that a subclass must implement. The superclass' shadow DOM construction code then calls this function to "fill" the slot.
I think I need to see code in order to grasp this.
--
https://annevankesteren.nl/
Elliott Sprehn
2015-04-28 20:04:24 UTC
Permalink
A distribute callback means running script any time we update distribution,
which is inside the style update phase (or the event path computation phase,
...), which is not a place where we can run script. We could run script in
another scripting context, like what is being considered for custom layout and
paint, but that has a different API shape since you'd register a
separate .js file as the "custom distributor", like:

(document || shadowRoot).registerCustomDistributor({src: "distributor.js"});
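
A very rough sketch of what such a separate-context distributor file might contain (entirely hypothetical; neither the registration call nor the callback shape was specified in this thread):
```js
// distributor.js -- would run in an isolated scripting context, analogous to
// the ideas floated for custom layout/paint. Hypothetical API shape only.
registerDistributor({
  distribute(host, pool, insertionPoints) {
    // Naively place every candidate node into the first insertion point.
    for (const node of pool) {
      insertionPoints[0].add(node);
    }
  }
});
```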

I also don't believe we should support distributing any arbitrary
descendant; that has a large complexity cost and doesn't feel like a
simplification. It makes computing style and generating boxes much more
complicated.

A synchronous childrenChanged callback has similar issues with when it's
safe to run script: we'd have to defer its execution in a number of
situations, and it feels like a duplication of MutationObservers, which
were specifically designed to operate in batches for better performance and
fewer footguns (e.g. a naive childrenChanged-based distributor will be O(n^2)).
Ryosuke Niwa
2015-04-28 20:20:49 UTC
Permalink
A distribute callback means running script any time we update distribution, which is inside the style update phase (or event path computation phase, ...) which is not a location we can run script.
That's not what Anne and the rest of us are proposing. That idea only came up in Steve's proposal [1] that kept the current timing of distribution.
I also don't believe we should support distributing any arbitrary descendant, that has a large complexity cost and doesn't feel like simplification. It makes computing style and generating boxes much more complicated.
That certainly is a trade off. See a use case I outlined in [2].
A synchronous childrenChanged callback has similar issues with when it's safe to run script, we'd have to defer it's execution in a number of situations, and it feels like a duplication of MutationObservers which specifically were designed to operate in batch for better performance and fewer footguns (ex. a naive childrenChanged based distributor will be n^2).
Since the current proposal is to add it as a custom element's lifecycle callback (i.e. we invoke it when we cross the UA code / user code boundary), this shouldn't be an issue. If it is indeed an issue, then we have a problem with a lifecycle callback that gets triggered when an attribute value is modified.

In general, I don't think we can address Steve's need to make the consistency guarantee [3] without running some script either synchronously or as a lifecycle callback in the world of an imperative API.

- R. Niwa

[1] https://lists.w3.org/Archives/Public/public-webapps/2015AprJun/0342.html
[2] https://lists.w3.org/Archives/Public/public-webapps/2015AprJun/0344.html
[3] https://lists.w3.org/Archives/Public/public-webapps/2015AprJun/0357.html
Ryosuke Niwa
2015-04-28 20:52:44 UTC
Permalink
I've updated the gist to reflect the discussion so far:
https://gist.github.com/rniwa/2f14588926e1a11c65d3 <https://gist.github.com/rniwa/2f14588926e1a11c65d3>

Please leave a comment if I missed anything.

- R. Niwa
Dimitri Glazkov
2015-04-29 23:16:53 UTC
Permalink
Post by Ryosuke Niwa
https://gist.github.com/rniwa/2f14588926e1a11c65d3
Please leave a comment if I missed anything.
Thank you for doing this. There are a couple of unescaped tags in
https://gist.github.com/rniwa/2f14588926e1a11c65d3#extention-to-custom-elements-for-consistency,
I think?

Any chance you could move it to the Web Components wiki? That way, we could
all collaborate.

:DG<
Ryosuke Niwa
2015-04-30 00:59:41 UTC
Permalink
Post by Ryosuke Niwa
https://gist.github.com/rniwa/2f14588926e1a11c65d3
Please leave a comment if I missed anything.
Thank you for doing this. There are a couple of unescaped tags in https://gist.github.com/rniwa/2f14588926e1a11c65d3#extention-to-custom-elements-for-consistency, I think?
Any chance you could move it to the Web Components wiki? That way, we could all collaborate.
Sure, what's the preferred work flow? Fork and push a PR?

- R. Niwa.
Dimitri Glazkov
2015-04-30 01:01:35 UTC
Permalink
Post by Dimitri Glazkov
Post by Dimitri Glazkov
Post by Ryosuke Niwa
https://gist.github.com/rniwa/2f14588926e1a11c65d3
Please leave a comment if I missed anything.
Thank you for doing this. There are a couple of unescaped tags in
https://gist.github.com/rniwa/2f14588926e1a11c65d3#extention-to-custom-elements-for-consistency,
I think?
Post by Dimitri Glazkov
Any chance you could move it to the Web Components wiki? That way, we
could all collaborate.
Sure, what's the preferred work flow? Fork and push a PR?
Actually, we might need to figure this out first. Github Wiki is not
super-friendly to fork/push-PR model. But I do like your idea. Maybe just
an .md page in a repo?
Domenic Denicola
2015-04-30 12:30:05 UTC
Permalink
I have a clarifying question. It's often stated that since the current timing is undefined, this will lead to interoperability problems.


Can someone point me to the part of the spec that is problematic? That is, where is the line that says "UAs may run this algorithm at any time"? I am not sure what to Ctrl+F for.


Secondly, could someone produce a code snippet that would cause such interop problems, given the current spec?


Finally, assuming we have such an example, would there be a way to tighten the spec language such that we don't need to specify e.g. when style recalculation happens, but instead specify constraints? Like "offsetTop must always reflect the redistributions" or something.



Anne van Kesteren
2015-04-30 12:41:29 UTC
Permalink
Post by Domenic Denicola
Can someone point me to the part of the spec that is problematic? That is,
where is the line that says "UAs may run this algorithm at any time"? I am
not sure what to Ctrl+F for.
At the end of section 3.4 it states "If any condition which affects
the distribution result changes, the distribution result must be
updated before any use of the distribution result." which basically
means you can't make use of a "dirty" tree.
Post by Domenic Denicola
Secondly, could someone produce a code snippet that would cause such interop
problems, given the current spec?
var ev = new Event(eventType)
someNodeThatIsDistributed.addEventListener(eventType, e =>
console.log(e.path))
someNodeThatIsDistributed.dispatchEvent(ev);
Post by Domenic Denicola
Finally, assuming we have such an example, would there be a way to tighten
the spec language such that we don't need to specify e.g. when style
recalculation happens, but instead specify constraints? Like "offsetTop must
always reflect the redistributions" or something.
That is what the specification currently does and what prevents us
from defining an imperative API. For an imperative API it is
imperative (mahaha) that we get the timing with respect to tasks
right. (Or as per my proposal, leave timing up to developers.)
--
https://annevankesteren.nl/
Domenic Denicola
2015-04-30 12:44:25 UTC
Permalink
Post by Anne van Kesteren
var x = new Event(eventType)
someNodeThatIsDistributed.addEventListener(eventType, e => console.log(e.path))
someNodeThatIsDistributed.dispatchEvent(ev);
Can you explain in a bit more detail why this causes interop problems? What browsers would give different results for this code? What would those results be?
Anne van Kesteren
2015-04-30 12:51:34 UTC
Permalink
Post by Domenic Denicola
Post by Anne van Kesteren
var x = new Event(eventType)
someNodeThatIsDistributed.addEventListener(eventType, e => console.log(e.path))
someNodeThatIsDistributed.dispatchEvent(ev);
Can you explain in a bit more detail why this causes interop problems? What browsers would give different results for this code? What would those results be?
This essentially forces distribution to happen since you can observe
the result of distribution this way. Same with element.offsetWidth
etc. And that's not necessarily problematic, but it is problematic if
you want to do an imperative API as I tried to explain in the bit you
did not quote back.
--
https://annevankesteren.nl/
Domenic Denicola
2015-04-30 13:00:23 UTC
Permalink
This essentially forces distribution to happen since you can observe the result of distribution this way. Same with element.offsetWidth etc. And that's not necessarily problematic,
OK. So the claim that the current spec cannot be interoperably implemented is false? (Not that I am a huge fan of <content select>, but I want to make sure we have our arguments against it lined up and on solid footing.)
but it is problematic if you want to do an imperative API as I tried to explain in the bit you did not quote back.
Sure, let's dig in to that claim now. Again, this is mostly clarifying probing.

Let's say we had an imperative API. As far as I understand from the gist, one of the problems is when we invoke the distributedCallback. If we use MutationObserver timing, then inconsistent states can be observed, etc.

Why can't we say that this distributedCallback must be invoked at the same time that the current spec updates the distribution result? Since it sounds like there is no interop problem with this timing, I don't understand why this wouldn't be an option.
Anne van Kesteren
2015-04-30 13:07:27 UTC
Permalink
Post by Domenic Denicola
OK. So the claim that the current spec cannot be interoperably implemented is false?
Well, the wording could be improved for sure. If you're new to this
you might get confused.
Post by Domenic Denicola
Why can't we say that this distributedCallback must be invoked at the same time that the current spec updates the distribution result? Since it sounds like there is no interop problem with this timing, I don't understand why this wouldn't be an option.
Because then it would be observable when distribution happens and then
it does become a problem. The current specification allows for lazy
distribution. Once distribution is observable while distributing that
is no longer an option.
--
https://annevankesteren.nl/
Ryosuke Niwa
2015-04-30 17:55:59 UTC
Permalink
Post by Domenic Denicola
This essentially forces distribution to happen since you can observe the result of distribution this way. Same with element.offsetWidth etc. And that's not necessarily problematic,
OK. So the claim that the current spec cannot be interoperably implemented is false? (Not that I am a huge fan of <content select>, but I want to make sure we have our arguments against it lined up and on solid footing.)
but it is problematic if you want to do an imperative API as I tried to explain in the bit you did not quote back.
Sure, let's dig in to that claim now. Again, this is mostly clarifying probing.
Let's say we had an imperative API. As far as I understand from the gist, one of the problems is when do we invoke the distributedCallback. If we use MutationObserve time, then inconsistent states can be observed, etc.
Why can't we say that this distributedCallback must be invoked at the same time that the current spec updates the distribution result? Since it sounds like there is no interop problem with this timing, I don't understand why this wouldn't be an option.
There will be an interop problem. Consider the following example:

```js
someNode = ~
myButton.appendChild(someNode); // (1)
absolutelyPositionElement.offsetTop; // (2)
```

Now suppose absolutelyPositionElement.offsetTop is a some element that's in a disjoint subtree of the document. Heck, it could even in a separate iframe. In some UAs, (2) will trigger style resolution and update of the layout. Because UAs can't tell redistribution of myButton can affect (2), such UAs will update the distribution per spec text that says "the distribution result must be updated before any _use_ of the distribution result".

Yet in other UAs, `offsetTop` may have been cached and UA might be smart enough to detect that (1) doesn't affect the result of `absolutelyPositionElement.offsetTop` because they're in a different parts of the tree and they're independent for the purpose of style resolution and layout. In such UAs, (2) does not trigger redistribution because it does not use the distribution result in order to compute this value.

In general, there are thousands of other DOM and CSS OM APIs that may or may not _use_ the distribution result depending on implementations.
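
For illustration, here's a rough sketch of the divergence. The element names come from the snippet above; `myContent` and the `distributionchanged` event are made up and only stand in for "some author code that runs when the UA actually updates distribution":

```js
// Rough sketch -- not spec'ed behavior. Assumes a hypothetical
// `distributionchanged` event that fires when the UA actually updates the
// distribution, plus myButton / someNode / absolutelyPositionElement from
// the snippet above and a made-up myContent insertion point.
const log = [];
myContent.addEventListener('distributionchanged', () => log.push('distributed'));

myButton.appendChild(someNode);        // (1) invalidates the distribution result
absolutelyPositionElement.offsetTop;   // (2) may or may not "use" that result
log.push('read offsetTop');

// UA that flushes style (and therefore distribution) on any offsetTop read:
//   log -> ['distributed', 'read offsetTop']
// UA that can prove (1) doesn't affect absolutelyPositionElement's layout:
//   log -> ['read offsetTop']   (distribution, and the event, happen later)
```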

- R. Niwa
Hayato Ito
2015-05-01 03:17:41 UTC
Permalink
Post by Domenic Denicola
Post by Anne van Kesteren
This essentially forces distribution to happen since you can observe
the result of distribution this way. Same with element.offsetWidth etc. And
that's not necessarily problematic,
Post by Domenic Denicola
OK. So the claim that the current spec cannot be interoperably
implemented is false? (Not that I am a huge fan of <content select>, but I
want to make sure we have our arguments against it lined up and on solid
footing.)
Post by Domenic Denicola
Post by Anne van Kesteren
but it is problematic if you want to do an imperative API as I tried to
explain in the bit you did not quote back.
Post by Domenic Denicola
Sure, let's dig in to that claim now. Again, this is mostly clarifying
probing.
Post by Domenic Denicola
Let's say we had an imperative API. As far as I understand from the
gist, one of the problems is when do we invoke the distributedCallback. If
we use MutationObserver timing, then inconsistent states can be observed, etc.
Post by Domenic Denicola
Why can't we say that this distributedCallback must be invoked at the
same time that the current spec updates the distribution result? Since it
sounds like there is no interop problem with this timing, I don't
understand why this wouldn't be an option.
The return value of (2) is the same in either case. There is no observable
difference. No interop issue.

Please file a bug for the spec with a concrete example if you can find an
observable difference due to the lazy-evaluation of the distribution.
```js
someNode = ~
myButton.appendChild(someNode); // (1)
absolutelyPositionElement.offsetTop; // (2)
```
Now suppose absolutelyPositionElement is some element that's in a
disjoint subtree of the document. Heck, it could even be in a separate
iframe. In some UAs, (2) will trigger style resolution and update of the
layout. Because UAs can't tell whether redistribution of myButton can
affect (2), such UAs will update the distribution per spec text that says
"the distribution result must be updated before any _use_ of the
distribution result".
Yet in other UAs, `offsetTop` may have been cached and the UA might be
smart enough to detect that (1) doesn't affect the result of
`absolutelyPositionElement.offsetTop` because they're in different parts
of the tree and they're independent for the purpose of style resolution
and layout. In such UAs, (2) does not trigger redistribution because it
does not use the distribution result in order to compute this value.
In general, there are thousands of other DOM and CSS OM APIs that may or
may not _use_ the distribution result depending on implementations.
- R. Niwa
Ryosuke Niwa
2015-05-01 03:57:21 UTC
Permalink
Post by Domenic Denicola
This essentially forces distribution to happen since you can observe the result of distribution this way. Same with element.offsetWidth etc. And that's not necessarily problematic,
OK. So the claim that the current spec cannot be interoperably implemented is false? (Not that I am a huge fan of <content select>, but I want to make sure we have our arguments against it lined up and on solid footing.)
but it is problematic if you want to do an imperative API as I tried to explain in the bit you did not quote back.
Sure, let's dig in to that claim now. Again, this is mostly clarifying probing.
Let's say we had an imperative API. As far as I understand from the gist, one of the problems is when do we invoke the distributedCallback. If we use MutationObserver timing, then inconsistent states can be observed, etc.
Why can't we say that this distributedCallback must be invoked at the same time that the current spec updates the distribution result? Since it sounds like there is no interop problem with this timing, I don't understand why this wouldn't be an option.
The return value of (2) is the same in either case. There is no observable difference. No interop issue.
Please file a bug for the spec with a concrete example if you can find an observable difference due to the lazy-evaluation of the distribution.
The problem isn't so much that the current shadow DOM specification has an interop issue because what we're talking about here, as the thread title clearly communicates, is the imperative API for node distribution, which doesn't exist in the current specification.

In particular, invoking user code at the timing specified in section 3.4 which states "if any condition which affects the distribution result changes, the distribution result must be updated before any use of the distribution result" introduces a new interoperability issue because "before any use of the distribution result" is implementation dependent. e.g. element.offsetTop may or may not use the distribution result depending on the UA. Furthermore, it's undesirable to precisely spec this since doing so will impose a serious limitation on what UAs could optimize in the future.

- R. Niwa
Hayato Ito
2015-05-01 04:01:46 UTC
Permalink
Thanks, however, we're talking about
https://lists.w3.org/Archives/Public/public-webapps/2015AprJun/0442.html.
Post by Hayato Ito
Post by Domenic Denicola
Post by Anne van Kesteren
This essentially forces distribution to happen since you can observe
the result of distribution this way. Same with element.offsetWidth etc. And
that's not necessarily problematic,
Post by Hayato Ito
Post by Domenic Denicola
OK. So the claim that the current spec cannot be interoperably
implemented is false? (Not that I am a huge fan of <content select>, but I
want to make sure we have our arguments against it lined up and on solid
footing.)
Post by Hayato Ito
Post by Domenic Denicola
Post by Anne van Kesteren
but it is problematic if you want to do an imperative API as I tried
to explain in the bit you did not quote back.
Post by Hayato Ito
Post by Domenic Denicola
Sure, let's dig in to that claim now. Again, this is mostly
clarifying probing.
Post by Hayato Ito
Post by Domenic Denicola
Let's say we had an imperative API. As far as I understand from the
gist, one of the problems is when do we invoke the distributedCallback. If
we use MutationObserver timing, then inconsistent states can be observed, etc.
Post by Hayato Ito
Post by Domenic Denicola
Why can't we say that this distributedCallback must be invoked at the
same time that the current spec updates the distribution result? Since it
sounds like there is no interop problem with this timing, I don't
understand why this wouldn't be an option.
Post by Hayato Ito
The return value of (2) is the same in either case. There is no
observable difference. No interop issue.
Post by Hayato Ito
Please file a bug for the spec with a concrete example if you can find an
observable difference due to the lazy-evaluation of the distribution.
The problem isn't so much that the current shadow DOM specification has an
interop issue because what we're talking about here, as the thread title clearly
communicates, is the imperative API for node distribution, which doesn't
exist in the current specification.
In particular, invoking user code at the timing specified in section 3.4
which states "if any condition which affects the distribution result
changes, the distribution result must be updated before any use of the
distribution result" introduces a new interoperability issue because
"before any use of the distribution result" is implementation dependent.
e.g. element.offsetTop may or may not use the distribution result depending
on the UA. Furthermore, it's undesirable to precisely spec this since doing so
will impose a serious limitation on what UAs could optimize in the future.
- R. Niwa
Elliott Sprehn
2015-05-01 04:25:16 UTC
Permalink
...
Post by Hayato Ito
The return value of (2) is the same in either case. There is no
observable difference. No interop issue.
Post by Hayato Ito
Please file a bug for the spec with a concrete example if you can find an
observable difference due to the lazy-evaluation of the distribution.
The problem isn't so much that the current shadow DOM specification has an
interop issue because what we're talking about here, as the thread title clearly
communicates, is the imperative API for node distribution, which doesn't
exist in the current specification.
In particular, invoking user code at the timing specified in section 3.4
which states "if any condition which affects the distribution result
changes, the distribution result must be updated before any use of the
distribution result" introduces a new interoperability issue because
"before any use of the distribution result" is implementation dependent.
e.g. element.offsetTop may or may not use the distribution result depending
on the UA. Furthermore, it's undesirable to precisely spec this since doing so
will impose a serious limitation on what UAs could optimize in the future.
element.offsetTop must use the distribution result, there's no way to know
what your style is without computing your distribution. This isn't any
different than getComputedStyle(...).color needing to flush style, or
getBoundingClientRect() needing to flush layout.

Distribution is about computing who your parent and siblings are in the box
tree, and where you should inherit your style from. Doing it lazily is not
going to be any worse in terms of interop than defining new properties that
depend on style.
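
As a small illustrative sketch of that analogy (assuming `element` is any element already in the document), these existing getters already force lazily-maintained state up to date before they return:

```js
// Illustrative only: getters that force lazily-maintained state to be
// brought up to date before returning a value.
element.style.color = 'red';
getComputedStyle(element).color;         // forces a style recalc ("flush style")
element.style.width = '50px';
element.getBoundingClientRect().width;   // forces layout ("flush layout")
// By the same reasoning, element.offsetTop would force distribution to be
// computed, since layout depends on where nodes end up in the composed tree.
```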

- E
Ryosuke Niwa
2015-05-01 05:11:36 UTC
Permalink
Post by Ryosuke Niwa
...
The return value of (2) is the same in either case. There is no observable difference. No interop issue.
Please file a bug for the spec with a concrete example if you can find a observable difference due to the lazy-evaluation of the distribution.
The problem isn't so much that the current shadow DOM specification has an interop issue because what we're talking about here, as the thread title clearly communicates, is the imperative API for node distribution, which doesn't exist in the current specification.
In particular, invoking user code at the timing specified in section 3.4 which states "if any condition which affects the distribution result changes, the distribution result must be updated before any use of the distribution result" introduces a new interoperability issue because "before any use of the distribution result" is implementation dependent. e.g. element.offsetTop may or may not use the distribution result depending on the UA. Furthermore, it's undesirable to precisely spec this since doing so will impose a serious limitation on what UAs could optimize in the future.
element.offsetTop must use the distribution result, there's no way to know what your style is without computing your distribution. This isn't any different than getComputedStyle(...).color needing to flush style, or getBoundingClientRect() needing to flush layout.
That is true only if the distribution of a given node can affect the style of the element. There are cases in which UAs can deduce that such is not the case and optimize the style recalculation away, e.g. two elements belonging to two different documents.

Another example would be element.isContentEditable. Under certain circumstances WebKit needs to resolve styles in order to determine the value of this property, which then uses the distribution result.
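
A rough sketch of the cross-document case (the iframe and element names are made up; myButton and someNode are from my earlier example):

```js
// Rough sketch, hypothetical markup: the outer document hosts a component,
// while #chart lives in a completely separate document inside an iframe.
const iframe = document.querySelector('iframe#report');
const inner = iframe.contentDocument.querySelector('#chart');

myButton.appendChild(someNode);  // (1) invalidates distribution in the outer document
inner.offsetTop;                 // (2) a UA may answer this from the iframe document's
                                 // own style/layout data without ever flushing the
                                 // outer document's distribution result
```
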
Distribution is about computing who your parent and siblings are in the box tree, and where you should inherit your style from. Doing it lazily is not going to be any worse in terms of interop than defining new properties that depend on style.
The problem is that different engines have different mechanisms to deduce style dependencies between elements.

- R. Niwa
Ryosuke Niwa
2015-05-01 05:15:30 UTC
Permalink
Thanks, however, we're talking about https://lists.w3.org/Archives/Public/public-webapps/2015AprJun/0442.html.
Ah, I think there was some miscommunication there. I don't think anyone is claiming that the current spec results in interop issues. The currently spec'ed timing is only problematic when we try to invoke an author-defined callback at that moment. If we never add an imperative API, or the imperative API we add doesn't invoke user code at the currently spec'ed timing, we don't have any interop problem.

- R. Niwa
Hayato Ito
2015-04-30 13:05:44 UTC
Permalink
Post by Anne van Kesteren
Post by Domenic Denicola
Post by Anne van Kesteren
var x = new Event(eventType)
someNodeThatIsDistributed.addEventListener(eventType, e =>
console.log(e.path))
Post by Domenic Denicola
Post by Anne van Kesteren
someNodeThatIsDistributed.dispatchEvent(ev);
Can you explain in a bit more detail why this causes interop problems?
What browsers would give different results for this code? What would those
results be?
This essentially forces distribution to happen since you can observe
the result of distribution this way. Same with element.offsetWidth
etc.
That's exactly the intended behavior in the current spec.
The timing of distribution is not observable. That enables the UA to optimize
the distribution calculation. We can delay the calculation of the distribution
as long as possible. We don't need to recalculate the distribution every time
a mutation occurs.

If you find any interop issue in the current spec about distribution,
please file a bug with a concrete example.
Post by Anne van Kesteren
--
https://annevankesteren.nl/
Anne van Kesteren
2015-04-30 13:22:21 UTC
Permalink
Post by Hayato Ito
That's the exactly intended behavior in the current spec.
The timing of distribution is not observable.
Right, but you can synchronously observe whether something is
distributed. The combination of those two things coupled with us not
wanting to introduce new synchronous mutation observers is what
creates problems for an imperative API.

So if we want an imperative API we need to make a tradeoff. Do we care
about offsetTop et al or do we care about microtask-based mutation
observers? I'm inclined to think we care more about the latter, but
the gist I put forward takes a position on neither and leaves it up to
web developers when they want to distribute (if at all).
--
https://annevankesteren.nl/
Dimitri Glazkov
2015-04-29 23:15:06 UTC
Permalink
Post by Ryosuke Niwa
One thing that worries me about the `distribute` callback approach (a.k.a.
Anne's approach) is that it bakes distribution algorithm into the platform
without us having thoroughly studied how subclassing will be done upfront.
Mozilla tried to solve this problem with XBL, and they seem to think what
they have isn't really great. Google has spent multiple years working on
this problem but they come around to say their solution, multiple
generations of shadow DOM, may not be as great as they thought it would be.
Given that, I'm quite terrified of making the same mistake in spec'ing how
distribution works and later regretting it.
At least the way I understand it, multiple shadow roots per element and
distributions are largely orthogonal bits of machinery that solve largely
orthogonal problems.

:DG<
Tab Atkins Jr.
2015-04-29 23:37:47 UTC
Permalink
Post by Dimitri Glazkov
Post by Ryosuke Niwa
One thing that worries me about the `distribute` callback approach (a.k.a.
Anne's approach) is that it bakes distribution algorithm into the platform
without us having thoroughly studied how subclassing will be done upfront.
Mozilla tried to solve this problem with XBL, and they seem to think what
they have isn't really great. Google has spent multiple years working on
this problem but they come around to say their solution, multiple
generations of shadow DOM, may not be as great as they thought it would be.
Given that, I'm quite terrified of making the same mistake in spec'ing how
distribution works and later regretting it.
At least the way I understand it, multiple shadow roots per element and
distributions are largely orthogonal bits of machinery that solve largely
orthogonal problems.
Yes. Distribution is mainly about making composition of components
work seamlessly, so you can easily pass elements from your light dom
into some components you're using inside your shadow dom. Without
distribution, you're stuck with either:

* avoiding <content> entirely and literally moving the elements from
the light dom to your shadow tree (like, appendChild() the nodes
themselves), which means the outer page no longer has access to the
elements for their own styling or scripting purposes (this is
terribad, obviously), or
* components have to be explicitly written with the expectation of
being composed into other components, writing their own <content
select> *to target the <content> elements of the outer shadow*, which
is also extremely terribad.

Distribution makes composition *work*, in a fundamental way. Without
it, you simply don't have the ability to use components inside of
components except in special cases.

~TJ
Ryosuke Niwa
2015-04-29 23:47:43 UTC
Permalink
Post by Tab Atkins Jr.
Post by Dimitri Glazkov
Post by Ryosuke Niwa
One thing that worries me about the `distribute` callback approach (a.k.a.
Anne's approach) is that it bakes distribution algorithm into the platform
without us having thoroughly studied how subclassing will be done upfront.
Mozilla tried to solve this problem with XBL, and they seem to think what
they have isn't really great. Google has spent multiple years working on
this problem but they come around to say their solution, multiple
generations of shadow DOM, may not be as great as they thought it would be.
Given that, I'm quite terrified of making the same mistake in spec'ing how
distribution works and later regretting it.
At least the way I understand it, multiple shadow roots per element and
distributions are largely orthogonal bits of machinery that solve largely
orthogonal problems.
Yes. Distribution is mainly about making composition of components
work seamlessly, so you can easily pass elements from your light dom
into some components you're using inside your shadow dom. Without
As I clarified my point in another email, neither I nor anyone else is questioning the value of the first-degree of node distribution from the "light" DOM into insertion points of a shadow DOM. What I'm questioning is the value of the capability to selectively re-distribute those nodes in a tree with nested shadow DOMs.
Post by Tab Atkins Jr.
* components have to be explicitly written with the expectation of
being composed into other components, writing their own <content
select> *to target the <content> elements of the outer shadow*, which
is also extremely terribad.
Could you give me a concrete use case in which such inspection of content elements in the light DOM is required without multiple generations of shadow DOM? In all the use cases I've studied without multiple generations of shadow DOM, none required the ability to filter nodes inside a content element.
Post by Tab Atkins Jr.
Distribution makes composition *work*, in a fundamental way. Without it, you simply don't have the ability to use components inside of components except in special cases.
Could you give us a concrete example in which selective re-distribution of nodes are required? That'll settle this discussion/question altogether.

- R. Niwa
Tab Atkins Jr.
2015-04-29 23:57:34 UTC
Permalink
Post by Ryosuke Niwa
Post by Tab Atkins Jr.
Post by Dimitri Glazkov
Post by Ryosuke Niwa
One thing that worries me about the `distribute` callback approach (a.k.a.
Anne's approach) is that it bakes distribution algorithm into the platform
without us having thoroughly studied how subclassing will be done upfront.
Mozilla tried to solve this problem with XBL, and they seem to think what
they have isn't really great. Google has spent multiple years working on
this problem but they come around to say their solution, multiple
generations of shadow DOM, may not be as great as they thought it would be.
Given that, I'm quite terrified of making the same mistake in spec'ing how
distribution works and later regretting it.
At least the way I understand it, multiple shadow roots per element and
distributions are largely orthogonal bits of machinery that solve largely
orthogonal problems.
Yes. Distribution is mainly about making composition of components
work seamlessly, so you can easily pass elements from your light dom
into some components you're using inside your shadow dom. Without
As I clarified my point in another email, neither I nor anyone else is questioning the value of the first-degree of node distribution from the "light" DOM into insertion points of a shadow DOM. What I'm questioning is the value of the capability to selectively re-distribute those nodes in a tree with nested shadow DOMs.
Post by Tab Atkins Jr.
* components have to be explicitly written with the expectation of
being composed into other components, writing their own <content
select> *to target the <content> elements of the outer shadow*, which
is also extremely terribad.
Could you give me a concrete use case in which such inspection of content elements in the light DOM is required without multiple generations of shadow DOM? In all the use cases I've studied without multiple generations of shadow DOM, none required the ability to filter nodes inside a content element.
Post by Tab Atkins Jr.
Distribution makes composition *work*, in a fundamental way. Without it, you simply don't have the ability to use components inside of components except in special cases.
Could you give us a concrete example in which selective re-distribution of nodes are required? That'll settle this discussion/question altogether.
I'll let a Polymer person provide a concrete example, as they're the
ones that originally brought up redistribution and convinced us it was
needed, but imagine literally any component that uses more than one
<content> (so you can't get away with just distributing the <content>
element itself), being used inside of some other component that wants
to pass some of its light-dom children to the nested component.

Without redistribution, you can only nest components (using one
component inside the shadow dom of another) if you either provide
contents directly to the nested component (no <content>) or the nested
component only has a single distribution point in its own shadow.
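
To make that concrete, here's a rough sketch using the Shadow DOM v0 API (createShadowRoot and <content>); the element names <outer-panel> and <inner-list> are made up:

```js
// Rough sketch (Shadow DOM v0 API, made-up element names): <outer-panel>
// uses <inner-list> inside its shadow DOM and forwards some of its own
// light-DOM children into it.
const outer = document.querySelector('outer-panel');
const outerRoot = outer.createShadowRoot();
outerRoot.innerHTML = `
  <inner-list>
    <content select=".item"></content>   <!-- outer insertion point -->
  </inner-list>`;

const inner = outerRoot.querySelector('inner-list');
const innerRoot = inner.createShadowRoot();
innerRoot.innerHTML = `
  <ul><content select=".item"></content></ul>   <!-- first insertion point -->
  <footer><content></content></footer>          <!-- second insertion point -->`;

// The ".item" children of <outer-panel> are distributed into the outer
// <content>, and then need to be *re*-distributed into <inner-list>'s two
// insertion points. Without redistribution, <inner-list> only ever sees the
// outer <content> element itself, not the nodes distributed into it.
```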

~TJ
Justin Fagnani
2015-04-30 00:12:13 UTC
Permalink
Here's one case of redistribution:
https://github.com/Polymer/core-scaffold/blob/master/core-scaffold.html#L122

Any time you see <content> inside a custom element it's potentially
redistribution. Here there's one that is (line 122), one that could be
(line 116), and one that definitely isn't (line 106).

I personally think that Hayato's analogy to function parameters is very
motivating. Arguing from use-cases at this point is going to miss many
things because so far we've focused on the most simple of components, are
having to rewrite them for Polymer 0.8, and haven't seen the variety and
complexity of cases that will evolve naturally from the community. General
expressiveness is extremely important when you don't have an option to work
around it - redistribution is one of these cases.

Cheers,
Justin
Post by Ryosuke Niwa
Post by Ryosuke Niwa
Post by Tab Atkins Jr.
Post by Dimitri Glazkov
Post by Ryosuke Niwa
One thing that worries me about the `distribute` callback approach
(a.k.a.
Post by Ryosuke Niwa
Post by Tab Atkins Jr.
Post by Dimitri Glazkov
Post by Ryosuke Niwa
Anne's approach) is that it bakes distribution algorithm into the
platform
Post by Ryosuke Niwa
Post by Tab Atkins Jr.
Post by Dimitri Glazkov
Post by Ryosuke Niwa
without us having thoroughly studied how subclassing will be done
upfront.
Post by Ryosuke Niwa
Post by Tab Atkins Jr.
Post by Dimitri Glazkov
Post by Ryosuke Niwa
Mozilla tried to solve this problem with XBL, and they seem to think
what
Post by Ryosuke Niwa
Post by Tab Atkins Jr.
Post by Dimitri Glazkov
Post by Ryosuke Niwa
they have isn't really great. Google has spent multiple years working
on
Post by Ryosuke Niwa
Post by Tab Atkins Jr.
Post by Dimitri Glazkov
Post by Ryosuke Niwa
this problem but they come around to say their solution, multiple
generations of shadow DOM, may not be as great as they thought it
would be.
Post by Ryosuke Niwa
Post by Tab Atkins Jr.
Post by Dimitri Glazkov
Post by Ryosuke Niwa
Given that, I'm quite terrified of making the same mistake in
spec'ing how
Post by Ryosuke Niwa
Post by Tab Atkins Jr.
Post by Dimitri Glazkov
Post by Ryosuke Niwa
distribution works and later regretting it.
At least the way I understand it, multiple shadow roots per element and
distributions are largely orthogonal bits of machinery that solve
largely
Post by Ryosuke Niwa
Post by Tab Atkins Jr.
Post by Dimitri Glazkov
orthogonal problems.
Yes. Distribution is mainly about making composition of components
work seamlessly, so you can easily pass elements from your light dom
into some components you're using inside your shadow dom. Without
As I clarified my point in another email, neither I nor anyone else is
questioning the value of the first-degree of node distribution from the
"light" DOM into insertion points of a shadow DOM. What I'm questioning is
the value of the capability to selectively re-distribute those nodes in a
tree with nested shadow DOMs.
Post by Ryosuke Niwa
Post by Tab Atkins Jr.
* components have to be explicitly written with the expectation of
being composed into other components, writing their own <content
select> *to target the <content> elements of the outer shadow*, which
is also extremely terribad.
Could you give me a concrete use case in which such inspection of
content elements in the light DOM is required without multiple generations
of shadow DOM? In all the use cases I've studied without multiple
generations of shadow DOM, none required the ability to filter nodes inside
a content element.
Post by Ryosuke Niwa
Post by Tab Atkins Jr.
Distribution makes composition *work*, in a fundamental way. Without
it, you simply don't have the ability to use components inside of
components except in special cases.
Post by Ryosuke Niwa
Could you give us a concrete example in which selective re-distribution
of nodes are required? That'll settle this discussion/question altogether.
I'll let a Polymer person provide a concrete example, as they're the
ones that originally brought up redistribution and convinced us it was
needed, but imagine literally any component that uses more than one
<content> (so you can't get away with just distributing the <content>
element itself), being used inside of some other component that wants
to pass some of its light-dom children to the nested component.
Without redistribution, you can only nest components (using one
component inside the shadow dom of another) if you either provide
contents directly to the nested component (no <content>) or the nested
component only has a single distribution point in its own shadow.
~TJ
Ryosuke Niwa
2015-04-30 01:06:57 UTC
Permalink
Here's one case of redistribution: https://github.com/Polymer/core-scaffold/blob/master/core-scaffold.html#L122
Any time you see <content> inside a custom element it's potentially redistribution. Here there's one that is (line 122), one that could be (line 116), and one that definitely isn't (line 106).
Thank you very much for the example. I'm assuming core-header-panel is [1]? It grabs core-toolbar. It looks to me like we could also replace line 122 with:

```html
<content class="core-header" select="core-toolbar, .core-header"></content>
<content select="*"></content>
```

and you wouldn't need redistribution. I wouldn't argue that it provides better developer ergonomics, but there's a serious trade-off here.

If we natively supported redistribution and always triggered it via a `distribute` callback, then it may not be acceptable in terms of performance to invoke `distribute` on every DOM change, since that could easily result in O(n^2) behavior. This is why the proposal we (Anne, I, and others) discussed involved using mutation observers instead of childrenChanged lifecycle callbacks.

Now, frameworks such as Polymer could provide sugar on top of it by automatically re-distributing nodes as needed when implementing a "select" attribute.
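
As a rough sketch of that layering (insertAt / remove / distributedNodes are the hypothetical imperative API being discussed here, not anything spec'ed), a component could batch its children's mutations through a MutationObserver and re-run a single distribution pass per batch:

```js
// Rough sketch, hypothetical imperative API (insertAt / remove / distributedNodes).
// Many child-list mutations in one task collapse into a single distribution pass.
const host = document.querySelector('my-element');
const root = host.createShadowRoot();
root.innerHTML = '<content></content>';
const insertionPoint = root.querySelector('content');

function distribute() {
  // naive pass: clear the insertion point, then re-insert the current children
  insertionPoint.distributedNodes.slice().forEach(n => insertionPoint.remove(n));
  Array.from(host.children).forEach((child, i) => insertionPoint.insertAt(child, i));
}

new MutationObserver(distribute).observe(host, { childList: true });
distribute();  // initial distribution
```
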
I personally think that Hayato's analogy to function parameters is very motivating. Arguing from use-cases at this point is going to miss many things because so far we've focused on the most simple of components, are having to rewrite them for Polymer 0.8, and haven't seen the variety and complexity of cases that will evolve naturally from the community. General expressiveness is extremely important when you don't have an option to work around it - redistribution is one of these cases.
Evaluating each design proposal based on a concrete use case is extremely important precisely because we might miss out on expressiveness in some cases as we're stripping down features, and we can't reject a proposal or add a feature for a hypothetical/theoretical need.

[1] https://github.com/Polymer/core-header-panel/blob/master/core-header-panel.html

- R. Niwa
Ryosuke Niwa
2015-04-29 23:42:09 UTC
Permalink
Post by Ryosuke Niwa
One thing that worries me about the `distribute` callback approach (a.k.a. Anne's approach) is that it bakes distribution algorithm into the platform without us having thoroughly studied how subclassing will be done upfront.
Mozilla tried to solve this problem with XBL, and they seem to think what they have isn't really great. Google has spent multiple years working on this problem but they have come around to saying their solution, multiple generations of shadow DOM, may not be as great as they thought it would be. Given that, I'm quite terrified of making the same mistake in spec'ing how distribution works and later regretting it.
At least the way I understand it, multiple shadow roots per element and distributions are largely orthogonal bits of machinery that solve largely orthogonal problems.
Sorry, I wasn't clear about my point. I'm specifically talking about re-distributions.

It would be great if you or someone working on Polymer could point me to an example of a concrete use case for redistributions that come up in a nested shadow DOM. As far as I've looked around, I couldn't find any use case that requires selective re-distribution; i.e. the case in which an outer shadow DOM's insertion point needs to filter nodes distributed into an inner shadow DOM's insertion point.

- R. Niwa
Anne van Kesteren
2015-04-30 12:49:31 UTC
Permalink
Post by Ryosuke Niwa
One thing that worries me about the `distribute` callback approach (a.k.a. Anne's approach) is that it bakes distribution algorithm into the platform without us having thoroughly studied how subclassing will be done upfront.
Agreed. Dimitri saying these are largely orthogonal makes me hopeful,
but I would prefer to see a strawman API for it before fully
committing to the distribute() design on my gist.
Post by Ryosuke Niwa
Mozilla tried to solve this problem with XBL, and they seem to think what they have isn't really great.
Actually, I think that we found we needed something. What was
originally in the Shadow DOM specification was sufficient for our
needs I believe, but got removed...
Post by Ryosuke Niwa
In that regard, the first approach w/o distribution has an advantage of letting Web developers experiment with the bare minimum and try out which distribution algorithms and mechanisms work best.
Except that you don't have a clear story for how to move to a
declarative syntax later on. And redistribution seems somewhat
essential, as whether you're subject to it mostly depends on where you
put your host element. Making it immaterial where you put your host
element seems important.
--
https://annevankesteren.nl/
Ryosuke Niwa
2015-04-27 21:05:10 UTC
Permalink
Post by Anne van Kesteren
Post by Ryosuke Niwa
One major drawback of this API is computing insertionList is expensive
because we'd have to either (where n is the number of nodes in the shadow
Maintain an ordered list of insertion points, which results in O(n)
algorithm to run whenever a content element is inserted or removed.
Lazily compute the ordered list of insertion points when `distribute`
callback is about to get called in O(n).
The alternative is not exposing it and letting developers get hold of
the slots. The rationale for letting the browser do it is because you
need the slots either way and the browser should be able to optimize
better.
I don’t think that’s true. If you’re creating a custom element, you’re pretty much in control of what goes into your shadow DOM. If I’m writing any kind of component that creates a shadow DOM, I’d just keep references to all my insertion points instead of querying them each time I need to distribute nodes.

Another important use case to consider is adding insertion points given the list of nodes to distribute. For example, you may want to “wrap” each node you distribute in an element. That requires the component author to know the number of nodes to distribute upfront and then dynamically create as many insertion points as needed.
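
A rough sketch of that use case (insertAt here is the hypothetical imperative API, not anything spec'ed; the element name is made up), creating one insertion point per node so each distributed node ends up wrapped in its own <li>:

```js
// Rough sketch, hypothetical insertAt: wrap each distributed node in its own
// <li> by creating one <content> insertion point per node to distribute.
const host = document.querySelector('fancy-list');
const root = host.createShadowRoot();
const list = document.createElement('ul');
root.appendChild(list);

function distribute() {
  list.innerHTML = '';                              // drop the old insertion points
  Array.from(host.children).forEach(child => {
    const wrapper = document.createElement('li');   // the wrapping element
    const slot = document.createElement('content'); // one insertion point per node
    wrapper.appendChild(slot);
    list.appendChild(wrapper);
    slot.insertAt(child, 0);
  });
}
distribute();
```
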
Post by Anne van Kesteren
Post by Ryosuke Niwa
If we wanted to allow a non-direct-child descendant (e.g. a grandchild node) of
the host to be distributed, then we'd also need an O(m) algorithm where m is
the number of nodes under the host element. It might be okay to carry over the
current constraint that only direct children of the shadow host can be distributed
into insertion points, but I can't think of a good reason as to why such a
restriction is desirable.
So you mean that we'd turn distributionList into a subtree? I.e. you
can pass all descendants of a host element to add()? I remember Yehuda
making the point that this was desirable to him.
Consider a table-chart component which converts a table element into a chart with each column represented as a line graph in the chart. The user of this component will wrap a regular table element with a table-chart element to construct a shadow DOM:

```html
<table-chart>
<table>
...
<td data-value="253" data-delta=5>253 ± 5</td>
...
</table>
</table-chart>
```

For people who like the is attribute on custom elements, pretend it's
```html
<table is=table-chart>
...
<td data-value="253" data-delta=5>253 ± 5</td>
...
</table>
```

Now, suppose I wanted to show a tooltip with the value in the chart. One obvious way to accomplish this would be distributing the td corresponding to the currently selected point into the tooltip. But this requires allowing non-direct-child nodes to be distributed.
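
A rough sketch of what that could look like (again with a hypothetical insertAt / remove / distributedNodes imperative API, plus a made-up point-selected event fired by the chart's own plotting code):

```js
// Rough sketch, hypothetical API: distribute a non-direct-child <td> into a
// tooltip insertion point inside <table-chart>'s shadow DOM.
const chart = document.querySelector('table-chart');
const root = chart.createShadowRoot();
root.innerHTML = `
  <div class="plot"><content select="table"></content></div>
  <div class="tooltip"><content id="tip"></content></div>`;
const tip = root.querySelector('#tip');

chart.addEventListener('point-selected', e => {
  const td = e.detail.cell;  // a <td>: a grandchild of the host, not a direct child
  tip.distributedNodes.slice().forEach(n => tip.remove(n));
  tip.insertAt(td, 0);       // only possible if non-direct children can be distributed
});
```
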
Post by Anne van Kesteren
The other thing I would like to explore is what an API would look like
that does the subclassing as well. Even though we deferred that to v2
I got the impression talking to some folks after the meeting that
there might be more common ground than I thought.
For the slot approach, we can model the act of filling a slot as if attaching a shadow root to the slot and the slot content going into the shadow DOM for both content distribution and filling of slots by subclasses.

Now we can do this in either of the following two strategies:
1. Superclass wants to see a list of slot contents from subclasses.
2. Each subclass "overrides" previous distribution done by superclass by inspecting insertion points in the shadow DOM and modifying them as needed.

- R. Niwa
Anne van Kesteren
2015-04-30 12:12:05 UTC
Permalink
Post by Ryosuke Niwa
If I’m writing any kind of component that creates a shadow DOM, I’d just keep references to all my insertion points instead of querying them each time I need to distribute nodes.
I guess that is true if you know you're not going to modify your
insertion points or shadow tree. I would be happy to update the gist
to exclude this parameter and instead use something like

shadow.querySelector("content")

somewhere. It doesn't seem important.
Post by Ryosuke Niwa
Another important use case to consider is adding insertion points given the list of nodes to distribute. For example, you may want to “wrap” each node you distribute in an element. That requires the component author to know the number of nodes to distribute upfront and then dynamically create as many insertion points as needed.
That seems doable.
Post by Ryosuke Niwa
Post by Anne van Kesteren
So you mean that we'd turn distributionList into a subtree?
```html
<table-chart>
<table>
...
<td data-value="253" data-delta=5>253 ± 5</td>
...
</table>
</table-chart>
```
Now, suppose I wanted to show a tooltip with the value in the chart. One obvious way to accomplish this would be distributing the td corresponding to the currently selected point into the tooltip. But this requires allowing non-direct-child nodes to be distributed.
So if we did that, distributionList would become distributionRoot. And
whenever add() is invoked any node that is not a descendant of
distributionRoot or is a descendant of a node already add()'d would
throw? It seems that would get us a bit more complexity than the
current algorithm...
Post by Ryosuke Niwa
Post by Anne van Kesteren
The other thing I would like to explore is what an API would look like
that does the subclassing as well.
For the slot approach, we can model the act of filling a slot as if attaching a shadow root to the slot and the slot content going into the shadow DOM for both content distribution and filling of slots by subclasses.
1. Superclass wants to see a list of slot contents from subclasses.
2. Each subclass "overrides" previous distribution done by superclass by inspecting insertion points in the shadow DOM and modifying them as needed.
With the existence of closed shadow trees, it seems like you'd want to
allow for the superclass to not have to share its details with the
subclass.
--
https://annevankesteren.nl/
Ryosuke Niwa
2015-04-30 17:42:20 UTC
Permalink
Post by Anne van Kesteren
Post by Ryosuke Niwa
Post by Anne van Kesteren
The other thing I would like to explore is what an API would look like
that does the subclassing as well.
For the slot approach, we can model the act of filling a slot as if attaching a shadow root to the slot and the slot content going into the shadow DOM for both content distribution and filling of slots by subclasses.
1. Superclass wants to see a list of slot contents from subclasses.
2. Each subclass "overrides" previous distribution done by superclass by inspecting insertion points in the shadow DOM and modifying them as needed.
With the existence of closed shadow trees, it seems like you'd want to
allow for the superclass to not have to share its details with the
subclass.
Neither approach needs to expose the internals of the superclass' shadow DOM. In 1, what the superclass sees is a list of proxies of slot contents that subclasses provided. In 2, what the subclass sees is a list of wrappers around the overridable insertion points the superclass defined.

I can't think of an inheritance model in any programming language in which overridable pieces are unknown to subclasses.

- R. Niwa
Ryosuke Niwa
2015-04-30 17:43:23 UTC
Permalink
Post by Anne van Kesteren
Post by Ryosuke Niwa
If I’m writing any kind of component that creates a shadow DOM, I’d just keep references to all my insertion points instead of querying them each time I need to distribute nodes.
I guess that is true if you know you're not going to modify your
insertion points or shadow tree. I would be happy to update the gist
to exclude this parameter and instead use something like
shadow.querySelector("content")
somewhere. It doesn't seem important.
FYI, I've summarized everything we've discussed so far in https://gist.github.com/rniwa/2f14588926e1a11c65d3.

- R. Niwa