The `getHTML()`, `setHTMLUnsafe()`, and `parseHTMLUnsafe()` methods are missing
MDN URL
https://developer.mozilla.org/en-US/docs/Web/API/Web_components/Using_shadow_DOM#declaratively_with_html
What specific section or headline is this issue about?
Several APIs related to declarative shadow DOM serialization and parsing
What information was incorrect, unhelpful, or incomplete?
There are several APIs that are not documented at all on MDN:
- `getHTML()`
- `setHTMLUnsafe()`
- `parseHTMLUnsafe()`
- `serializable` and `shadowrootserializable`
And the setHTML() method is documented here, but that one isn't yet standardized or shipped anywhere. Perhaps worth removing it or marking it as such.
What did you expect to see?
Documentation
Do you have any supporting links, references, or citations?
setHTMLUnsafe() and parseHTMLUnsafe() are specified here:
- https://html.spec.whatwg.org/multipage/dynamic-markup-insertion.html#unsafe-html-parsing-methods
The getHTML() method and the serializable/shadowrootserializable concept will be in the spec once this spec PR lands:
- https://github.com/whatwg/html/pull/10139
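As a rough usage sketch of the two parsing methods from the spec links above (browser-only code; availability and exact behavior are per the spec, not yet per MDN documentation):

```javascript
// Replace an element's children, allowing declarative shadow DOM to parse.
// "Unsafe" here means unsanitized: script elements and event handlers pass through.
const host = document.querySelector("#host");
host.setHTMLUnsafe(`
  <my-widget>
    <template shadowrootmode="open">
      <slot></slot>
    </template>
  </my-widget>
`);

// Parse a full document from a string; declarative shadow roots are honored.
// parseHTMLUnsafe() is a static method on Document per the spec.
const doc = Document.parseHTMLUnsafe("<p>hello</p>");
console.log(doc.body.firstChild.textContent); // "hello"
```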
Do you have anything more you want to share?
No response
After https://github.com/mdn/browser-compat-data/pull/22866 is merged all of the compat data entries will be set up for any pages added for the various new APIs mentioned.
https://github.com/mdn/content/pull/33141 seeks to remove the setHTML() content (and associated documents), as it's not helpful imo.
Btw getHTML has been merged into the spec: https://html.spec.whatwg.org/multipage/dynamic-markup-insertion.html#html-serialization-methods and has an intent to ship in Chromium
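As a sketch of the serialization side, assuming the `getHTML()` shape in the spec link above (the option names `serializableShadowRoots`/`shadowRoots` come from that spec text; exact output formatting may vary by browser):

```javascript
// A host with a serializable shadow root and no light-DOM children.
const host = document.createElement("div");
const root = host.attachShadow({ mode: "open", serializable: true });
root.innerHTML = "<p>inside</p>";

// Without options, shadow roots are omitted from the serialization:
host.getHTML(); // "" (the host has no light-DOM children)

// Opting in includes serializable shadow roots as declarative markup,
// roughly: '<template shadowrootmode="open" ...><p>inside</p></template>'
host.getHTML({ serializableShadowRoots: true });
```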
Copying @pepelsbey, @hamishwillee, and @chrisdavidmills to put this on your radar. We're including this in our ongoing Firefox release work.
Related Firefox bugs:
- Bug 1712140: Implement Declarative Shadow DOM
- Bug 1876885: Implement new "double" shadow root behavior
Per the discussion at https://github.com/whatwg/dom/issues/1235, this PR makes the first declarative shadow root "win". Prior behavior was for the last one to remain, but this PR changes second and subsequent declarative shadow roots to report an exception instead.
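A minimal sketch of the "first one wins" behavior described above (per the whatwg/dom#1235 discussion; exactly how the exception surfaces is up to the parser, so the comments below are hedged):

```javascript
// Two declarative shadow roots on the same host: only the first attaches.
const doc = Document.parseHTMLUnsafe(`
  <div id="host">
    <template shadowrootmode="open"><span>first</span></template>
    <template shadowrootmode="open"><span>second</span></template>
  </div>
`);
const host = doc.getElementById("host");
// The first template became the shadow root; the second reports an
// exception during parsing instead of replacing it.
console.log(host.shadowRoot.innerHTML.includes("first")); // true
```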
This issue should be reopened; it's not fully addressed yet, nor will it be when my second PR is merged.
@lukewarlow So normally the MDN team does docs, compatibility data, and release notes, along with experimental features page updates for new stuff behind a preference. If you need any help with any of that, or with getting reviews done, feel free to add a note here pinging me with what help you could use from me.
For the record here, it was my mistake that this issue got closed. It happened because the description for https://github.com/mdn/content/pull/33494 had “Partially fixes #32731”. And when I reviewed the PR, I did actually notice the word Partially there — but I failed to remember that GitHub doesn’t pay any attention to whatever might come before the “fixes #NNNNN” part.
And so, when I merged the PR, GitHub went ahead and closed this issue. It should have dawned on me that it would, but I didn’t stop again to think about it, until Luke sent the ping about it.
So anyway, I’ve added it to my review checklist for the future — to check the PR description for text like “partially” or whatever preceding any “fixes #NNNNN” part — and to make sure to either edit that part before merging, or else after merging, to make sure to go back and re-open the associated issue.
Easily done ^^^.
@lukewarlow Just let me know what of https://github.com/mdn/content/issues/32731#issuecomment-2106591968 could use my help. The most obvious thing being a release note, since the release happens today.
A release note as well as an experimental features entry for getHTML would be useful.
https://github.com/mdn/content/pull/33492 could also do with a review on this one.
The pages aren't exhaustive but I consider them good enough for an initial entry. Then someone from the docs team can add more detail when they get time.
The compat data for these features is all done I believe.
Need to check what of the below is in browsers - if it isn't present yet, it doesn't get documented.
Status:

- [x] BCD
  - [x] `HTMLTemplateElement.shadowRootSerializable` - https://github.com/mdn/browser-compat-data/pull/22924
  - [x] `<template>` `shadowrootserializable` - https://github.com/mdn/browser-compat-data/pull/23088
  - [x] `ShadowRoot.serializable` - https://github.com/mdn/browser-compat-data/pull/22924
  - [x] `Element.attachShadow()` `init.serializable` property - https://github.com/mdn/browser-compat-data/pull/23088
  - [x] `Element.getHTML()`, `ShadowRoot.getHTML()` - https://github.com/mdn/browser-compat-data/pull/22866
  - [x] `setHTMLUnsafe()` (element/shadowroot), `parseHTMLUnsafe()`: https://github.com/mdn/browser-compat-data/pull/22681
- [x] docs
  - [x] `setHTMLUnsafe()` (element/shadowroot), `parseHTMLUnsafe()`: https://github.com/mdn/content/pull/33492
  - [x] `serializable` and `shadowrootserializable` and corresponding `<template>`/`attachShadow()` docs - #33600
  - [x] `Element.getHTML()` - updates in #33600
  - [x] `ShadowRoot.getHTML()` - updates in #33600
  - [x] `Element.setHTML()` - status - #32289
  - [x] #33650
- [x] dev-doc-complete/release note - NA - the FF bits are already captured. This is cleanup on other parts of the API that are documented.
@mfreed7 I've tagged you with a couple of comments in the linked items.
- FYI only
- `Element.setHTML()` - According to the definition of "standard" adopted by the browser compat data this is in a standard. I have recently updated this to be marked as `deprecated` in browser-compat since it has been withdrawn from Chrome and is behind a pref everywhere else. That has not yet propagated to docs, but when it does there will be a big fat deprecated warning on top of the page. That's about all we can do until either the identifier is supported in a browser or removed from the spec.
- For `setHTMLUnsafe()`, is this just called "unsafe" in contrast to the safe version that will eventually exist (point 1) which does sanitising? And again, we need this because setting `innerHTML` has unpredictable and imprecise behaviour for shadow roots?
- Not covered in this issue, but nowhere do we clearly state the implications of `clonable` being set. My assumption is that if you copy a node that contains a shadow root and it is clonable, then everything in the node is deep copied - is that right? Are there any implications to such a copy? I mean presumably the copy is deep, so the copy is completely independent (other than things like holding a different reference to the same CSS definition)?
- However, if you set something as `clonable=false` and copy the node, then the new copy will have nothing inside the element. It will be as though the element is empty and always was - right? In this case what are the implications? For example, if I copy a web component and it has a shadow root, presumably the rest of the component will exist but now it doesn't have any internals? Won't that break things?
I think "deprecated" doesn't make sense for setHTML(), since it's not in the process of being removed; it's in the process of being added. "Experimental" maybe?
setHTML is really tricky because the version that is documented is deprecated.
I strongly believe removing the docs would be the most useful approach for developers. But policies prevent that happening.
For the setHTMLUnsafe is this just called "unsafe" in contrast to the safe version that will eventually exist (point 1) which does sanitising? And again, we need this because setting innerHTML has unpredictable and imprecise behaviour for shadow roots?
Yes, in essence there's a safe and an unsafe pairing for setHTML and parseHTML. Eventually they'll both behave similarly, except that setHTML enforces a "safe" baseline config, whereas the unsafe variants will allow you to do unsafe things such as creating script elements.
These new functions need to exist for two reasons. Declarative shadow DOM isn't supported by `innerHTML` (or any of the existing parser entry points); that's the main difference for now. But in future it needs to be a function so you can pass an argument to it (the sanitizer config).
So for now our best bet is to just mention the declarative shadow dom support.
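The safe/unsafe pairing described above can be sketched as follows. Note this is the intended eventual shape: `setHTML()` is not shipped anywhere at the time of this thread, so the sanitizing behavior shown is an assumption based on the discussion, not documented behavior.

```javascript
const el = document.querySelector("#target");

// Unsafe: markup is inserted as parsed, including event handler attributes.
el.setHTMLUnsafe('<img src="x" onerror="alert(1)">');

// Safe (future): the same call would sanitize against a baseline config,
// so the onerror handler would be stripped before insertion.
el.setHTML('<img src="x" onerror="alert(1)">');
```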
@mfreed7 I've tagged you with a couple of comments in the linked items.
- FYI only
`Element.setHTML()` - According to the definition of "standard" adopted by the browser compat data this is in a standard. I have recently updated this to be marked as `deprecated` in browser-compat since it has been withdrawn from Chrome and is behind a pref everywhere else. That has not yet propagated to docs, but when it does there will be a big fat deprecated warning on top of the page. That's about all we can do until either the identifier is supported in a browser or removed from the spec.
So this sounds like a technical issue with MDN, right? The state of setHTML() is "still in development". It is not shipped yet, but definitely actively in progress. Hopefully to be shipped in the next year, if things go well.
- For the setHTMLUnsafe, is this just called "unsafe" in contrast to the safe version that will eventually exist (point 1) which does sanitising? And again, we need this because setting `innerHTML` has unpredictable and imprecise behaviour for shadow roots?
Exactly right. "Unsafe" === "Unsanitized" in this case. It does roughly the same thing as `element.innerHTML = html`, except that it also parses declarative shadow DOM. It isn't that `innerHTML` is unpredictable or imprecise: it precisely does not parse declarative shadow DOM - `<template shadowrootmode=open>` will be parsed as a `<template>` instead of a #shadowroot.
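The difference described here can be made concrete with a short browser-only sketch (same markup, two entry points):

```javascript
const markup = '<div><template shadowrootmode="open">hi</template></div>';

// innerHTML never parses declarative shadow DOM: the template stays a <template>.
const a = document.createElement("section");
a.innerHTML = markup;
console.log(a.querySelector("div").shadowRoot); // null
console.log(a.querySelector("template") !== null); // true

// setHTMLUnsafe() does parse it: the template becomes a real shadow root.
const b = document.createElement("section");
b.setHTMLUnsafe(markup);
console.log(b.querySelector("div").shadowRoot !== null); // true
```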
Not covered in this issue, but nowhere do we clearly state the implications of `clonable` being set. My assumption is that
- if you copy a node that contains a shadow root and it is clonable, then everything in the node is deep copied - is that right? Are there any implications to such a copy? I mean presumably the copy is deep so the copy is completely independent (other than things like sharing a different reference to the same CSS definition)?
Not quite. If you clone a node that contains a shadow root that is clonable, then the contents of the shadow root will be deep cloned. But children of the node will still either be deep-cloned or shallow-cloned according to the value of the `deep` parameter to `cloneNode()`. Otherwise, behavior is identical to regular `cloneNode()`.
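This cloning rule can be sketched as below (browser-only; the comments restate mfreed7's description above rather than independently verified output):

```javascript
// A host with both a clonable shadow root and a light-DOM child.
const host = document.createElement("div");
host.attachShadow({ mode: "open", clonable: true }).innerHTML = "<p>shadow</p>";
host.innerHTML = "<span>light child</span>";

const shallow = host.cloneNode(false);
// The clonable shadow root is deep-cloned even for a shallow clone,
// but the light-DOM <span> is skipped because deep=false.
console.log(shallow.shadowRoot !== null);
console.log(shallow.querySelector("span")); // null

const deep = host.cloneNode(true);
// With deep=true, both the shadow root and the light-DOM child are present.
console.log(deep.shadowRoot !== null && deep.querySelector("span") !== null);
```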
- However if you set something as `clonable=false` and copy the node then the new copy will have nothing inside the element. It will be as though the element is empty and always was - right? In this case what are the implications? For example if I copy a web component and it has a shadow root, presumably the rest of the component will exist but now it doesn't have any internals? Won't that break things?
Correction: "the new copy will have no shadow root". I.e. with clonable=false, the shadow root is not cloned. Your question about web components depends on how the web component was written. It is in charge of setting or not setting clonable on its shadow root. Typically web components have constructor (or connectedCallback) logic that creates the shadow root and populates it. So if clonable=false, this logic will still work to create the shadow root.
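A sketch of this point: a component whose constructor creates and populates its own shadow root keeps working after cloning, because the clone's constructor runs when it is upgraded and recreates the root (`MyCard` is a hypothetical component, not from the thread):

```javascript
class MyCard extends HTMLElement {
  constructor() {
    super();
    // clonable defaults to false for an imperative attachShadow() call,
    // so the shadow root itself is not copied by cloneNode() ...
    const root = this.attachShadow({ mode: "open" });
    root.innerHTML = "<p>card internals</p>";
  }
}
customElements.define("my-card", MyCard);

const original = document.createElement("my-card");
const copy = original.cloneNode(true);
// ... but when the copy is upgraded, its constructor runs too, so the
// clone ends up with its own freshly created, populated shadow root.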
Also note that a lot of conversation about clonable seems to be happening here: https://github.com/mdn/content/pull/33325
Thanks @mfreed7
- FYI only
`Element.setHTML()` - According to the definition of "standard" adopted by the browser compat data this is in a standard. I have recently updated this to be marked as `deprecated`... So this sounds like a technical issue with MDN, right? The state of `setHTML()` is "still in development". It is not shipped yet, but definitely actively in progress. Hopefully to be shipped in the next year, if things go well.
The thing is that setHTML() DID ship. So there are browsers out there that include it. The rule on compatibility data is that once something ships it remains in the web platform for 2 years, at which point it can be removed from the docs. I'll see if we can make an exception for this case.
- if you copy a node that contains a shadow root and it is clonable, then everything in the node is deep copied - is that right? Are there any implications to such a copy? I mean presumably the copy is deep so the copy is completely independent (other than things like sharing a different reference to the same CSS definition)?
Not quite. If you clone a node that contains a shadow root that is `clonable`, then the contents of the shadow root will be deep cloned. But children of the node will still either be deep-cloned or shallow-cloned according to the value of the `deep` parameter to `cloneNode()`. Otherwise, behavior is identical to regular `cloneNode()`.
So to be clear on our terminology here,
- a particular node can have children, which can be other nodes.
- a node can contain one, and only one shadow root (but that shadow root may contain nodes that themselves contain shadow roots)
So what you are saying is that the ShadowRoot belonging to a node is always deep cloned if it is cloned at all, but other child nodes are cloned according to the value of deep?
- However if you set something as `clonable=false` and copy the node then the new copy will have nothing inside the element. It will be as though the element is empty and always was - right? In this case what are the implications? For example if I copy a web component and it has a shadow root, presumably the rest of the component will exist but now it doesn't have any internals? Won't that break things?

Correction: "the new copy will have no shadow root". I.e. with `clonable=false`, the shadow root is not cloned. Your question about web components depends on how the web component was written. It is in charge of setting or not setting `clonable` on its shadow root. Typically web components have constructor (or `connectedCallback`) logic that creates the shadow root and populates it. So if `clonable=false`, this logic will still work to create the shadow root.
Thanks. This seems to match my new mental model.
When you're copying a web component, it isn't just a matter of the parser getting to some DOM object and copying all its properties in some kind of deep copy. The new object is constructed at some point (I think when the web component, or the node it is in, is added to a document). At whatever point that is, connectedCallback() runs, and at that point creates the shadow root.
So cloning a web component effectively restarts creation of the element.
Could be completely wrong. I wrote a similar comment to this in the issue that is now merged. If it seems wrong let me know.
Note, I have marked this as done. Note, however, that I hope to perhaps clean up setHTML() as well via BCD deletion. That would be a separate job though.
Note however that I hope to perhaps clean up setHTML() as well via BCD deletion. That would be a separate job though.
@hamishwillee Do you still want to track that in this issue or separate issue?
I still want to track this here please.
I am closing this. Kicked off the removal of the sanitizer API stuff in #35238