Markdown based notebooks
This PR is an outcome of the Jupyter Notebook workshop. The JEP proposes an alternative Markdown-based serialization syntax for Jupyter notebooks that allows lossless serialization from/to `.ipynb`, is reasonably human readable, is interoperable with standard text tools, and is more VCS-friendly.
Creating a GitHub issue to decide if it's a JEP in this repository is skipped after discussing it with @fcollonval during the workshop.
Resolves #102
Note that the syntax `` ```{jupyter.code-cell} `` is incompatible with pandoc's markdown. Ideally, it would be nice if the proposed format could be read and processed by pandoc (and thus doesn't require a custom parser).
Why not use an attribute that is compatible? E.g. `{.jupyter .code-cell}` or `{.jupyter-code-cell}` or even just `{.code-cell}`.
There is currently no official attribute syntax for commonmark, but if this comes it is likely to be very similar to the pandoc attribute syntax.
See https://github.com/jgm/commonmark-hs/blob/master/commonmark-extensions/test/attributes.md
Similar remarks apply to other uses of `{jupyter.XXX}`.
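To make the difference concrete, here is a rough sketch of how a pandoc-style attribute block decomposes into an id, classes, and key-value pairs. This is a hypothetical, simplified helper (pandoc's real grammar handles escaping and more); the point is that a bare token like `jupyter.code-cell` has no slot in this grammar, which is why it doesn't parse as an attribute.

```python
def parse_attributes(attr: str):
    """Parse a pandoc-style attribute block such as '{#id .cls key="val"}'.

    Simplified sketch: only ids (#...), classes (....) and key=value
    pairs are handled. A bare token like 'jupyter.code-cell' is NOT
    valid pandoc attribute syntax and is silently ignored here, which
    is the crux of the incompatibility discussed above.
    """
    body = attr.strip()
    if body.startswith("{") and body.endswith("}"):
        body = body[1:-1]
    ident, classes, kvs = "", [], {}
    for tok in body.split():
        if tok.startswith("#"):
            ident = tok[1:]
        elif tok.startswith("."):
            classes.append(tok[1:])
        elif "=" in tok:
            key, value = tok.split("=", 1)
            kvs[key] = value.strip('"')
    return ident, classes, kvs
```

For example, `{.jupyter .code-cell}` yields two classes, while `{.jupyter-code-cell}` yields one.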
thanks for the comments @jgm
yes, the syntax `` ```{jupyter.code-cell} `` is aimed at providing concrete "directives" in the document that can be used to specify the various notebook blocks, which go beyond code blocks and also specify outputs, attachments and other complex/rich types.
So the JEP isn't favoring any existing parser/library, and while it isn't currently compatible out of the box with pandoc, it's also not compatible out of the box with jupytext, myst or quarto -- although the syntax currently shares a lot with the quarto and myst styles.
A custom parser/serializer, or modifications to existing parsers, will probably be needed anyway in order to support the serialisation requirements around output and attachment blocks?
Yes, I understand the intent. But that intent can be met without departing from standard attribute syntax.
If you used one of the variants I suggested, or e.g. `{.jupyter:code-cell}`, which also works, then you'd be able to read one of these md notebooks with pandoc and process it with filters.
With your current syntax suggestion, that wouldn't be possible; you'd be giving up easy interoperability for no good reason that I can discern.
A custom parser/serializer, or modifications to existing parsers, will probably be needed anyway in order to support the serialisation requirements around output and attachment blocks?
This could all be handled with filters with the existing pandoc markdown or extended commonmark parser; none of it requires changes to the parser.
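For a flavor of what that looks like: pandoc filters receive the document as a JSON AST on stdin and write a transformed AST to stdout. Below is a minimal sketch, assuming the CodeBlock JSON shape pandoc uses (`{"t": "CodeBlock", "c": [[id, classes, keyvals], text]}`); a real filter would also recurse into nested blocks.

```python
def tag_code_cells(ast: dict) -> dict:
    """Add a jupyter=code-cell attribute to every top-level CodeBlock
    that carries the 'code-cell' class -- the kind of transformation a
    pandoc filter could do instead of a custom parser.

    Sketch only: a production filter would walk nested blocks (divs,
    lists, ...) and would wrap this with json.load(sys.stdin) /
    json.dump(..., sys.stdout), as pandoc's --filter protocol expects.
    """
    for block in ast.get("blocks", []):
        if block.get("t") == "CodeBlock":
            (ident, classes, kvs), code = block["c"]
            if "code-cell" in classes:
                kvs.append(["jupyter", "code-cell"])
            block["c"] = [[ident, classes, kvs], code]
    return ast
```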
Thanks @jgm for the feedback! The motivation for having `jupyter` somewhere is namespacing. Other than this, we certainly should consider variants of the proposed syntax if this helps interoperability and increases the odds of being consistent with whatever standards may emerge in the Markdown world.
Using `.jupyter.code` instead of `jupyter.code` seems totally fine to me.
I am not sure about `.jupyter .code`: on the one hand, it's consistent with the `.code` keyword of pandoc. On the other hand, it carries less of the idea of namespacing.
Presumably a good guideline to follow is what would be customary in the CSS world. I am far from an expert there!
Using Markdown for notebooks that display nicely as READMEs (similar to https://github.com/mwouts/jupytext/issues/220) has been explored for Polyglot Notebooks / Try .NET. One detail from that design that might be of interest here is that we also put cell metadata after the code fence, but always prefixed with the language name in order to leverage existing syntax highlighting features.
Here's an example:
```python {metadata: ...}
x = 1
if x == 1:
    # indented four spaces
    print("x is 1.")
```
This renders with language-specific highlighting without displaying the metadata:
```python
x = 1
if x == 1:
    # indented four spaces
    print("x is 1.")
```
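A renderer supporting this style would split the fence info string into the language token (used for highlighting) and the trailing metadata. A minimal sketch of such a split (hypothetical helper; the metadata grammar itself is one of the open design points):

```python
def split_info_string(info: str):
    """Split a fence info string like 'python {metadata: ...}' into the
    language token and the trailing metadata text.

    Hypothetical sketch: real implementations would then parse the
    metadata text with whatever grammar the format settles on.
    """
    info = info.strip()
    if " " in info:
        lang, rest = info.split(" ", 1)
        return lang, rest.strip()
    return info, ""
```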
Using `.jupyter.code` instead of `jupyter.code` seems totally fine to me.
Some implementations may take `.jupyter.code` to be specifying two class names rather than one (and thus to be equivalent to `.jupyter .code`). And in general, even if implementations supported it, having `.` or `:` in class names is not ideal. (Colons need to be escaped in CSS, and periods conflict with the class syntax.)
`.jupyter-code` or `.jupyter_code` should be fine.
Another alternative would be to use a key-value pair: `jupyter="code"`, `jupyter="output"`, etc.
One detail from that design that might be of interest here is that we also put cell metadata after the code fence, but always prefixed with the language name in order to leverage existing syntax highlighting features.
Thanks for the feedback that brings perspective to one of the open points.
I personally lean toward making this a recommended feature: parsers should support it; writers (including humans!) are encouraged to use it, but don't have to depending on the use case.
On the class attribute syntax: I don't like the idea of syntax that overloads the class attribute; `{.code-cell}` essentially equates to `<div class="code-cell"></div>`, whereas `{code-cell}` essentially equates to `<code></code>`, which is semantically stronger.
Speaking from a jupyter point of view, I think we want strong semantics around what a jupyter code-cell (or output, or attachment) is (with or without the `{}`), and what information should be on them in terms of parameters, attributes, metadata, etc. These are not `<div>`s of a certain class; they are semantically meaningful elements with a specific representation when serialized, and they are rendered as complex UI fragments in jupyter clients.
On interoperability: a block syntax of `{code-cell}` is already compatible with jupytext and MyST notebooks. With the introduction of new block types and the jupyter namespace, `{jupyter.code-cell}` is still well aligned with the block/directive syntax used by jupytext, MyST and, I think, quarto -- extension should be straightforward there.
A block syntax of {code-cell} is already compatible with jupytext and MyST notebooks
My point is that you should care about wider interoperability.
I think quarto - extension should be straightforward there.
Quarto is based on pandoc (it uses pandoc's parsers with a bunch of filters on top to process the AST), so you need to be interoperable with pandoc for that.
A block syntax of {code-cell} is already compatible with jupytext and MyST notebooks
My point is that you should care about wider interoperability.
I think we do? And I think we're considering and discussing that here. I guess what I'm not clear on is, since there are multiple possible (probably conflicting) tools to be interoperable with, how to weight them. E.g. I'm not clear on the extent to which pandoc is actively used alongside jupyter in the same way that jupytext is (i.e. in a tight loop over notebook development and execution), as opposed to, say, getting notebooks out to other formats for distribution of that material outside of jupyter.
Also, another big point on interoperability which hasn't been mentioned yet is GFM!
Maybe what we are missing in the JEP so far are some clearer requirement-like statements that can be discussed and agreed on, e.g.
- Must render fully on github (GFM)
- Must ____
Currently the "design goals" section is the closest thing to that, but it is still very loose: e.g. "The serialized notebook should be a valid Markdown file", whatever that means. Clearer requirements could better set the scene for then zeroing in on the syntax.
Quarto is based on pandoc (it uses pandoc's parsers with a bunch of filters on top to process the AST), so you need to be interoperable with pandoc for that.
Ah ok, I thought it was pandoc-flavored markdown + additional extensions -- are you saying that pandoc already supports the quarto code block syntax, which doesn't use class attributes and is close to the syntax already outlined in the JEP? Or is this special handling of a language attribute by pandoc?
e.g. shown here
I suspect that's a documentation bug. Pandoc allows
``` {.python}
or
``` python
I believe the same is true of Quarto, because they don't use a customized pandoc, just filters on top.
All I'm saying is that if there's any room for a choice between
{.jupyter-code}
{.jupyter:code}
{.jupyter.code}
{jupyter .code}
{jupyter-code}
etc., it would be desirable (in this planning stage) to pick one that pandoc can already handle. This increases interoperability at little cost. (This would have been a good design goal for MyST, too.)
I would love to see a new section addressing the topic of trust and signatures (the Jupyter Notebook security model). In particular: would the signature for a notebook be computed and stored in the markdown file?
- if yes, how?
- if no, will all cells/outputs always be treated as non-trusted upon opening a notebook in the `.nb.md` format?
Please also see https://github.com/jupyter/enhancement-proposals/issues/95#issuecomment-1501176251.
- Markdown YAML front matter can contain YAML-LD/JSON-LD front matter
- W3C Verified Credentials: https://www.w3.org/TR/vc-data-model/#concrete-lifecycle-example
- https://github.com/jupyter/nbformat/issues/44#issuecomment-759861977
- https://github.com/jupyter/nbformat/issues/98#issuecomment-319496861
- You normalize the graph before signing it
- Do `nb.md` and `.ipynb` versions of the notebook parse to the same graph, which is then normalized and hashed and signed?
@krassowski, thank you for raising this question!
As far as I understand from the documentation, the signature is produced from the outputs. Can we apply the same procedure to the outputs inside the Markdown file?
Most likely, I oversimplify things, and you probably see some rough edges. If so, could you share your thoughts?
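To make the question concrete: nbformat's trust model boils down to an HMAC digest computed over a normalized form of the notebook, and the same idea would carry over to a Markdown serialization as long as both formats parse to the same normalized structure. A sketch, under the simplifying assumption (mine, not the JEP's) that normalization means canonical JSON of each cell's source and outputs:

```python
import hashlib
import hmac
import json

def sign_cells(cells, secret: bytes) -> str:
    """Compute an HMAC-SHA256 signature over a normalized view of the
    cells, in the spirit of nbformat's trust model.

    Sketch only: here normalization keeps just (source, outputs) in
    canonical JSON; the real nbformat scheme normalizes the full
    notebook. The key property is that any serialization (.ipynb or
    Markdown) parsing to the same normalized cells signs identically.
    """
    normalized = json.dumps(
        [{"source": c.get("source", ""), "outputs": c.get("outputs", [])}
         for c in cells],
        sort_keys=True, separators=(",", ":"),
    ).encode("utf-8")
    return hmac.new(secret, normalized, hashlib.sha256).hexdigest()
```

Under this sketch, a cell parsed from `.ipynb` and the same cell parsed from the Markdown form produce the same signature, even if incidental metadata differs.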
cell outputs and attachment are mentioned at several places, but it is not clear to me if there is an option to have a companion file to markdown to persist those cell outputs and attachments.
cell outputs and attachment are mentioned at several places, but it is not clear to me if there is an option to have a companion file to markdown to persist those cell outputs and attachments.
Thanks for your feedback. Externalising cell outputs and attachments (e.g. in companion files) is indeed a natural feature. During our discussions, various use cases and approaches emerged. For an incremental approach, and also because the feature could be relevant for traditional ipynb notebooks as well, we decided to propose treating that feature in a follow-up JEP. See line 580 of:
https://github.com/jupyter/enhancement-proposals/pull/103/files#diff-932448845fb9d55aef27789043a371eb872aa644507bf72e049f5ab536428238R580
With the current JEP, cell outputs and attachments can be stored inline only, or not at all.
in a follow-up JEP.
Well, I would feel more comfortable if this important topic were handled in this JEP, to make sure all the bits make sense. It can make sense to discuss them in separate forums, but giving my +1 to a partial solution which excludes difficult aspects is not appealing to me.
See line 580
oh yes, it was indeed excluded.
- mhtml - ZIP Compressed HTML + assets with URLs rewritten in the resources, and thus different content hashes
- https://github.com/WICG/webpackage#specifications
- Web Bundles
Introducing the Web Bundles API
A Web Bundle is a file format for encapsulating one or more HTTP resources in a single file. It can include one or more HTML files, JavaScript files, images, or stylesheets.
Web Bundles, more formally known as Bundled HTTP Exchanges, are part of the Web Packaging proposal.
[A figure demonstrating that a Web Bundle is a collection of web resources.]
How Web Bundles work
HTTP resources in a Web Bundle are indexed by request URLs, and can optionally come with signatures that vouch for the resources. Signatures allow browsers to understand and verify where each resource came from, and treats each as coming from its true origin. This is similar to how Signed HTTP Exchanges, a feature for signing a single HTTP resource, are handled.
This article walks you through what a Web Bundle is and how to use one.
Explaining Web Bundles
To be precise, a Web Bundle is a CBOR file with a `.wbn` extension (by convention) which packages HTTP resources into a binary format, and is served with the `application/webbundle` MIME type. You can read more about this in the Top-level structure section of the spec draft. Web Bundles have multiple unique features:
- Encapsulates multiple pages, enabling bundling of a complete website into a single file
- Enables executable JavaScript, unlike MHTML
- Uses HTTP Variants to do content negotiation, which enables internationalization with the Accept-Language header even if the bundle is used offline
- Loads in the context of its origin when cryptographically signed by its publisher
- Loads nearly instantly when served locally
These features open multiple scenarios. One common scenario is the ability to build a self-contained web app that's easy to share and usable without an internet connection. [...]
- (more notes at https://westurner.github.io/hnlog/#comment-29296573 )
- Any new package format must support cryptographic signatures and ideally WoT identity
- W3C Verifiable Credentials
- https://www.w3.org/TR/vc-data-model/#use-cases-and-requirements
- https://w3c.github.io/vc-data-integrity/#proofs
- https://blockcerts.org/
- All of the resources in any new package SHOULD/MUST have URLs/URIs:
- W3C Web Annotations require stable URLs in order to share comments on resources with URIs
- https://jupyterbook.org/en/stable/interactive/comments.html -> sphinx-comments https://sphinx-comments.readthedocs.io/en/latest/
- Hypothes.is
- Utterances
- Dokie.li
- "Help compare Comment and Annotation services: moderation, spam, notifications, configurability" https://github.com/orgs/executablebooks/discussions/102
- Any new package format should support Linked Data bibliographic metadata:
- https://schema.org/CreativeWork > https://schema.org/ScholarlyArticle , https://schema.org/Article
- https://schema.org/Dataset
- https://schema.org/ImageObject etc
- Any new package format should have a declarative manifest with per-file hashes, a VC proof (~GPG `.asc`) and (bibliographic) metadata
- Should this new package format specify dependency edges in any way?
- conda-forge, emscripten-forge
- Should the .ipynb be the package manifest?
Well, I would feel more comfortable if this important topic were handled in this JEP, to make sure all the bits make sense. It can make sense to discuss them in separate forums, but giving my +1 to a partial solution which excludes difficult aspects is not appealing to me.
Thanks for giving us the opportunity to detail and clarify our reasoning.
In the use cases we had in mind, the feature did not look difficult, at least when it comes to the notebook format itself: one simple solution is to enable metadata for cell outputs and attachments specifying that the data is not provided inline, but is to be fetched from a given URL.
The feature is relevant for both Markdown and ipynb notebooks, and the above implementation does not depend on the format.
Of course, that's not all there is to externalizing data -- like how you make sure, e.g., that companion files remain available or URLs remain valid when the notebook is moved around -- but these difficulties are about tools and workflows, not about the file format of the notebook.
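As an illustration of that solution, here is a sketch of resolving externalized outputs back into a cell. The `external_url` metadata key and the `fetch` callable are hypothetical names for illustration only; they are not part of the JEP text.

```python
def resolve_outputs(cell: dict, fetch) -> dict:
    """Inline externalized outputs back into a cell.

    Hypothetical sketch: an output whose metadata carries
    'external_url' has its data fetched via fetch(url) and inlined;
    the original cell is left untouched.
    """
    resolved = []
    for out in cell.get("outputs", []):
        url = out.get("metadata", {}).get("external_url")
        if url is not None:
            out = dict(out, data=fetch(url))
            out["metadata"] = {k: v for k, v in out["metadata"].items()
                               if k != "external_url"}
        resolved.append(out)
    return dict(cell, outputs=resolved)
```

A tool could use any `fetch` implementation: a local companion-file reader, an HTTP client, or a content-addressed store.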
Does that sound adequate in the use cases you have in mind?
how you make sure, e.g., that companion files remain available or URLs remain valid when the notebook is moved around -- but these difficulties are about tools and workflows, not the file format of the notebook.
Keeping the companion file with its host is one aspect which is indeed not directly relevant to the file format.
My attention point was more about the cell id. With `ipynb`, a cell has its id, input and output all together under a single JSON stanza. It is easy to update them all at the same time. With a companion file, you completely lose that single structure, and something on top needs to keep things in sync. Think of cell deletion, insertion, split...: all of that will mutate the cell ids in ways that need to be reflected in the companion file. You will reply that this is also part of the tools and workflow, with which I would agree, but I don't see in the format definition the concept of a cell id (or code block id), nor the requirements put on tooling developers to ensure users are safe while editing the content. In other words, this JEP should define how the proposed format will indeed be usable and will support companion files in any way.
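That drift concern can at least be made checkable. A sketch of a consistency check between a notebook's cells and a hypothetical companion file laid out as a mapping `{cell_id: outputs}` (this layout is an assumption for illustration, not something the JEP specifies):

```python
def orphaned_outputs(cells, companion: dict) -> set:
    """Return the ids present in a companion outputs mapping but no
    longer present in the notebook's cells -- the kind of drift that
    editing operations (delete, split, merge, ...) can introduce.

    Hypothetical sketch: assumes cells carry an 'id' field and the
    companion file is a {cell_id: outputs} mapping.
    """
    live = {c["id"] for c in cells if "id" in c}
    return set(companion) - live
```

A tool could run such a check on save and either prune or flag the orphaned entries.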
I have mixed feelings on the format proposed for a few reasons:
- The JEP should have a section on "How we communicate to the broader community" if the proposed changes are adopted. This is really important from a messaging standpoint for the role of the `.ipynb` format going forward.
- While the technical merits seem appealing, will this open the door for further fragmentation of the `.ipynb` standard for notebooks? While it may not be the most modern approach now, it does, much like PDF (not an ideal technology), serve as a standard for notebook sharing.
We have started looking at this at the SSC meetings. We have decided to give at least another 2 weeks of discussion before moving forward.
I think having a markdown-based alternative format for Jupyter notebooks is a great idea.
But supporting and slightly expanding on the interoperability issues @jgm raised: just for simplicity's sake, I would also suggest using or adapting an existing format as far as possible, instead of introducing yet another variation.
Since a Quarto `qmd` file is already a functional alternative representation of a notebook (converted to `ipynb` for execution and back to `md` afterwards, including output cell contents), and it is already interoperable with Pandoc, why not build your solution on top of that?
In any case, I think it would be good to actively involve representatives of related projects in this process, e.g. Quarto's @cderv.
Since a Quarto qmd file is already a functional alternative representation of a notebook (converted to ipynb for execution and back to md afterwards, including output cell contents), and it is already interoperable with Pandoc, why not build your solution on top of that?
There has been mention of https://github.com/executablebooks/mystmd here, and I remember having seen public discussions between MyST and Quarto, if I am not mistaken. What about targeting interoperability between `ipynb` and `myst`, and then between `myst` and `qmd`?
Around ipynb interoperability, a general question for me is "How related/different would it be to https://github.com/mwouts/jupytext?"
are you saying that pandoc already supports the quarto code block syntax, which doesn't use class attributes and is close to the syntax already outlined in the JEP? or is this special handling of a language attribute by pandoc?
@stevejpurves @jgm Just chiming in to add some precision about this. The syntax `` ```{python} `` is used for executable code blocks, support for which is brought by Quarto. `#| echo: false` inside the block (as on the screenshot shared) is a syntax for options to use for execution. So it is a specific Quarto syntax additional to Pandoc's code block syntax `` ``` {.python} `` or `` ``` python ``, but compatible with the Markdown reader.
In Quarto, computations are handled before the Pandoc conversion by an engine, among them the Jupyter engine. The computation stage produces a `.md` intermediary file with the source code blocks and their results in Pandoc's Markdown syntax, to be processed with Pandoc.
Hope it helps clarify. Happy to show more if needed.
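For readers unfamiliar with the mechanism, a rough sketch of how such `#|` option lines could be separated from the code body. This is illustrative only; Quarto actually parses the option block as YAML.

```python
def parse_quarto_options(code: str):
    """Separate leading '#| key: value' option lines from a code block,
    mimicking Quarto's execution-option comments.

    Illustrative sketch: real Quarto parses the option lines as a YAML
    document, so nested values and types are richer than shown here.
    """
    options = {}
    lines = code.splitlines()
    n = 0
    while n < len(lines) and lines[n].startswith("#|"):
        key, _, value = lines[n][2:].partition(":")
        options[key.strip()] = value.strip()
        n += 1
    return options, "\n".join(lines[n:])
```

The key observation for the JEP discussion: the options travel inside the block body, so the fence info string itself stays plain and highlighter-friendly.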
I suspect that's a documentation bug. Pandoc allows
``` {.python}
or
``` python
I believe the same is true of Quarto, because they don't use a customized pandoc, just filters on top.
Just to clarify a little bit more on the Quarto side: we switched to a custom Reader since (I believe) Pandoc 3, so we're no longer strictly "just filters on top". We did that so that we wouldn't break backwards compatibility for the very common syntax
```{python}
code block
```
As @jgm pointed out, that is indeed not valid syntax for codeblock nodes in pure pandoc:
````shell
$ pandoc -f markdown -t native
```{python}
print("hello")
```
^D
[ Para [ Code ( "" , [] , [] ) "{python} print(\"hello\")" ] ]
````
But in quarto, you get this instead:
````shell
$ cat codeblock.qmd
---
engine: markdown # to avoid the execution of the code
---
```{python}
print("hello")
```
$ quarto render codeblock.qmd -t native -o -
pandoc -o /var/folders/nm/m64n9_z9307305n0xtzpp54m0000gn/T/quarto-sessionc91f1714/99369018/548c0fe7.native
to: native
standalone: true
default-image-extension: png
Pandoc
Meta { unMeta = fromList [] }
[ CodeBlock ( "" , [ "{python}" ] , [] ) "print(\"hello\")" ]
````
If we request markdown output we don't get precisely the same codeblock, but it's close enough that it roundtrips correctly:
````shell
$ quarto render codeblock.qmd -t markdown -o -
pandoc -o /var/folders/nm/m64n9_z9307305n0xtzpp54m0000gn/T/quarto-sessiona858c56a/94c20cae/e83363f1.md
to: markdown
standalone: true
default-image-extension: png
---
toc-title: Table of contents
---
``` {python}
print("hello")
```
````
I do in general think it would be better for everyone if we were to officially adopt (and potentially extend) an existing format, since there are at least three of these now, rather than define another new format for more text-friendly notebook serialization. I think a pretty strong case has to be made that none of these formats can be built on successfully before defining a new format, and I don't feel like that's been done. I'd start from what do myst/quarto/jupytext not do that we need, and how can we fill those gaps (if any) by building on those tools (or not).
Sorry, I claimed that `qmd` is Pandoc-interoperable, which it is not exactly, the exception being executable code blocks.
I'm not involved in Quarto development, but I have taken part in discussions on Quarto, and from that I know that there are mid-term plans to implement the initial extraction of code also via Pandoc, which needs a custom reader. @cscheid, I'm not sure whether that custom reader would be identical to the one you mentioned as already being used now? Would that mean that, through that custom reader, Pandoc would take over the complete work of the initial `qmd` → `ipynb` conversion, before calling NBClient? If yes, that might be a good starting point for something like `qmd` to take over the role of `ipynb`, i.e. clients supporting the new notebook format could use the same custom reader.
I'm not involved in Quarto development, but I have taken part in discussions on Quarto, and from that I know that there are mid-term plans to implement the initial extraction of code also via Pandoc, which needs a custom reader.
I'm sorry - I'm not sure what you're referring to here.