Fix outdated turbo config
Summary
- update `turbo.json` to use `tasks` instead of deprecated `pipeline`
Testing
- `yarn lint` (fails: ESLint couldn't find configuration file)
- `yarn test` (fails: cross-env not found)
https://chatgpt.com/codex/tasks/task_e_6840234652688331be71eb6c1ee439ca
I've literally just read this, so this is a bit of a knee-jerk reaction (perhaps I'll feel differently after sleeping on it :-P), but... I have concerns about this proposal:
- While I take your point about the prior art and widespread usage and support of JSON Schema, I'd argue that within the domain of design systems and design tokens, the current `{foo.bar.baz}` syntax is more familiar. Mainly because it's inspired by Style Dictionary's syntax, and Style Dictionary has been the most widely used design token export tool for a while now. Newer ones like Cobalt UI/Terrazzo support DTCG and therefore also use the current syntax.
- Yes, the current syntax requires a little bit of string parsing to detect a reference versus an actual value, but I'm not convinced that `if (token.$value.charAt(0) === '{')` is any more difficult than `if (typeof token.$value === 'object' && token.$value.$ref !== undefined)`.
Re-reading #166 and thinking about the resolver spec, I wonder whether "referencing tokens in another file" is the right problem to be solving. Clearly, there's a strong demand for being able to split up your design tokens across multiple files - both to keep things manageable when you have a large number of design tokens, and to do stuff like theming by varying which files get included by some mechanism. But that doesn't necessarily require references that point to specific files.
In many cases, the intent is to merge all the source files into a single collection of tokens, and then to do stuff with it. This is how tools like Theo and Style Dictionary have been doing it forever and I'd argue the Resolver Spec is another (more advanced) take on that concept too. Resolving design token references doesn't happen until after that file merging process has been done, so there's no need for them to specify a file that a token is in.
We can debate whether or not that kind of merge-and-then-resolve-references approach is the "best" way to go, but my point is enabling tokens spread across multiple files can be achieved without changing the current reference syntax.
So, considering how big of a breaking change this would be, I wonder if it's worth it.
@c1rrus great thoughts, thank you! 🙏 I wanted to think on the points you raised for a few days.
> In many cases, the intent is to merge all the source files into a single collection of tokens, and then to do stuff with it. This is how tools like Theo and Style Dictionary have been doing it forever and I'd argue the Resolver Spec is another (more advanced) take on that concept too. Resolving design token references doesn't happen until after that file merging process has been done, so there's no need for them to specify a file that a token is in.
This is a great callout, and I think really gets at the mental model of how people think about the DTCG spec now, how people want to think about it, etc. So this is exactly the conversation I wanted to have with this proposal, even if it doesn’t move forward.
Specifically, “Resolving design token references doesn't happen until after that file merging process has been done” can be preserved with this proposal with no changes. Again, see the comment from the JSON Schema spec:
> Attempting to remove all references and produce a single schema document does not, in all cases, produce a schema with identical behavior to the original form.
I interpret this as working exactly as aliases work today—you don’t have to resolve them preemptively, and you can (and should) preserve all the pointers as actual pointers (not deep clones of the data). I do realize the spec changes as-written are confusing about that, so I should update the examples. But this can work the exact same as aliases do. And leaning on how JSON pointers are used today, the filenames themselves are negligible; again, they can work the same. It just removes the restriction of keeping tokens in files by necessity and moves it to a pattern where you can put any token in any file by choice. You can point to tokens in the same file, or remote files. Up to you.
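For context on how mechanical that resolution step is, here is a minimal, non-normative sketch of same-document pointer lookup (a hypothetical helper, not part of the proposal; real tools would lean on an existing JSON Pointer library):

```ts
// Minimal JSON Pointer lookup (RFC 6901), enough for same-document "#/..." refs.
// Hypothetical helper for illustration only.
type Json = string | number | boolean | null | Json[] | { [key: string]: Json };

function resolvePointer(doc: Json, ref: string): Json {
  const pointer = ref.replace(/^#/, ""); // "#/color-blue-500/$value" -> "/color-blue-500/$value"
  let node: Json | undefined = doc;
  for (const rawSegment of pointer.split("/").slice(1)) {
    // RFC 6901 escaping: "~1" decodes to "/" and "~0" decodes to "~"
    const segment = rawSegment.replace(/~1/g, "/").replace(/~0/g, "~");
    if (node === null || typeof node !== "object") {
      throw new Error(`Cannot resolve "${ref}": "${segment}" is not reachable`);
    }
    node = Array.isArray(node) ? node[Number(segment)] : node[segment];
  }
  if (node === undefined) throw new Error(`Unresolvable pointer: "${ref}"`);
  return node;
}
```

A tool can hold on to the pointer string and only call something like this at the final resolution step, exactly as aliases are handled today.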
> Yes, the current syntax requires a little bit of string parsing to detect a reference versus an actual value, but I'm not convinced that `if (token.$value.charAt(0) === '{')` is any more difficult than `if (typeof token.$value === 'object' && token.$value.$ref !== undefined)`.
Yeah, performance wasn’t a primary motivator of the change, but this is mostly only true in JavaScript. I agree with you—it is fairly trivial to parse and typecast on the fly in JS. But in other languages that rely more heavily on fixed structures and memory allocation, leaning on structural type discrimination (like serde for Rust) is more ergonomic than string parsing and pattern-matching. Again, not to mention the fact that for JSON $refs, every language already has existing tooling to understand, parse, and resolve these.
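To make the structural-discrimination point concrete, here is a hedged sketch in TypeScript terms (Rust's serde achieves the same effect during deserialization; the token value shape here is simplified for illustration):

```ts
// Simplified token value: either a literal string or a { $ref } object.
// With $ref the two cases differ structurally, so a type system (or a
// deserializer like Rust's serde) can discriminate without string parsing.
type Ref = { $ref: string };
type TokenValue = string | Ref;

function isRef(value: TokenValue): value is Ref {
  return typeof value === "object" && value !== null && typeof value.$ref === "string";
}

// The current syntax forces a string pattern match instead:
function isCurlyAlias(value: TokenValue): boolean {
  return typeof value === "string" && value.startsWith("{") && value.endsWith("}");
}
```

The structural check needs no knowledge of string conventions; the shape alone identifies the reference.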
As an aside, after writing this I did think about the relation between this and the Resolver Spec more, and I reached the conclusion the Resolver Spec in its current form is sufficient, and whether we accept or reject this proposal won’t change that. The problems I also thought I would hit in the Resolver Spec ended up not being problems at all when I worked through a complex example!
So all that to say I’d like to evaluate this solely on the basis of:
- Does this solve problems for the current state of DTCG, and DTCG alone? (I think so)
- Does this “stand on the shoulders of giants” and use familiar, prior art? (I think so)
And we may come to the conclusion “no” and that’s OK with me. I’m happy whether this is accepted or rejected, so long as we evaluated it.
As a tool author, it is very difficult to reliably tell if a given spec is a single file vs. multi-file with remote refs. I don't work with JSON Schema enough, but there's enough prior art that I could easily find a solution within 5 minutes, so this looks pretty good.
At the same time, parsing "{" also wasn't that big of a pain point for me. And it does push us farther from human-readable, but I think that ship has sailed.
I also would encourage more breaking changes. As a consumer, I understand the (draft) contract.
> As a tool author, it is very difficult to reliably tell if a given spec is a single file vs. multi-file with remote refs. I don't work with JSON Schema enough, but there's enough prior art that I could easily find a solution within 5 minutes, so this looks pretty good.
That’s a great point, and it is a little bit of a papercut for sure—you don’t always know the depth of schemas that refer to other schemas until you try resolving it. But this is also a common problem that has over a decade of prior art and workarounds, too, so it’s not new.
This goes beyond this PR, but in many setups today, people are already using multiple token files for their design system. And they have to manage and describe that “meta” layer that determines how everything fits together. By using JSON pointers ($refs), some people MAY want to just have a single document that describes their entire token structure, and offload maintaining/building that meta layer. This gives them the option to do so (but is not a requirement).
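As a purely hypothetical illustration of that "single document" option (file names are made up for this sketch; the proposal mandates no particular layout):

```ts
// Hypothetical root document assembling per-file token sets via $refs.
const tokensRoot = {
  color: { $ref: "./color.tokens.json#/color" },
  typography: { $ref: "./typography.tokens.json#/typography" },
  // Same-document pointers are equally valid:
  "brand-bg": { $ref: "#/color/blue/500" },
};
```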
Of course, there are also more complex use cases with modes, etc., that will be solved by the still-in-progress Resolver proposal (that I have seen a preview of, and am a big fan of!). So if we accept JSON $refs, we don’t require people to use a specific structure, or set them up in a way where the Resolver proposal becomes harder to use. We just give people more options to organize tokens in a way that works best for them.
@drwpow walked me through the PR, and I grilled him on various aspects to ensure we weren't missing anything. I feel comfortable with this proposal, as it helps with adoption and tool creation (JSON ref libraries already exist for several languages).
The only possible issue is that we'd no longer support "inline aliases" interpolated in a value. For example: `{spacing.inline.small} 1em 0 {spacing.inline.small}`. I don't know if this is a limitation we want to embrace or if there are legitimate use-cases where interpolation is needed. Editors & community, please advise!
Started playing around with an implementation in Terrazzo, and I realized there’s an error with the proposal that needs to be corrected. And it may affect some people’s opinions about the approach:
1. $value shortcuts
The way this is described, both would essentially be the same:
```json
{
  "color-blue-500": { "$type": "color", "$value": "#218bff" },
  "color-action-bg": { "$ref": "#/color-blue-500" }
}
```

```json
{
  "color-blue-500": { "$type": "color", "$value": "#218bff" },
  "color-action-bg": { "$type": "color", "$value": { "$ref": "#/color-blue-500/$value" } }
}
```
The difference in the latter is that the `/$value` segment MUST be included, otherwise the reference would be invalid (the value for a color token couldn’t be a whole token itself). But that’s a little bit of extra boilerplate, isn’t it? We could take two approaches:
Solution 1 (recommended): Make the $value explicitly required (simple, correct, but verbose)
We could simply require the full value: `"$value": { "$ref": "#/color-blue-500/$value" }`. This is correct, and it demands precision because precision is necessary. It’s efficient and has no downsides; it just has some repetition in code.
Solution 2: Allow reserved words (more complex, more ambiguous, but less verbose)
Or we could just say “when a `$ref` key is on a reserved key (`$value`, `$type`, `$description`, etc.), and the reference resolves to have that same reserved word, pretend the alias points there,” e.g.:

- ✅ `"$value": { "$ref": "#/color-blue-500" }`: `color-blue-500` also has `$value` so it can be shortened
- ✅ `"$value": { "$ref": "#/color-blue-500/$value" }`: still allowed because it’s a full pointer
- ❌ `"$description": { "$ref": "#/color-blue-500" }`: error because `color-blue-500` doesn’t have a `$description` key defined
The downside here is this breaks JSON pointers somewhat—altering their behavior. It could likely lead to errors unless this behavioral quirk is known and replicated in all tools.
From a technical standpoint there’s nothing wrong with this approach, i.e. no way I can think of this would break or cause unwanted effects. Just increases burden on toolmakers to make sure they implement this extra little bit of logic.
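For a sense of what that extra logic might look like, here is a hedged sketch (the `resolve` parameter stands in for any JSON Pointer resolver, like the one sketched earlier in the thread; names are illustrative, not normative):

```ts
// Sketch of Solution 2's "reserved word" shortcut: the quirk every tool would
// need to replicate on top of plain pointer resolution.
const RESERVED = new Set(["$value", "$type", "$description"]);

function expandShortcutRef(
  reservedKey: string,
  ref: string,
  resolve: (ref: string) => unknown,
): unknown {
  const target = resolve(ref);
  if (typeof target !== "object" || target === null || Array.isArray(target)) {
    return target; // pointer already resolved to a plain value; nothing to shorten
  }
  const obj = target as Record<string, unknown>;
  if (RESERVED.has(reservedKey) && reservedKey in obj) {
    // ✅ "$value": { "$ref": "#/color-blue-500" } acts like ".../color-blue-500/$value"
    return obj[reservedKey];
  }
  if ("$value" in obj) {
    // ❌ pointed at a whole token that lacks this reserved key (e.g. no $description)
    throw new Error(`"${ref}" resolves to a token without ${reservedKey}`);
  }
  return target; // full pointers like ".../color-blue-500/$value" pass through unchanged
}
```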
2. Alias boundaries
Not an “error” per se, but an omission that needs to be clarified, is what, then, counts as an alias? For example, if we were to say “color-action-bg aliases color-blue-500,” which of the following code would still keep that statement true?
"color-action-bg": { "$ref": "#/color-blue-500" }(whole-token)"color-action-bg": { "$value": { "$ref": "#/color-blue-500/$value" } }($value only)"color-action-bg": { "$description": { "$ref": "#/color-blue-500/$description" } }($description, or any other non-$value property)
The difference here is aliases can’t be “flattened” until the very last step under the current language, and the reference must be preserved. But if users can compose any part of their schema, what constitutes an “alias” now?
Solutions
How should we amend the language?
1. (Recommended) A token is an alias of another token if its `$value` points to another token’s, e.g.:
   - ✅ `"color-action-bg": { "$ref": "#/color-blue-500" }`: this is an alias because `$value` is aliased by proxy of the entire object pointing to another token
   - ✅ `"color-action-bg": { "$value": { "$ref": "#/color-blue-500/$value" } }`: this is an alias because `$value` directly points to another token’s
   - ❌ `"color-action-bg": { "$description": { "$ref": "#/color-blue-500/$description" } }`: this is NOT an alias because `$value` does not point to another token
2. A token is only an alias of another token if the entire token points to another token AND `$value` is not overridden, e.g.:
   - ✅ `"color-action-bg": { "$ref": "#/color-blue-500", "$description": "Action BG color" }`: this is an alias because the entire token points to another token AND `$value` is not overridden (even if some other properties are)
   - ❌ `"color-action-bg": { "$ref": "#/color-blue-500", "$value": "#218bff" }`: this is NOT an alias because even though the token originally pointed to another token, `$value` was overridden, making this a unique token
   - ❌ `"color-action-bg": { "$value": { "$ref": "#/color-blue-500/$value" } }`: this is NOT an alias because the entire token object doesn’t point to another token
3. A token is only an alias of another token if `$value` is aliased directly, e.g.:
   - ❌ `"color-action-bg": { "$ref": "#/color-blue-500" }`: this is NOT an alias because it is happening one level up from `$value`
   - ✅ `"color-action-bg": { "$value": { "$ref": "#/color-blue-500/$value" } }`: this is an alias because `$value` directly points to another token’s
   - ❌ `"color-action-bg": { "$description": { "$ref": "#/color-blue-500/$description" } }`: this is NOT an alias because `$value` does not point to another token
Out of all the approaches, #3 would be beneficial in being the easiest to statically analyze. You could tell which tokens are aliases without having to resolve any pointers. But #3 would be the most restrictive, too, preventing people from aliasing entire groups like the other methods could.
As the spec proposer, just for the sake of argument, I would probably stick with #1: even though it’s hard to statically analyze the number of final tokens without doing the work of resolving everything, that seems like a worthwhile tradeoff to give schema authors more raw power to generate tokens and create aliases more freely, without restrictions. In other words, I don’t really know if it’s advantageous to make token counts easier up front, especially considering the aliases have to be resolved one way or another, and all approaches are the same amount of work in the end.
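For illustration only, alias detection under approach #1 might look roughly like this (simplified token shapes, not normative):

```ts
// Illustrative only: a token counts as an alias when the whole object is a
// $ref, or when its $value points at another token's $value.
type MaybeRef = { $ref?: unknown; $value?: unknown };

function isAlias(token: MaybeRef): boolean {
  if (typeof token.$ref === "string") return true; // whole-token ref: aliased by proxy
  const value = token.$value as MaybeRef | undefined;
  return (
    typeof value === "object" &&
    value !== null &&
    typeof value.$ref === "string" &&
    value.$ref.endsWith("/$value") // must target another token's $value
  );
}
```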
I’ll update the PR description with Solution 1 and approach #1 respectively as my rough proposals, but I could be easily swayed 🙂. Would really just love thoughts in general, and poking holes in this more.
tl;dr: at this time, i don't think we should replace the current alias syntax with the $ref syntax:
- i like the benefits of introducing pointer syntax to allow for more precise aliases and overrides, but i think these can be accomplished in a syntactically compatible way with the current (object notation) format.
- making the jump to multi-file data is complicated and i think we should consider it separately from syntax/format
Long version / stream of consciousness:
There are a few minor question marks and one major question mark in my head:
- Minor: going away from `{foo.bar.baz}` does lose the nice hint that tokens are meant to be nested as javascript objects. `#/color/blue/500/$value` feels less javascript-object-y than `color.blue.500.$value`, and carries the connotation (to me at least) that these are folder structures instead. However, in my adventures writing tools, I tend to just 'flatten' the dtcg JSON to make it easier to access tokens, so i don't see one being performance-wise better than the other.
- Minor: As far as I can tell `$ref` was originally designed for JSON schema, not JSON data itself. While that alone isn't disqualifying, it does make me suspicious that it might not be the right tool for the job; there are a lot of consequences to adopting it wholesale that we will have to pick through.
- Very minor: in my opinion escaping `/` and `#` with `~0` and `~1` is pretty esoteric, and makes my eyeballs itch a little.
- Major: a token file might be syntactically valid (ie contain refs formatted correctly), but aliases may not be possible to resolve — if the file you're referencing has moved, or if it's on a server that can't be accessed. I really like the implied feature of the current format, which is that a file with nesting and aliases can be statically analyzed for circular references, and if none exist, it can be safely[^1] converted into a flat dictionary with no aliases (see the sketch after this list). I feel like making the jump to multi-file should be separated out from the other ideas, which can be brought through under single file constraints.
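To illustrate that single-file guarantee, here is a non-normative sketch of the static cycle check (assuming tokens have been flattened to a `{ name: value }` dictionary, as described above):

```ts
// Non-normative sketch of the static cycle check in a single-file world.
// Assumes tokens flattened to { "color.brand.primary": "{color.blue.500}", ... }.
function findCircularAliases(tokens: Record<string, string>): string[] {
  const cycles: string[] = [];
  for (const start of Object.keys(tokens)) {
    const seen = new Set<string>([start]);
    let value: string | undefined = tokens[start];
    while (typeof value === "string" && value.startsWith("{") && value.endsWith("}")) {
      const next = value.slice(1, -1); // "{color.blue.500}" -> "color.blue.500"
      if (seen.has(next)) {
        cycles.push(`${start} -> ... -> ${next}`); // revisited a name: circular
        break;
      }
      seen.add(next);
      value = tokens[next]; // undefined ends the walk (unresolvable, not circular)
    }
  }
  return cycles;
}
```

If this returns an empty array, every chain bottoms out in a concrete value and the file can be flattened deterministically.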
For example, I agree that being able to grab more than just the $value of an aliased token is valuable. There may be a way to refine the current object notation that allows for things like:
```json
{
  "color": {
    "blue": {
      "$type": "color",
      "$description": "The color blue",
      "$value": "#0000ff"
    },
    "brand": {
      "$type": "color",
      "$description": "our brand color",
      "$value": "{color.blue.$value}"
    }
  }
}
```
Overrides are tricky, but I do think that we could add that to the current spec without breaking backwards (syntactic) compatibility.
```json
{
  "color.brand": {
    "$value": "{color.blue}",
    "$description": "our brand color"
  }
}
```
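One plausible (hedged, non-normative) reading of those override semantics: resolve the referenced token first, then let locally declared properties win:

```ts
// Hedged sketch of override resolution: the referenced token supplies defaults
// and local declarations override them. Shapes and names are illustrative only.
type Token = { $type?: string; $value?: unknown; $description?: string };

function applyOverride(referenced: Token, local: Partial<Token>): Token {
  return { ...referenced, ...local }; // spread order: local fields win
}

// e.g. "color.brand" above: applyOverride(colorBlue, { $description: "our brand color" })
```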
To a few of your points:
> When a schema isn’t distinguishable by structure, tool makers have to do additional work string matching to do basic typecasting and discrimination.
and
> Out of all the approaches, #3 would be beneficial in being the easiest to statically analyze. You could tell which tokens are aliases without having to resolve any pointers.
I agree that trying to differentiate an alias from a non-alias string feels somehow philosophically wrong, but I haven't had trouble detecting aliases with the current spec. I'm certainly not the most experienced implementer, so my attempt might be naive, but in my resolver I have this function:
```ts
// TokenValue = whatever a token's $value may hold (string, number, object, ...)
type TokenValue = string | number | boolean | object;

function isTokenAlias(tokenValue: TokenValue): boolean {
  return typeof tokenValue === "string" && tokenValue.startsWith("{") && tokenValue.endsWith("}");
}
```
Or you could use the regex `/^\{.*\}$/` if you like that kind of thing.
Identifying an alias by looking at the key and asking if it is `=== "$ref"` seems philosophically more correct, but in practice takes the same number of steps and is still, ultimately, string matching.
Either way, I'm for the spirit of proposal #1, in terms of allowing someone to alias an entire object or to cherry-pick attributes.
> Attempting to remove all references and produce a single schema document does not, in all cases, produce a schema with identical behavior to the original form.
I take this to mean that in JSON schema world, referencing another schema or part of a schema is both semantically and functionally different than "flattening" the reference into a single static document. For example, if our schema for dtcg files references a specific json schema, we're saying "if that json schema is updated at some point, the updates apply to our schema, too." This is different than simply copy-pasting the schema into ours, which loses the dynamic relationship between the two.
I believe the current alias format (object notation) respects both the semantic and functional differences between "foo is an alias for bar which has the value of baz" and "foo has the value of baz".
In a single file world, there is no functional difference at all, but the semantic difference is important and remains.
The only place where I think $ref becomes valuable is in a multi-file world.
[^1]: By "safely", I mean the semantics are preserved, eg a chain of nested references like color.brand.primary → color.blue.500 → #0000ff chain will deterministically be resolved to "color.brand.primary" = "#0000ff".
> The only possible issue is that we'd no longer support "inline aliases" interpolated in a value. For example: `{spacing.inline.small} 1em 0 {spacing.inline.small}`. I don't know if this is a limitation we want to embrace or if there are legitimate use-cases where interpolation is needed. Editors & community, please advise!
@kaelig Losing the possibility for shorthand design token values takes away a lot of flexibility we currently have to create systematic design system approaches.
This also sounds like it would no longer be possible to do more algorithmic design system approaches, for example combining two or more other design tokens in a calculation in order to compute the value of a token.
Both would be a real bummer IMO.
> @drwpow: Proposal: count anything as an alias if a $value of one token points to another token’s $value (which means aliasing the entire token object itself, or a group, means aliases are created)

> @kaelig: The only possible issue is that we'd no longer support "inline aliases" interpolated in a value. For example: `{spacing.inline.small} 1em 0 {spacing.inline.small}`. I don't know if this is a limitation we want to embrace or if there are legitimate use-cases where interpolation is needed. Editors & community, please advise!
On this one point I have to say I've resented, in the past, the fact that .value gets preferential treatment and that I can't reference an entire subpath of my token tree, or a specific field like a description or attribute. There are relations in my tokens that would allow me to write DRYer code, that I feel I can't express with aliasing being reserved to values.
What I'm saying is very much beside the point of Drew's proposal and bordering on off-topic, but a change of format would be a good time to address this topic if it hasn't already been addressed in prior conversations.
> On this one point I have to say I've resented, in the past, the fact that `.value` gets preferential treatment and that I can't reference an entire subpath of my token tree, or a specific field like a description or attribute. There are relations in my tokens that would allow me to write DRYer code, that I feel I can't express with aliasing being reserved to values.
@Sidnioulz check out the proposal I put together in https://github.com/design-tokens/community-group/pull/298 — it brings in a lot of @drwpow's suggestions from this one, but without breaking changes. To wit, it opens up the possibility of referencing subpaths!
Hi all, I've discovered this specification in my exploration of building a unified design token linter/code generation tool (apologies, I've embarked on this before discovering the prior art!) I hope you don't mind me dropping in with some comments, from an outsider perspective.
Reading through this, and the wider spec as it currently exists, I see lots of things borrowed from JSON Schema. It seems a lot of effort is going into discussion of schema internals e.g. multi-file references, handling edge cases. These are totally valid points, but they make me question: why are we inventing our own bespoke referencing and validation mechanism at all?
There are established, well-designed specifications for describing and linking JSON documents - JSON Schema being one we appear to be re-implementing indirectly here. They already define things like $ref, syntax, remote references etc. Rather than debating precise schema implementation details like whether to escape /, could we not simply choose an existing superset and adopt it wholesale? We can then focus energy on the aspects of this spec that are domain-specific i.e. design tokens.
In practice, this might look like:
- Picking a base spec: e.g. we might decide a design token file is a JSON document validated by JSON Schema. We might decide it's JSON Pointer syntax. The point is not which one we choose, but that we don't design our own.
- Reference, not replicate: we should avoid re-implementing existing specs, or straying outside our domain (design token interchange format).
I think we should decide how coupled we want the specification to be to existing tooling. We should avoid artificially limiting the robustness of this specification because migration may be difficult for downstream consumers. There are no legitimate downstream consumers yet, this specification is not final. There is still time to make a clean break.
My concern is that we seem to be spending time debating custom syntax and validation when those concerns are orthogonal to design tokens themselves. The scope of this group is to define a design token interchange format, not to create a new flavour of JSON. Using an existing JSON superset would let us stand on existing tooling and remain focused on our raison d'être.
Thanks again for the work you're doing here, I am here also because I've encountered the pain this group is trying to solve. I hope you take my drive by commentary in good faith!
@brettdorrans Thanks for your feedback! I think we are aiming at the same things overall.
> There are established, well-designed specifications for describing and linking JSON documents - JSON Schema being one we appear to be re-implementing indirectly here. They already define things like $ref, syntax, remote references etc.
This is a proposal to align the DTCG spec around exactly what you specified—existing standards and prior art. This quite literally adopts part of JSON Schema. Would you say you are in favor of this proposal to the spec? Or are you saying “we should have started here in the first place?”
> @brettdorrans Thanks for your feedback! I think we are aiming at the same things overall.
>
> > There are established, well-designed specifications for describing and linking JSON documents - JSON Schema being one we appear to be re-implementing indirectly here. They already define things like $ref, syntax, remote references etc.
>
> This is a proposal to align the DTCG spec around exactly what you specified—existing standards and prior art. This quite literally adopts part of JSON Schema. Would you say you are in favor of this proposal to the spec? Or are you saying “we should have started here in the first place?”
Hey @drwpow - thank you for your efforts on this project. Apologies if my comments were unclear. I'm glad to hear the goal is to align with existing standards, and I appreciate this PR moves further towards that goal.
I do support adopting JSON Schema's syntax and pointer semantics as part of the spec, but I think we should go further. Right now this PR focuses on introducing $ref for aliases. My view is that the design token format shouldn't reinvent JSON-Schema-like features piecemeal; instead a better approach may be:
- Adopt JSON Schema across the entire spec, rather than adopting parts of it or reimplementing its pointer system.
- Use `$ref` universally, not just for `$value`. If we adopt JSON Schema holistically, we can allow tokens to reference arbitrary parts of other tokens (or documents) in future. We open up a lot more interoperability this way. We shouldn't need to define what an alias is or how `$ref` works; we should point to the JSON Schema docs.
- Avoid re-implementing validation logic or enforcing this too tightly in the spec. JSON Schema tooling already handles remote references, recursion, circular-reference detection, etc. If we fully embrace it, tool authors can use existing libraries instead of writing custom resolvers for DTCG-specific syntax (this is the worst-case scenario imo!)
I appreciate that this PR is trying to align DTCG with prior art. From my perspective it's a good start, but it doesn't yet resolve the bigger question of why we are debating syntax for a referencing/validation mechanism at all.
Leveraging JSON Schema and/or JSON Pointer wholesale would allow the specification to focus on design‑token concepts rather than on schema‑design edge cases, and it would future‑proof the format by keeping it compatible with a broad ecosystem of tooling.
> if we adopt JSON Schema holistically, we can allow tokens to reference arbitrary parts of other tokens (or documents) in future.
Could you provide an example of what that looks like? JSON Schema exists to enforce a top-down, fixed schema for something. You cannot store values in JSON Schema; it is only meant for validation of external files. My understanding is that “adopting JSON Schema holistically” would fundamentally change the DTCG spec from an interchange format to a format where people can validate files they’ve written themselves.
I think you may be missing some context here—the purpose of this spec is a reaction to something like Style Dictionary, where users can create token syntax ad hoc, but from one design system to another, tools and code aren’t as portable. The DTCG spec was created to establish common, standard formats for design tokens in JSON that could be shared across systems. The goal is not to constrict what folks do, rather, just to reduce reinventing the wheel.
If I’m misunderstanding, perhaps a more concrete example would help—could you, say, recreate a color system from an open design system illustrating what you had in mind? I think providing more concrete code could really help clear some confusion here on both sides.
> There are no legitimate downstream consumers yet, this specification is not final. There is still time to make a clean break.
This isn’t true. There are many, many legitimate downstream consumers, including GitHub, Figma, The Guardian, and more. Penpot, Tokens Studio, Style Dictionary, and Terrazzo are all tools that support the latest draft versions of the spec.
> The only possible issue is that we'd no longer support "inline aliases" interpolated in a value. For example: `{spacing.inline.small} 1em 0 {spacing.inline.small}`. I don't know if this is a limitation we want to embrace or if there are legitimate use-cases where interpolation is needed. Editors & community, please advise!
@kaelig
If I understand this correctly, while the parts are separate, like in typography tokens, every part could have its own reference, right?
Wondering if this creates an issue for semi-transformed tokens, like we have in style-dictionary @jorenbroekema?
> > if we adopt JSON Schema holistically, we can allow tokens to reference arbitrary parts of other tokens (or documents) in future.
>
> Could you provide an example of what that looks like? JSON Schema exists to enforce a top-down, fixed schema for something. You cannot store values in JSON Schema; it is only meant for validation of external files. My understanding is that “adopting JSON Schema holistically” would fundamentally change the DTCG spec from an interchange format to a format where people can validate files they’ve written themselves.
Hey, yes I am currently working on a proposal/alternative I'm happy to share when ready! Hopefully it'll be a bit more concrete with examples.
> I think you may be missing some context here—the purpose of this spec is a reaction to something like Style Dictionary, where users can create token syntax ad hoc, but from one design system to another, tools and code aren’t as portable. The DTCG spec was created to establish common, standard formats for design tokens in JSON that could be shared across systems. The goal is not to constrict what folks do, rather, just to reduce reinventing the wheel.
Unfortunately, from my perspective as a potential implementer of the spec - it's too proscriptive as written, rendering the tooling I envision impossible with the current draft. I fully recognise I'm an interloper here who hasn't been embedded in the full context of the efforts going on to write this standard, and I suspect I'm approaching it from a different angle than you and the current maintainers. But perhaps this is a good thing! My own explorations, once presented properly, might be helpful for the project even if they're a parallel or orthogonal approach.
> If I’m misunderstanding, perhaps a more concrete example would help—could you, say, recreate a color system from an open design system illustrating what you had in mind? I think providing more concrete code could really help clear some confusion here on both sides.
I do not want to derail the topic at hand too much, so I will create my own issue/discussion once I have some examples prepared.
> > There are no legitimate downstream consumers yet, this specification is not final. There is still time to make a clean break.
>
> This isn’t true. There are many, many legitimate downstream consumers, including GitHub, Figma, The Guardian, and more. Penpot, Tokens Studio, Style Dictionary, and Terrazzo are all tools that support the latest draft versions of the spec.
This is true. The spec is not final, so any current downstream consumers are using it at their own risk. Breaking changes are to be expected, and we cannot (should not!) let the existence of early adopters hobble development of the spec.
> > > There are no legitimate downstream consumers yet, this specification is not final. There is still time to make a clean break.
> >
> > This isn’t true. There are many, many legitimate downstream consumers, including GitHub, Figma, The Guardian, and more. Penpot, Tokens Studio, Style Dictionary, and Terrazzo are all tools that support the latest draft versions of the spec.
>
> This is true. The spec is not final, so any current downstream consumers are using it at their own risk. Breaking changes are to be expected, and we cannot (should not!) let the existence of early adopters hobble development of the spec.
The people who adopted DTCG are the people who are contributing here and who are making it possible to have feedback and understand the edge cases that the format will need to support in v1. It's a long-winded journey.
Whilst indeed the spec is not final and breaking changes may happen, to say that the people who are using it, providing feedback, and contributing here are not "legitimate" users feels dismissive of their own efforts.
There are many people using DTCG or wanting to use it who are worried about breaking changes and need some measure of stability to continue using the format in production. The alternative is that they don't use it and that DTCG stalls due to a lack of real-world data. It's not a desirable alternative.
Non-breaking changes help keep momentum alive and the cost of using DTCG low. There will be plenty of time for a v2 and a v3 where old ways of doing things can be replaced by newer, battle-tested ways. I do think mistakes in the spec should be fixed, but here's where I disagree with you:
> We should avoid artificially limiting the robustness of this specification because migration may be difficult for downstream consumers.
A superior technical system with no users provides less real-world value than a technical system with known flaws and with users. As a token tool developer, I find it acceptable to have both the dot notation and $ref for token aliasing, especially if we have token tool libraries that can be used in other codebases for alias resolution, accounting for both syntaxes.
> > This is true. The spec is not final, so any current downstream consumers are using it at their own risk. Breaking changes are to be expected, and we cannot (should not!) let the existence of early adopters hobble development of the spec.
>
> The people who adopted DTCG are the people who are contributing here and who are making it possible to have feedback and understand the edge cases that the format will need to support in v1. It's a long-winded journey.
>
> Whilst indeed the spec is not final and breaking changes may happen, to say that the people who are using it, providing feedback, and contributing here are not "legitimate" users feels dismissive of their own efforts.
@Sidnioulz Thank you for taking the time to respond - I want to sincerely apologise if my choice of words came off as dismissive. That was most definitely not my intent. I have the utmost respect for the work everyone here is putting into this.
On reflection, I think I have probably come to this group with the wrong attitude - I am here because I needed a standard, then I found that standard limiting in frustrating ways - so perhaps that frustration clouded my words. I apologise for expressing that frustration in a dismissive way.
My concern was not meant to diminish the effort or legitimacy of the adopters or contributors - 'legitimate' was a poorly chosen word. I completely agree that early adopters and feedback are invaluable. I would consider myself in this group.
> @Sidnioulz Thank you for taking the time to respond - I want to sincerely apologise if my choice of words came off as dismissive. That was most definitely not my intent. I have the utmost respect for the work everyone here is putting into this.
>
> On reflection, I think I have probably come to this group with the wrong attitude - I am here because I needed a standard, then I found that standard limiting in frustrating ways - so perhaps that frustration clouded my words. I apologise for expressing that frustration in a dismissive way.
>
> My concern was not meant to diminish the effort or legitimacy of the adopters or contributors - 'legitimate' was a poorly chosen word. I completely agree that early adopters and feedback are invaluable. I would consider myself in this group.
Thank you! We're all deeply invested and it's easy to let that passion take over. I appreciate you taking the time to clarify what you meant!
> @kaelig: The only possible issue is that we'd no longer support "inline aliases" interpolated in a value. For example: `{spacing.inline.small} 1em 0 {spacing.inline.small}`. I don't know if this is a limitation we want to embrace or if there are legitimate use-cases where interpolation is needed. Editors & community, please advise!

> @lukasoppermann: If I understand this correctly, while the parts are separate, like in typography tokens, every part could have its own reference, right?
Yes! The downsides are that the "parts" breakdown would become quite granular, and interpolation would likely be really hard to achieve.
(but I don't have a strong opinion yet!)
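To picture how granular that gets, here is a hedged sketch (made-up token paths) of a composite typography token with per-part references under the $ref proposal:

```ts
// Hedged sketch: a typography token whose parts each carry their own reference.
// Token paths are invented for illustration; nothing here is normative.
const typographyBody = {
  $type: "typography",
  $value: {
    fontFamily: { $ref: "#/font/family/sans/$value" },
    fontSize: { $ref: "#/font/size/100/$value" },
    fontWeight: 400, // literal parts and referenced parts can mix freely
    lineHeight: { $ref: "#/font/lineheight/normal/$value" },
  },
};
// What disappears is interpolation: there is no single-string equivalent of
// "{spacing.inline.small} 1em 0 {spacing.inline.small}".
```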
Thanks all for input! We’ll be rolling this into #298 which embodies many similar concepts, but takes a more holistic approach to backwards-compatibility and the way it affects tokens and groups.