proposal-signals

Rationale: Autotracking

Open NullVoxPopuli opened this issue 10 months ago • 19 comments

A key part of this proposal is autotracking. Autotracking can be a somewhat complicated mechanism at first (it is a transparent feature, but has implications for how folks will write code). Here is an attempt at explaining why it's included in Signals:

To me, autotracking means that libraries built with signals would inherently work in environments that just do not care about autotracking. (Imo, this is a hard requirement: there must be some level of "same behavior" in and out of environments that know about Signals.)

Here, I've included some prose from prior RFCs discussing the move away from explicit dependency lists (similar to React's dependency-array syntax, e.g.: useMemo(() => ..., [depListHere])), which were highly bug-prone, required linting to get right, and made for generally worse DevEx.

From RFC#410

Terminology Key

  • tracked properties => Signal.State (this proposal)
  • computed => useMemo (React ~ ish)

Author: @pzuraq


Motivation

Tracked properties are designed to be simpler to learn, simpler to write, and simpler to maintain than today's computed properties. In addition to clearer code, tracked properties eliminate the most common sources of bugs and mental model confusion in computed properties today, and reduce memory overhead by not caching by default.

Leverage Existing JavaScript Knowledge

Ember's computed properties provide functionality that overlaps with native JavaScript getters and setters. Because native getters don't provide Ember with the information it needs to track changes, it's not possible to use them reliably in templates or in other computed properties.

New learners have to "unlearn" native getters, replacing them with Ember's computed property system. Unfortunately, this knowledge is not portable to other applications that don't use Ember that developers may work on in the future, and while this problem may be lessened by adopting native classes and decorators, it still requires users learn Ember's notification system and its quirks.

Tracked properties are as thin a layer as possible on top of native JavaScript. Tracked properties look like normal properties because they are normal properties.

Because there is no special syntax for retrieving a tracked property, any JavaScript syntax that feels like it should work does work:

// Dot notation
const fullName = person.fullName;
// Destructuring
const { fullName } = person;
// Bracket notation for computed property names
const fullName = person['fullName'];

Similarly, syntax for changing properties works just as well:

// Simple assignment
this.firstName = 'Yehuda';
// Addition assignment (+=)
this.lastName += 'Katz';
// Increment operator
this.age++;

This compares favorably with APIs from other libraries, which become more verbose than necessary when JavaScript syntax isn't available:

this.setState({
  age: this.state.age + 1,
});
this.setState({
  lastName: this.state.lastName + 'Katz',
});

Avoiding Dependency Hell

Currently, Ember requires developers to manually enumerate a computed property's dependent keys: the list of other properties that this computed property depends on. Whenever one of the listed properties changes, the computed property's cache is cleared and any listeners are notified that the computed property has changed.

In this example, 'firstName' and 'lastName' are the dependent keys of the fullName computed property:

import EmberObject, { computed } from '@ember/object';

const Person = EmberObject.extend({
  fullName: computed('firstName', 'lastName', function() {
    return `${this.firstName} ${this.lastName}`;
  }),
});

While this system typically works well, it comes with its share of drawbacks.

First, it's extra work to have to type every property twice: once as a string as a dependent key, and again as a property lookup inside the function. While explicit APIs can often lead to clearer code, this verbosity has the potential to complicate the implementation without improving developer intent at all. People understand intuitively that they are typing out dependent keys to help Ember, not other programmers.

It's also not clear what syntax goes inside the dependent key string. In this simple example it's a property name, but nested dependencies become a property path, like 'person.firstName'. (Good luck writing a computed property that depends on a property with a period in the name.)

You might form the mental model that a JavaScript expression goes inside the string—until you encounter the {firstName,lastName} expansion syntax or the magic @each syntax for array dependencies.

The truth is that dependent key strings are made up of an unintuitive, unfamiliar microsyntax that you just have to memorize if you want to use Ember well.

Lastly, it's easy for dependent keys to fall out of sync with the implementation, leading to difficult-to-detect, difficult-to-troubleshoot bugs.

For example, imagine a new member on our team is assigned a bug where a user's middle name is not appearing in their profile. Our intrepid developer finds the problem, and updates fullName to include the middle name:

import EmberObject, { computed } from '@ember/object';

const Person = EmberObject.extend({
  fullName: computed('firstName', 'lastName', function() {
    return `${this.firstName} ${this.middleName} ${this.lastName}`;
  }),
});

They test their change and it seems to work. Unfortunately, they've just introduced a subtle bug. If the user's middleName were to change, fullName wouldn't update! Maybe this will get caught in a code review, given how simple the computed property is, but noticing missing dependencies is a challenge even for experienced Ember developers when the computed property gets more complicated.

Tracked properties have a feature called autotracking, where dependencies are automatically detected as they are used. This means that as long as all properties that are dependencies are marked as tracked, they will be detected automatically:

import { tracked } from '@glimmer/tracking';

class Person {
  @tracked firstName = 'Tom';
  @tracked lastName = 'Dale';

  get fullName() {
    return `${this.firstName} ${this.lastName}`;
  }
}

Note that getters and setters do not need to be marked as tracked, only the properties that they access need to. This also allows us to opt out of tracking entirely, like if we know for instance that a given property is constant and will never change. In general, the idea is that mutable, watchable properties should be marked as tracked, and immutable or unwatched properties should not be.
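The mechanism behind this can be sketched in a few lines of plain JavaScript: a module-level "current computation" slot records which cells are read while a computation runs. The names below (createCell, createComputed) are illustrative only, not Ember's or this proposal's actual implementation:

```javascript
// A sketch of autotracking: a module-level slot records which cells are
// read while a computation runs. Illustrative names, not a real API.
let currentComputation = null;

function createCell(initial) {
  let value = initial;
  const subscribers = new Set();
  return {
    get() {
      // The autotracking step: a running computation registers itself.
      if (currentComputation) subscribers.add(currentComputation);
      return value;
    },
    set(next) {
      value = next;
      for (const c of [...subscribers]) c.invalidate();
    },
  };
}

function createComputed(fn) {
  let cached;
  let stale = true;
  const computation = {
    invalidate() { stale = true; },
  };
  return {
    get() {
      if (stale) {
        const prev = currentComputation;
        currentComputation = computation; // capture reads inside fn
        try {
          cached = fn();
        } finally {
          currentComputation = prev;
        }
        stale = false;
      }
      return cached;
    },
  };
}

// Dependencies are discovered by running the getter -- never listed.
const firstName = createCell('Tom');
const lastName = createCell('Dale');
const fullName = createComputed(() => `${firstName.get()} ${lastName.get()}`);

fullName.get(); // 'Tom Dale'
firstName.set('Yehuda');
fullName.get(); // recomputed automatically: 'Yehuda Dale'
```

(A real implementation would also clear stale subscriptions on recompute; the sketch only shows the capture-on-read idea.)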

Reducing Memory Consumption

By default, computed properties cache their values. This is great when a computed property has to perform expensive work to produce its value, and that value gets used over and over again.

But checking, populating, and invalidating this cache comes with its own overhead. Modern JavaScript VMs can produce highly optimized code, and in many cases the overhead of caching is greater than the cost of simply recomputing the value.

Worse, cached computed property values cannot be freed by the garbage collector until the entire object is freed. Many computed properties are accessed only once, but because they cache by default, they take up valuable space on the heap for no benefit.

For example, imagine this component that checks whether the files property is supported in input elements:

import Component from '@ember/component';
import { computed } from '@ember/object';

export default Component.extend({
  inputElement: computed(function() {
    return document.createElement('input');
  }),

  supportsFiles: computed('inputElement', function() {
    return 'files' in this.inputElement;
  }),

  didInsertElement() {
    if (this.supportsFiles) {
      // do something
    } else {
      // do something else
    }
  },
});

This component would create and retain an HTMLInputElement DOM node for the lifetime of the component, even though all we really want to cache is the Boolean value of whether the browser supports the files attribute.

Particularly on inexpensive mobile devices, where RAM is limited and often slow, we should be more conservative about our memory consumption. Tracked properties switch from an opt-out caching model to opt-in, allowing developers to err on the side of reduced memory usage, but easily enabling caching (a.k.a. memoization) if a property shows up as a bottleneck during profiling.

From RFC#566

Terminology Key

  • tracked properties => Signal.State (this proposal)
  • computed => useMemo (React ~ ish)
  • cached => Signal.Computed (this proposal)

Author: @pzuraq


Motivation

One of the major differences between computed properties and tracked properties with autotracking in Octane is that native, autotracked getters do not automatically cache their values, whereas computed properties were cached by default. This was an intentional design choice, as the memoization logic for computed properties was actually more costly, on average, than rerunning the getter in the first place. This was especially true given that computed properties would usually only ever be calculated and used once or twice per render before being updated.

However, there are absolutely cases where getters are expensive, and their values are used repeatedly, so memoization would be very helpful. Strategic, opt-in memoization is a useful tool that would help Ember developers optimize their apps when relevant, without adding extra overhead unless necessary.
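Opt-in memoization of this kind can be sketched with a plain helper; memoize and the call counter below are illustrative stand-ins, not the RFC's eventual cached/@cached API:

```javascript
// Opt-in memoization as a plain helper. Nothing is cached unless the
// author asks for it; illustrative only, not the RFC's actual API.
function memoize(fn) {
  let called = false;
  let cached;
  return () => {
    if (!called) {
      cached = fn();
      called = true; // any expensive intermediates can now be GC'd
    }
    return cached;
  };
}

let computations = 0;

// Stand-in for the RFC example: only the boolean survives; the detached
// input element (simulated here) is not retained for the object's lifetime.
const supportsFiles = memoize(() => {
  computations++;
  const input = { files: [] }; // stand-in for document.createElement('input')
  return 'files' in input;
});

supportsFiles(); // computes once
supportsFiles(); // cache hit; `computations` is still 1
```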

NullVoxPopuli avatar Apr 09 '24 13:04 NullVoxPopuli

What is the relevance of "Reducing Memory Consumption" and "RFC#566 motivation" sections to the auto-tracking discussion?

I do not think wrapping signals in accessor getters is a practice that should be encouraged (auto-tracking, if it does not outright encourage it, makes it very easy). For example, say you want to sync an attribute of a DOM node using signals: how do you do that efficiently without access to the signal?

function syncElementAttr(el, attr, signal) {
  el.setAttribute(attr, signal.get());
  // Needing to create a watcher itself for each synchronization point does not seem very efficient either, but that's a separate issue.
  let w = new Signal.subtle.Watcher(() => {
    el.setAttribute(attr, signal.get());
  });
  return () => w.unwatch();
}

class Foo {
  #isActiveClass = new Signal.State("active");
  // Signal is hidden as an implementation detail.
  get isActiveClass () { return this.#isActiveClass.get(); }
  activate () { return this.#isActiveClass.set("active"); }
  deactivate () { return this.#isActiveClass.set(""); }
}

class Bar {
  // Signal is concretely a Signal
  isActiveClass = new Signal.State("active");
  activate () { return this.isActiveClass.set("active"); }
  deactivate () { return this.isActiveClass.set(""); }
}

const foo = new Foo();
const bar = new Bar();

syncElementAttr(el, "class", foo.isActiveClass); // ERROR foo.isActiveClass is not a signal
// Workaround
syncElementAttr(el, "class", new Signal.Computed(() => foo.isActiveClass));

// VS concrete signal
syncElementAttr(el, "class", bar.isActiveClass);

There are also other alternatives which "Avoid Dependency Hell". Such as:

Signal.Derived((read) => {
  // explicitly track xSignal
  const x = read(xSignal);
  // untracked access
  const y = ySignal.unwrap();
});

Described in more detail in #155

So it seems the main motivation for auto-tracking is: Ability to wrap signals in non-signal constructs and have tracking work implicitly to any signals that may be hidden behind a function call or property getter.

Which, as I've described above, will introduce complications for efficient synchronization mechanisms, and likely for any API that wants to interact with signals directly.

robbiespeed avatar Apr 09 '24 18:04 robbiespeed

I do not think wrapping signals in access getters is a practice that should be encouraged (auto-tracking if not out right encourages it makes it very easy). For example say you want to sync an attribute of a dom node using signals, how do you do that efficiently without access to the signal

I think this might be where our fundamental disagreement is: in DevEx.

As a library dev, how to do we achieve the goal of: "allowing non-signals users to have the same experience as signals users?"

Why push Signal.Derived on non-signals users? or read() functions?

Some ecosystems are looking at exposing everything as functions (like in component props), so they would call props.foo() everywhere instead of props.foo.get() when necessary.

What I really like about the argument for getters is that your consumer doesn't need to, nor should they (imo!), care about whether the API they're using is "Signals" or not -- they just.. access properties on objects.

Taking your example, say we're a library author providing syncElementAttr, we'd implement it like this:

function syncElementAttr(props) {
  props.el.setAttribute(props.attr, props.value);

  let w = new Signal.subtle.Watcher(() => {
    props.el.setAttribute(props.attr, props.value);
  });

  // hopefully a watcher no-ops if nothing reactive is read within
  return () => w?.unwatch();
}

This ends up working for both non-signals and signals users, because the API doesn't care

If we, as part of our library, implement part of a component this way:

class AttributeState {
  #isActiveClass = new Signal.State("active");
  // We don't want consumers to know we use signals
  get value () { return this.#isActiveClass.get(); }
  
  el = someElement;
  attr = 'class';
}

Then usage would be:

syncElementAttr(new AttributeState());

but could also be, for the non-reactive programmers out there:

syncElementAttr({
  el: someElement,
  attr: 'class',
  value: 'something'
});

or if someone had their own reactivity they were working out, or just didn't want to use classes:

syncElementAttr({
  el: someElement,
  attr: 'class',
  get value() {
    return someSignal.get();
  }
});

The key is just lazy access being synonymous with reactivity, I think.

By using properties / getters / etc, we open up more " It's just JS™ " opportunities than if we force any specific API

NullVoxPopuli avatar Apr 09 '24 19:04 NullVoxPopuli

An important reality to contend with is that folks have been using auto tracking signals with accessors in JS at least since Knockout released its ES5 plugin. Durandal supported this from 2012 at least as well. So, 12 years or more. Code in the wild, for better or worse, uses signals this way and library authors and app developers expect that a standard will support pre-existing patterns like this.

I can see adding some mechanism to Signal.Computed to opt into manual dependencies. But I can't see removing auto-tracking. That would force all down-stream code to explicitly use signal APIs for everything. It would be like callback hell, but much worse.

FYI, regarding the attribute sync code above, that's not how I would implement the renderer. I don't think you'd want to use a watcher per DOM part. Instead, you would use a watcher per view/component, or maybe even a single global watcher, depending on what your rendering and ownership models were for the component system. Then, you would have a Computed per DOM part instead.

EisenbergEffect avatar Apr 09 '24 19:04 EisenbergEffect

Taking your example, say we're a library author providing syncElementAttr, we'd implement it like this

@NullVoxPopuli I don't think that works per the current proposal; it says "No signals may be read or written during the notify", which also implies that no signal can be tracked inside the notify callback. If it could, then Watcher would essentially be an effect.

FYI, regarding the attribute sync code above, that's not how I would implement the renderer. I don't think you'd want to use a watcher per DOM part. Instead, you would use a watcher per view/component, or maybe even a single global watcher, depending on what your rendering and ownership models were for the component system. Then, you would have a Computed per DOM part instead.

@EisenbergEffect A watcher per component is not fine-grained reactivity. You can fake it by doing equality checks, like Solid does when it compiles its synchronisation logic to a single effect, but I'm not sure who'd want to write that code by hand. In a world where you only have Watchers/Effects for handling synchronisation, though, that's probably the only efficient path.

robbiespeed avatar Apr 09 '24 21:04 robbiespeed

@robbiespeed I have actually implemented a fine-grained batched renderer exactly as I described above. A single watcher can handle multiple computeds. When the watcher callback is fired, you can schedule work on the microtask queue (or rAF, whatever...). When the task runs, you call Watcher#getPending(). That will return only the dirty computeds. You simply iterate what is dirty and call get() to perform the batch update.
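That flow -- mark dirty, schedule once, drain the pending set, batch-update -- can be sketched with a toy stand-in. None of the names below are the real proposal API; Watcher#getPending() is mimicked by draining a Set, and scheduling is collapsed to an explicit flush() for clarity:

```javascript
// Toy version of the batched-renderer pattern: one "watcher" collects
// dirty computeds; a single flush drains only what is dirty.
function createBatcher(applyUpdate) {
  const pending = new Set();
  return {
    // ~ the Watcher notify callback marking a computed dirty
    markDirty(computed) {
      pending.add(computed);
    },
    // ~ the scheduled microtask draining dirty computeds
    flush() {
      const dirty = [...pending]; // ~ Watcher#getPending()
      pending.clear();
      for (const c of dirty) applyUpdate(c.get()); // pull latest values
    },
  };
}

const log = [];
const batcher = createBatcher((value) => log.push(value));

// Two fake "computeds", one per DOM part.
const partA = { get: () => 'A2' };
const partB = { get: () => 'B1' };

// Repeated invalidations of the same part coalesce into one update.
batcher.markDirty(partA);
batcher.markDirty(partA);
batcher.markDirty(partB);
batcher.flush(); // log is now ['A2', 'B1']
```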

EisenbergEffect avatar Apr 09 '24 21:04 EisenbergEffect

I don't think that works per the current proposal it says

yeah, I mean, for brevity, I just copied the example from your comment <3

For an implementation of effects, I took this approach here: https://github.com/NullVoxPopuli/signal-utils/blob/main/src/subtle/microtask-effect.ts#L8

// `pending` and `flushPending` are defined earlier in the linked file
let pending = false;

let watcher = new Signal.subtle.Watcher(() => {
  if (!pending) {
    pending = true;
    queueMicrotask(() => {
      pending = false;
      flushPending();
    });
  }
});

(which is from Implementing effects )

NullVoxPopuli avatar Apr 09 '24 21:04 NullVoxPopuli

@EisenbergEffect so in that implementation the computed callbacks would contain synchronization logic? If so, I can see how that would work. It's still not ideal that each sync point requires an additional computed wrapper, but it does solve the issue of multiple watchers.

In Metron each atom is both a signal and an emitter. so you can subscribe directly to the signal and implement syncing like this:

export function syncElementAttribute(
  element: Element,
  name: string,
  atom: Atom<unknown>
): Disposer {
  element.setAttribute(name, atom.unwrap());

  return atom[EMITTER].subscribe(() => {
    element.setAttribute(name, atom.unwrap());
  });
}

robbiespeed avatar Apr 10 '24 00:04 robbiespeed

Yes, I have something like this:

export function attrEffect(owner: View, ref: Element, getValue: () => string | null | undefined, attr: string) {
  owner.runEffect(() => {
    const value = getValue();

    if (value === null || value === void 0) {
      ref.removeAttribute(attr);
    } else {
      ref.setAttribute(attr, value);
    }
  });
}

Where runEffect() is something like this:

runEffect(callback: () => void) {
  const computed = new Signal.Computed(callback);
  this.watch(computed);
  computed.get();
}

EisenbergEffect avatar Apr 10 '24 21:04 EisenbergEffect

Note that getters vs. functions is something users may always have to know about: if they use object spread or Object.assign, the result will be a static value, and they'll lose the reactivity.

By forcing use of a function, the user has to explicitly choose when to lose reactivity with ().
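That snapshot behavior can be demonstrated with a plain getter -- no signal library involved; the names below are purely illustrative:

```javascript
// A getter stays live; spreading snapshots it. Plain JS, no signals.
let current = 'active';

const source = {
  get value() {
    return current; // re-evaluated on every access
  },
};

// Spread invokes the getter once and stores a plain data property.
const copy = { ...source };

current = 'inactive';

source.value; // 'inactive' -- still live
copy.value;   // 'active'   -- frozen at spread time
```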

ljharb avatar Apr 10 '24 21:04 ljharb

the user has to explicitly choose when to lose reactivity

A user has to choose when to lose reactivity on property access, too. In a reactive system, you cannot know if everything is following the same rules, so all access and invocations must be as lazy/delayed as possible -- and given that, property access is actually quite nice, because code ends up looking like all other code (since property access is common for accessing property-like data).

NullVoxPopuli avatar Apr 10 '24 21:04 NullVoxPopuli

I think that's my concern with it, and with getters themselves - code that behaves not like other code shouldn't look like other code. Different things should look different.

ljharb avatar Apr 10 '24 21:04 ljharb

I understand the concern -- however, as an app-dev (which I'm normally not! 😅), we have 0 control over how consistently consumed libraries are authored, so delaying access / being lazy about it is the only sure way to ensure reactivity is maintained

NullVoxPopuli avatar Apr 10 '24 21:04 NullVoxPopuli

we have 0 control over how consistently consumed libraries are authored, so delaying access / being lazy about it is the only sure way to ensure reactivity is maintained

Can you elaborate on this?

With promises if something is async you know it's async, because it's a promise. I see signals as fundamentally similar: you know a thing is reactive, because it's a signal. We use async/promises every day in app code and there's no such issues. From my experience using explicit signals is just as easy if not easier than async/promises.

robbiespeed avatar Apr 10 '24 22:04 robbiespeed

Can you elaborate on this?

ye! so, libraries can basically do whatever they want; consumers of those libraries cannot change how the library author wants their own API to be used.

import { foo } from 'the-library';

foo.bar

when accessing bar on foo, we have no idea if it's meant to be reactive or not, and we can't know unless we were to do instanceof Signal.State checks (else we could accidentally match something that "looks like" a signal, by virtue of happenstance in "close enough"ly matching the interface).

when using foo.bar in UI, it might look something like this:

import { foo } from 'the-library';

const bar = foo.bar;

<template>
  {{bar}}
</template>

This will always detach reactivity, whether bar was expected to be reactive or not.

So the app-dev must delay access so that the laziest thing possible happens:

import { foo } from 'the-library';

<template>
  {{foo.bar}}
</template>

This is reactive, because within the <template></template> tags, we know that a Component / UI or something will be rendered, and it knows how to watch for changes and update accordingly.

Likewise, if foo.bar is not meant to be reactive, we don't care, the end-user way of interacting with the-library is the exact same -- which, imo, is less cognitive load.

That said, we have the same issues with functions:

import { foo } from 'the-library';

const bar = foo.bar();

<template>
  {{bar}}
</template>

this detaches from reactivity, the same as before, so the user has to do

import { foo } from 'the-library';

<template>
  {{foo.bar()}}
</template>

And what I mean by

as an app-dev, we have 0 control over the consistency

We could end up with situations like:

import { foo } from 'the-library';
import { hi } from 'another-library';

<template>
  {{foo.bar()}}
  {{hi.there}}
</template>

where both the the returned value from foo.bar() and hi.there is reactive, but the style of access is different -- ultimately we don't care, because we know that if we delay access to the last possible moment, we are guaranteed reactivity, if it exists.

We use async/promises every day in app code and there's no such issues.

it's not at all the same as async, because async forces decisions about how to access values at every call site, and it propagates upward (it's more of a "virus", if you will) -- or you end up having detached promises somewhere with an eventual event emission.

With signals, the impact is localized.

The impact is localized, because I can do something like:

class State {
  get value() {
    return aSignal.get();
  }
}

and now anyone accessing new State().value has no idea it's a Signal, and rightfully so -- as a library maintainer, I want to keep this as private API in case I need to change something, or migrate to a different data structure -- because if I, as a library maintainer, expose Signals directly, I'm on the hook to maintain that get/set interface for all of time (else more frequent breaking releases).

Additionally, there is a semantic difference -- the value of a signal, the result of a get() (direct or indirect) is intended to mean "this is always the latest value of this source of truth" -- a declarative way to represent "current".

promises, and async functions in general, are operations : "do this, give me the result when you're ready" -- there is no "current and always current value of a promise". Tho, you can kind of model a promise in that way with nested Signals, which I've done here: https://github.com/NullVoxPopuli/signal-utils/?tab=readme-ov-file#async-function with signalFunction:

import { Signal } from 'signal-polyfill';
import { signalFunction } from 'signal-utils/async-function';

const url = new Signal.State('...');
const signalResponse = signalFunction(async () => {
  const response = await fetch(url.get()); // entangles with `url`
  // after an await, you've detached from the signal auto-tracking
  return response.json(); 
});

// output: true
// after the fetch finishes
// output: false
<template>
  <output>{{signalResponse.isLoading}}</output>
</template>

Accessing any of these properties will always represent the "current and always current" value for:

  • value
  • error
  • isResolved
  • isPending
  • isRejected (etc)

:thinking: hope this helps! (clear as mud? :sweat_smile: )

NullVoxPopuli avatar Apr 10 '24 23:04 NullVoxPopuli

when accessing bar on foo, we have no idea if it's meant to be reactive or not, and we can't know unless we were to do instanceof Signal.State checks (else we could accidentally match something that "looks like" a signal, by virtue of happenstance in "close enough"ly matching the interface).

I'm guessing this is speaking in terms of if signals were explicit and couldn't be wrapped by a getter?

With an explicit signal (no auto-tracking) you could always be assured that this would work whether bar is a value or a signal. There's no issue with passing things around or destructuring. If the renderer sees a signal in a slot then it can set up synchronization logic, if it sees a raw value then it knows the value will remain static and will render once.

import { foo } from 'the-library';

const { bar } = foo; // could be value or Signal<value> but it doesn't matter since the renderer can handle both.

<template>
  {{bar}}
</template>
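A renderer that branches on "value vs. signal" this way can be sketched as follows; the tiny signal factory and the duck-typed isSignal check are stand-ins for illustration, not the proposal's API:

```javascript
// Sketch of "the renderer can handle both": a slot may hold a raw value
// or an explicit signal. Stand-in API, not the proposal's.
function signal(value) {
  const listeners = new Set();
  return {
    get: () => value,
    set(next) {
      value = next;
      listeners.forEach((fn) => fn());
    },
    subscribe(fn) {
      listeners.add(fn);
      return () => listeners.delete(fn);
    },
  };
}

const isSignal = (x) =>
  x !== null && typeof x === 'object' && typeof x.subscribe === 'function';

function renderSlot(slot, write) {
  if (isSignal(slot)) {
    write(slot.get()); // initial render
    return slot.subscribe(() => write(slot.get())); // keep in sync
  }
  write(slot);     // raw value: render once, statically
  return () => {}; // nothing to dispose
}

let output;
renderSlot('static', (v) => (output = v)); // renders once
const cls = signal('a');
renderSlot(cls, (v) => (output = v)); // renders and subscribes
cls.set('b'); // output updates to 'b'
```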

where both the the returned value from foo.bar() and hi.there is reactive, but the style of access is different -- ultimately we don't care, because we know that if we delay access to the last possible moment, we are guaranteed reactivity, if it exists.

That delayed access is a self imposed limit by allowing signals to be hidden behind a getter in the first place.

it's not at all the same as async, because async forces decisions about how to use access values at every call site, and propagates upward (it's more of a "Virus", if you well), or you end up having detached promises somewhere with an eventual event emission.

Yes, async does colour functions. It would be much worse if JS implicitly awaited any promise or async call, since you'd have no ability to tell what part of your code will be sync or async. Which seems a lot like what auto-tracking allows, though admittedly with fewer consequences for those who wish to be in an auto-tracked environment. For those who don't want auto-tracking, though, it becomes tricky, because there's no longer any way to tell what will be reactive.

and now anyone accessing new State().value has no idea it's a Signal, and rightfully so -- as a library maintainer, I want to keep this as private API in case I need to change something, or migrate to a different data structure -- because if I, as a library maintainer, expose Signals directly, I'm on the hook to maintain that get/set interface for all of time (else more frequent breaking releases).

Side Note: This is a good example of why signal read and write access should be split by default. If .get() were all the signal interface exposed, there would be no issue of over-exposing access.

Are you proposing that with auto-tracking you might have a reactive getter, and later change it to a non-reactive value or vice versa? That sounds like a breaking API change to me, and is even more reason why Signals should be explicit; it would be bad if people went around changing the reactive properties of something without marking that as a breaking change.

robbiespeed avatar Apr 11 '24 00:04 robbiespeed

I think auto-tracking vs. non-auto-tracking dependencies should be a different issue than tracking vs. non-tracking contexts.

What I mean is, for the sake of developer ergonomics, I may agree that auto-tracking is a good trade-off. That is to say, this syntax:

const foo = new Computed(() => signalA.get() + signalB.get());

effect(() => console.log(signalA.get()));

It's not my preferred API personally, I would rather an explicit API using derived or combinators- but it's a reasonable proposal with sensible tradeoffs.

However, this really does not jive with me and I think it should be a different discussion entirely:

// runs once if third party doesn't use effect
foo(() => signalA.get())

// runs multiple times if third party does use effect
foo(() => signalA.get())

// runs once even if third party does use effect
foo(() => console.log(""))

As an application developer, I find this confusing and hard to reason about. I think that even if the proposal does land on auto-tracking, which I am not against (I just think there should be better documentation of the tradeoffs), it should throw an exception whenever .get() is called outside of a tracking context, and instead, a new sample() / peek() / getCurrent() / unwrap() method should be added to the proposal which definitively only gets the current value and may be called everywhere.
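That suggested semantics can be sketched in plain JavaScript; strictSignal, tracked, and peek below are hypothetical names for illustration, not part of the proposal:

```javascript
// Sketch: get() is only legal inside a tracking context; peek() is
// legal anywhere. All names here are hypothetical.
let inTrackingContext = false;

function strictSignal(value) {
  return {
    get() {
      if (!inTrackingContext) {
        throw new Error('get() called outside a tracking context; use peek()');
      }
      return value;
    },
    peek: () => value, // explicitly untracked read, allowed everywhere
  };
}

function tracked(fn) {
  const prev = inTrackingContext;
  inTrackingContext = true; // a real implementation would also record reads
  try {
    return fn();
  } finally {
    inTrackingContext = prev;
  }
}

const count = strictSignal(42);

count.peek();               // 42 -- fine anywhere
tracked(() => count.get()); // 42 -- fine inside a tracking context
// count.get();             // would throw: no tracking context active
```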

@NullVoxPopuli - As discussed on discord, I do think this boils down to a fundamental difference of view. I don't view async/await as a "virus that infects everything". I see it as syntax sugar that allows writing procedural code over values that do not exist yet (similar to "do notation" in Haskell), and it really needs that distinction to work. Signals could have a similar notation around effect/get, and it would be more like an async iterable than a promise... that would be neat.. but that would be yet another separate issue to discuss :)

Though, I do think the fact that it would be an async (or "effect") iterable and not a regular iterable is illustrative. It would be impossible to implement it as a regular iterable because it makes no sense for signals to have "multiple values over time at once". It's either current value, or the values over time, not both simultaneously... having a method that wants to be both simultaneously seems like a similar footgun to me.

To summarize, I see totally separate discussion points here:

  1. auto-tracked vs. non-auto-tracked dependencies. I don't have a horse in that race personally. I'd prefer non-auto-tracked so I have more control over performance, maybe, but auto-tracking is less verbose and I'd accept that the tradeoff is worth it.

  2. behavior of get in tracking vs. non-tracking contexts. I strongly believe it should be an exception to call this in non-tracking contexts, and disambiguated from "getCurrent" which should be allowed everywhere. I see async/await as good prior art here. Similar to .then(), it might make sense to add .effect() as a helper method to eagerly create a new tracking context for side effects.. idk.. I think more fruitful discussion can be had for how to make it ergonomic once the fundamentals are sound.

Of course, it's just my opinion.. how's the saying go, everybody's got em :)

dakom avatar Apr 11 '24 05:04 dakom

@EisenbergEffect something just occurred to me with your synchronization code. How do you determine what values need to be wrapped in an effect for synchronization vs non-reactive values that can be statically rendered once?

It seems like with auto-tracking the only way to determine if something is going to be reactive is that it must be wrapped in a computed. Is there another way? This would absolutely kill the performance of my rendering logic if I needed to wrap a computed around each value (reactive or not) going into a template/component.

robbiespeed avatar Apr 12 '24 01:04 robbiespeed

@robbiespeed I haven't finished implementing that yet. But there are a couple of things I'm playing with. Most likely, the effect helper I showed above will be parameterized so that for static scenarios, it just runs the function directly instead of passing it to the runEffect method on the owner (there are a few more details, but that's the basic idea). What I'm working on involves a compiler that can detect static values vs. expressions, so it can emit different code. I'll probably also have a way for the template author to indicate that an expression, whether signal backed or not, should not be watched but just be treated as a static value. I'm not trying to magically detect whether signals are involved or not. I'm operating on the principle that it's the template author that makes the decision about static vs. dynamic, based on the needs of the view.

EisenbergEffect avatar Apr 12 '24 03:04 EisenbergEffect

Edit: is this issue specifically "auto-tracking of properties"? Or is it the rationale of autotracking in general

jeff-hykin avatar Apr 21 '24 19:04 jeff-hykin