Easier `transformValue` overriding
Easier to extend the transform functionality without much hassle. In my particular use case I had to send more params to `this._schema.transformValue`, like `_topModel`.
The reality is that I have a structured schema with type information for each attr (a JSON schema), so each Megamorphic topModel has a JSON schema, and all the transforms and default values must be read from this schema per access. So my method call reads like `this._schema.transformValue(this._modelName, key, value, this._topModel.schemaVersion)`, and I would probably also favor `this._schema.getDefaultValue(this._modelName, key, this._topModel.schemaVersion)`.
```js
_getDefaultValue(modelName, key) {
  return this._schema.getDefaultValue(modelName, key);
}

_transformValue(modelName, key, value) {
  return this._schema.transformValue(modelName, key, value);
}
```
So now I can easily do:
```js
MegamorphicModel.reopen({
  _getDefaultValue(modelName, key) {
    return this._schema.getDefaultValue(modelName, key, this.schemaVersion);
  },

  _transformValue(modelName, key, rawValue) {
    if (this._schema.shouldTransform(key)) {
      return this._schema.transformValue(modelName, key, rawValue, this.schemaVersion);
    }
    return rawValue;
  }
});
```
Update: I guess I didn't see this coming, but sending the topModel and reading from it causes a stack overflow... closing for now while I figure out how to better handle this...
Update: I did end up using this, but I also added a `transformValue` attrName exclude list, so for some attrNames the `transformValue` function won't be called. For my particular use case I had to access a prop from the topModel inside the `transformValue` function, which triggered `unknownProperty` again, leading to infinite recursion. So what I wanted was a way to avoid calling `transformValue` for certain attrNames.
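A rough sketch of what such an exclude list could look like. `EXCLUDED_ATTRS` and the wrapper function are hypothetical names for illustration, not ember-m3 API:

```js
// Hypothetical: skip transforms for attrs whose transform would read
// topModel properties and re-enter unknownProperty (infinite recursion).
const EXCLUDED_ATTRS = new Set(['schemaVersion', 'topModelMeta']);

function transformValueSafely(schema, modelName, key, value) {
  // Excluded attrNames are returned raw, so reading them inside a
  // transform never triggers another transform pass.
  if (EXCLUDED_ATTRS.has(key)) {
    return value;
  }
  return schema.transformValue(modelName, key, value);
}
```

The set only needs to contain the attrNames that transforms themselves read, which keeps the escape hatch small.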
@betocantu93 thanks for digging in to this.
The current PR implementation adds a couple of `_fn` extension points: we should stick to the convention of treating functions with a leading `_` as private. Also, I'd like to avoid having the model class itself be an extension point.
That said, the problem you're raising makes total sense: you need more context than is currently provided to the `schema.models` section for default values and transforms.
There's also the second problem of there not being a public API for getting the top model of a nested model.
Both of these seem to me to be well worth solving.
For the first problem, the best solution I see is to deprecate specifying defaults and transforms on `schema.models` and instead have users do these actions within `computeAttribute`.
This should already work today for defaults. Transforms are currently run eagerly, but this could easily be moved into the else branch for the old compute hooks.
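A minimal sketch of that proposed direction, with defaults and a lazy transform handled inside `computeAttribute`. The concrete checks and the `dateTransform` helper are illustrative assumptions, not the shipped API:

```js
// Assumed example: defaults and transforms live in computeAttribute
// instead of schema.models, so they run per access (lazily).
const dateTransform = raw => new Date(raw);

const schema = {
  computeAttribute(key, value, modelName, schemaInterface) {
    if (value === undefined) {
      // default values: return whatever the schema considers the default
      return key === 'tags' ? [] : null;
    }
    if (key === 'timestamp') {
      // transforms: applied here instead of eagerly at push time
      return dateTransform(value);
    }
    return value;
  },
};
```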
@betocantu93 what do you think?
@hjdivad
> The current PR implementation adds a couple of `_fn` extension points: we should stick to the convention of treating functions with a leading `_` as private. Also I'd like to avoid having the model class itself be an extension point.
>
> That said the problem you're raising makes total sense: you need more context than is currently provided to the `schema.models` section for default values and transforms.
>
> There's also the second problem of there not being a public API for getting the top model of a nested model.
>
> Both of these seem to me to be well worth solving.
>
> For the first problem the best solution that I see is to deprecate specifying defaults and transforms on `schema.models` and instead have users do these actions within `computeAttribute`
I agree, centralizing everything within `computeAttribute` and deprecating the eager transform makes sense to me.
I also think that in this truly dynamic world, `computeAttribute` should receive a little more context than just the rawValue. I don't know if I'm biased by my use case, but maybe `_topModel` would be enough extra context within `computeAttribute`. Again, that's just to solve my requirement: "every _topModel belongsTo a DS.Model, which has more typed info".
Also, I think `schema.models` transforms work great for known models and attrs, but for true dynamism they're not really enough... I had to do some tricks inside the serializer, like flattening nested objects (to reduce memory usage) and encoding the keys to get some deep transforms going:
```js
{
  extras: {
    prop1: {
      pictures: ['url'],
      price: 200.50,
      timestamp: "2020/10/12"
    },
    prop2: {
      pictures: ['url'],
      price: 200.50,
      timestamp: "2020/10/12"
    }
  }
}
```
to
```js
{
  "extras::prop1::pictures[array]": [],
  "extras::prop1::price[number]": 200.50,
  "extras::prop1::timestamp[date]": "2020/12/10",
  "extras::prop2::price[number]": 200.50,
  "extras::prop2::pictures[array]": [],
  "extras::prop2::timestamp[date]": "2020/12/10"
}
```
Ignoring the flattening, I think some kind of convention to apply transforms on dynamic keys, maybe using regex, could be great (I understand this is not a silver bullet and could be error-prone).
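One hypothetical shape for such a regex convention, matching the `[type]` suffix from the encoded keys above. The `TRANSFORMS` table and function names are illustrative, not ember-m3 API:

```js
// Assumed example: map a regex over the encoded dynamic key
// (e.g. "extras::prop1::price[number]") to pick a transform.
const TRANSFORMS = [
  [/\[number\]$/, v => Number(v)],
  [/\[date\]$/, v => new Date(v)],
  [/\[array\]$/, v => (Array.isArray(v) ? v : [])],
];

function transformDynamicKey(key, value) {
  for (const [pattern, fn] of TRANSFORMS) {
    if (pattern.test(key)) {
      return fn(value);
    }
  }
  // Unrecognized keys pass through untouched.
  return value;
}
```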
So I think:
- More context should be added to the `computeAttribute` hook.
- By deprecating `schema.models` transforms and defaults, I see two options:
  1. Create a `transformValue` function on `m3-schema` which runs before `computeAttribute` but with more context, and pass the result to `computeAttribute`, renaming the value concept to a more visible transformedValue (this is really similar to what is happening right now, but maybe a little clearer and with more context in the signature):
     `transformValue(modelName, attrName, value, topModel, type = 'deserialize')`
     `computeAttribute(key, transformedValue, modelName, schemaInterface, topModel)`
  2. Just show some patterns and examples of how to apply transforms inside `computeAttribute`.
- I found it useful to augment the current `transformValue` to receive an argument `type`, which defaults to `deserialize`, so I can also call this function inside the serialization process but with `serialize`.
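A minimal sketch of that direction-aware `transformValue`. The `transforms` map and the per-attr lookup are hypothetical stand-ins for however the schema resolves a transform:

```js
// Assumed example: one transform object per attr type, with paired
// deserialize/serialize directions selected by the `type` argument.
const transforms = {
  date: {
    deserialize: raw => new Date(raw),
    serialize: d => d.toISOString(),
  },
};

function transformValue(modelName, attrName, value, topModel, type = 'deserialize') {
  // Resolve the transform for this attr; identity when none exists.
  const transform = attrName === 'timestamp' ? transforms.date : null;
  return transform ? transform[type](value) : value;
}
```

The same function can then be called from the serializer with `type = 'serialize'`, keeping both directions next to each other.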
I'm very +1 on having a smaller API surface area via `computeAttribute`, and on providing that function more context, especially indirectly via `schemaInterface`.
For top model access, maybe something like `schemaInterface.topModel()`.
So then you'd:
```js
// m3-schema
{
  computeAttribute(key, value, modelName, schemaInterface) {
    let topModel = schemaInterface.topModel();
    // ...
  }
}
```
If you can do everything in `computeAttribute`, then `schema.models` can serve in documentation as an example of a clean way to handle simple schemas, with `computeAttribute` as the hook where you add more sophistication.
@betocantu93 does 👍 make sense to you?
If you have cycles to work on this, I'd be happy to review moving the API in this direction.
@hjdivad just listing my understanding:
- Deprecate `transformValue`
- Add more context to `computeAttribute` via `schemaInterface`
- Add a clear example of how `schema.models` and transforming could work via `computeAttribute`, i.e.
```js
// m3-schema
{
  computeAttribute(key, value, modelName, schemaInterface) {
    return myVeryOwnTransformsImplementation(
      key,
      value,
      modelName,
      schemaInterface.topModel(),
      get(this, `models.${modelName}.transforms.${key}`)
    );
  }
}
```
- Add backwards compatibility for the old APIs.
Sounds good to me.
I'll get a draft PR going and see how it goes. Thanks for your feedback!
@betocantu93 sounds like a plan; looking forward to the PR ❤️
@betocantu93 did you get a chance to work on the PR?
Hey @hjdivad, sorry for the extended silence/inactivity. Unfortunately no, I've been pretty busy lately with a major product release, which includes releasing ember-m3 to production (so happy about this), but I do plan to tackle this soon! It's in the back of my head because it's literally in our core now. I'm going to try to get a PR going this weekend.
@betocantu93 no worries, I totally understand. Looking forward to the PR when you get some time to get back to it.