Idea: Asynchronous (de)serialization?
I know that the ability to work asynchronously would be a fundamental change/improvement, but I wanted to present this idea anyway. Any async API would have to be separate from the regular synchronous one, and asynchronous definitions would not be handled by the synchronous parser.
Scenario: Input object is from Mongoose or any other ORM/DB framework
class Post {
    /* ... */
}

class Thread {
    @t.array(Post).deserializeAsync(async obj => { await obj.populate('posts'); return obj.posts; })
    posts: Post[];
}
Now, a similar syntax could be useful for https://github.com/deepkit/deepkit-framework/issues/18:
.deserialize(functor)
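For instance (again purely illustrative, since neither decorator exists yet):

class Thread {
    // a synchronous custom deserializer hook, as discussed in issue #18
    @t.array(Post).deserialize((value: unknown) => (Array.isArray(value) ? value : [value]))
    posts: Post[] = [];
}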
This violates separation of concerns and is thus very unlikely to happen. Fetching data should happen before serialization. Why do you need that process while serializing? You can always write your own function that does the populating and then serializes, so there's no need to build that into the serializer itself.
Your example issues a DB query for each item to serialize, which would be incredibly slow. It's better to load the posts eagerly, fetching everything in one query, and then let the serializer use the model as-is.
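For example, a minimal sketch of such a wrapper (assuming a Mongoose document with a promise-returning populate(), and @deepkit/type's synchronous classToPlain):

import { classToPlain } from '@deepkit/type';

// All async work (populating) happens up front; the serializer itself
// stays synchronous.
async function threadToPlain(doc: any): Promise<Record<string, unknown>> {
    await doc.populate('posts');
    return classToPlain(Thread, doc);
}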
Deepkit also contains a very fast (in fact, by far the fastest) ORM/ODM for MongoDB which reuses the type annotations already declared on your models, so it might be worth a look if you don't want to fiddle around with TypeORM and deepkit/types.
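Roughly like this (a sketch based on the v1 examples; the exact API may differ, see the docs):

import { entity, t } from '@deepkit/type';
import { Database } from '@deepkit/orm';
import { MongoDatabaseAdapter } from '@deepkit/mongo';

@entity.name('post')
class Post {
    @t.primary.mongoId _id!: string;
    @t title: string = '';
}

// the ORM reuses the @t annotations above for querying and (de)serialization
const database = new Database(new MongoDatabaseAdapter('mongodb://localhost/app'), [Post]);

async function findPosts(): Promise<Post[]> {
    return database.query(Post).find();
}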
You're probably right. The way I am trying to use @deepkit/types currently is more of a serialization framework than a two-way marshalling solution. And I am using @nestjs/mongoose. It's a codebase that's a few years old, so getting rid of Mongoose completely is not an easy option. @nestjs/mongoose at least gives me a more modern, less redundant way to declare the types. This is with class-transformer (which I hope to replace with @deepkit/types).
import { Schema as MongooseSchema, Types } from "mongoose";
import { Prop, Schema, SchemaFactory } from "@nestjs/mongoose";
import { Expose } from "class-transformer";
import { ExposeId, ExposeIdOrType } from 'server/lib/dto';

@Schema({ timestamps: true, discriminatorKey: 'type' })
export class Revision extends mixinTimestamps(BaseEntityRevision) {
    @Prop({ type: MongooseSchema.Types.ObjectId, index: true, ref: 'Entity' })
    @ExposeId() // <- custom decorator that works around issues with ObjectId and class-transformer
    entity!: Types.ObjectId;

    @Prop({ type: MongooseSchema.Types.ObjectId, ref: 'User', index: true })
    @ExposeIdOrType(ResolvedAuthor) // <- custom decorator that supports the author being either populated or not
    author?: EntityAuthor;

    // we want this field only for the single-revision REST endpoint;
    // for any use case where multiple revisions are fetched, `withData` will be set to false
    @Prop()
    @Expose({ groups: ['withData'], toClassOnly: true })
    data!: string;

    // this field is never serialized - it is completely private, used for some
    // data processing - hence the lack of any expose decorators
    // (renamed here from the duplicate `data` to avoid clashing with the field above)
    @Prop({ type: Map, of: { etag: String, objectId: String, mimetype: String, contentHash: String } })
    dataObjects!: Map<string, {
        etag: string,
        objectId: string,
        mimetype: string,
        contentHash: string,
    }>;
}
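For the curious, ExposeId could plausibly be implemented along these lines (a simplified sketch, not my exact code):

import { Expose, Transform } from 'class-transformer';
import { Types } from 'mongoose';

// Plausible sketch of such a decorator: expose the field and convert
// ObjectId values to hex strings when transforming to plain objects.
function ExposeId(): PropertyDecorator {
    return (target, propertyKey) => {
        Expose()(target, propertyKey);
        Transform(
            ({ value }) => (value instanceof Types.ObjectId ? value.toHexString() : value),
            { toPlainOnly: true },
        )(target, propertyKey);
    };
}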
import { plainToClass } from 'class-transformer';

const schema: MongooseSchema<RevisionDocument> = SchemaFactory.createForClass(Revision);

schema.methods.toDTO = function (this: Document, groups?: { withData: boolean }) {
    const groupsArray = groups ? Object.keys(groups).filter(key => groups[key as keyof typeof groups]) : [];
    const dtoObject = plainToClass(Revision, this, { groups: groupsArray, excludeExtraneousValues: true });
    // at the API response boundary this then goes through classToPlain
    return dtoObject;
};
As you can see above, I can reuse the same class for declaring the Mongoose schema while keeping the interface fields in sync with the schema. This also gives me a way to serialize only the fields that are needed by the use case:
// the below is a simplified example - it lacks authorization etc.
@Injectable()
class RevisionsService {
    /* ... */
    async getSingleRevision(id: string) {
        const revision = await RevisionDB.findById(id).populate('author', '_id name');
        return revision.toDTO({ withData: true }); // <- serializes with both the data and the populated author
    }

    async listRevisions() {
        const revisions = await RevisionDB.find(); // note - no populate
        return revisions.map(revision => revision.toDTO({ withData: false })); // <- serializes without the data; the author is just the id
    }
}
Now - would I want to have the same framework so that I could be more efficient? YES! I can look into Deepkit for handling the ORM part. It would be nice if it could, for example, automatically figure out which things to populate depending on the set of groups needed by a specific use case. For example:
- Hey Deepkit, I need the Revision document with the author populated and all data fields present. Please construct an appropriate query and return me the "DTO" object.
- Hey Deepkit, I need the Revision document with the author as an id and no data fields, please. Please construct an optimal query so that needless data is not fetched from the DB.
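In code, that wish might look something like the following (entirely hypothetical; none of these query methods exist in Deepkit, they just sketch the idea):

// Entirely hypothetical API: group-driven fetching where populates and
// field selection are derived from the serialization groups.
const dto = await database.query(Revision)
    .filter({ id })
    .withGroups('withData')   // hypothetical: would imply populate('author') and selecting `data`
    .findOneDTO();            // hypothetical: would return the serialized DTO directly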
BUT this also leaves the question of private fields. Sometimes I need the private fields in order to perform operations on the documents, or to access external data stores and produce byproducts of the documents.
Because of these private fields, I think serialization is kind of a separate problem. Maybe some fields could be declared as `{ classToPlain: false }`, to use class-transformer's vocabulary. :)
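Incidentally, class-transformer can already express that with @Exclude (the field name below is just an example):

import { Exclude } from 'class-transformer';

class Revision {
    // never emitted when serializing to plain, still accepted when deserializing
    @Exclude({ toPlainOnly: true })
    internalState!: string;
}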
Anyway - sorry for the lengthy post, but I wanted to explain myself a little better. Again, awesome project. I love the JIT approach.
Async de/serialization is not going to happen as it would break many APIs and core functionality. I'm going to close this, but we can address this again in the future if we find a strong case in favor of it.