Use an ORM like https://entgo.io/ instead of writing SQL manually
Describe the solution you'd like
Currently, memos supports SQLite, MySQL, and PostgreSQL, but the target SQL is written separately for each driver. We could use an ORM such as https://entgo.io/ to simplify the DB logic and get rid of the manually written SQL.
Type of feature
Other
Additional context
I would rewrite the MySQL driver as a POC if this is accepted.
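For illustration only: with ent, each table is described once as a Go schema, and ent generates the per-dialect SQL plus a type-safe query API for SQLite, MySQL, and PostgreSQL. A minimal sketch of what such a schema could look like (the entity name and fields below are assumptions for this example, not the final design):

```go
package schema

import (
	"entgo.io/ent"
	"entgo.io/ent/schema/field"
)

// Activity is a sketch of a schema definition for the activity table.
type Activity struct {
	ent.Schema
}

// Fields of the Activity entity; ent generates the table definitions and
// typed query builders for every supported dialect from this one definition.
func (Activity) Fields() []ent.Field {
	return []ent.Field{
		field.Int("creator_id"),
		field.String("type"),
		field.String("level"),
		field.Bytes("payload"),
		field.Int64("created_ts"),
	}
}
```

Queries would then be generated Go calls (e.g. `client.Activity.Query().All(ctx)`) instead of hand-written SQL strings per driver.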
Although it's a bit more complicated to write SQL for each driver, the execution/migration stays open and transparent. IMO, writing raw SQL is better than using an ORM.
But you can try adding a new storev1 alongside the current logic to implement a store that uses an ORM, and we'll see how it works.
@boojack I was working on https://github.com/usememos/memos/pull/2855#issuecomment-1915945653 and got quite stuck on the DB part. I would say the current migration strategy and manual updates are quite tricky, and (if you agree) this might be a good time to take the rare opportunity to make the DB part a bit more polished.
In case you don't like ORMs (a view I personally sympathize with), may I suggest the approach I have used in other projects: sqlc + sql-migrate.
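For reference, sqlc generates typed Go query code from annotated .sql files, and sql-migrate (github.com/rubenv/sql-migrate) runs plain SQL migration files. A minimal sketch of the sql-migrate side, assuming an illustrative migrations/ directory and SQLite driver (not memos' actual layout):

```go
package main

import (
	"database/sql"
	"log"

	migrate "github.com/rubenv/sql-migrate"
	_ "modernc.org/sqlite"
)

func main() {
	db, err := sql.Open("sqlite", "memos.db")
	if err != nil {
		log.Fatal(err)
	}
	// Each file in ./migrations is plain SQL with "-- +migrate Up" and
	// "-- +migrate Down" sections; sql-migrate tracks applied versions.
	migrations := &migrate.FileMigrationSource{Dir: "migrations"}
	n, err := migrate.Exec(db, "sqlite3", migrations, migrate.Up)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("applied %d migrations", n)
}
```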
However, manual queries (even with sqlc) will still require duplicate work for each supported DB, as well as duplicated data structures.
@reddec Of course, we can use your design or an ORM (as long as it solves the problem correctly and works well). But it's best to do this in a new package and not change the existing store.
I recently started a project that borrows from the memos project structure. It's great and saves me tons of time, but in the store part I found a lot of duplicate code, and I often had to copy code between drivers.
Then I noticed that each driver contains a *sql.DB. Go already handles the differences between drivers through it; the drivers just call APIs like QueryContext, ExecContext, and so on.
For the different drivers (all based on *sql.DB), the only real difference is the SQL and args to be executed, while the rest of the scheduling logic is the same. So I thought it might be possible to lift the scheduling logic from the drivers up into the store, which would cut down a lot of duplicate code.
For example, the current ListActivities looks like this:
```go
// interface
type Driver interface {
	// ...
	ListActivities(ctx context.Context, find *FindActivity) ([]*Activity, error)
}

// store
func (s *Store) ListActivities(ctx context.Context, find *FindActivity) ([]*Activity, error) {
	return s.driver.ListActivities(ctx, find)
}

// db/sqlite
func (d *DB) ListActivities(ctx context.Context, find *store.FindActivity) ([]*store.Activity, error) {
	where, args := []string{"1 = 1"}, []any{}
	if find.ID != nil {
		where, args = append(where, "`id` = ?"), append(args, *find.ID)
	}
	if find.Type != nil {
		where, args = append(where, "`type` = ?"), append(args, find.Type.String())
	}
	query := "SELECT `id`, `creator_id`, `type`, `level`, `payload`, `created_ts` FROM `activity` WHERE " + strings.Join(where, " AND ") + " ORDER BY `created_ts` DESC"
	rows, err := d.db.QueryContext(ctx, query, args...)
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	list := []*store.Activity{}
	for rows.Next() {
		activity := &store.Activity{}
		var payloadBytes []byte
		if err := rows.Scan(
			&activity.ID,
			&activity.CreatorID,
			&activity.Type,
			&activity.Level,
			&payloadBytes,
			&activity.CreatedTs,
		); err != nil {
			return nil, err
		}
		payload := &storepb.ActivityPayload{}
		if err := protojsonUnmarshaler.Unmarshal(payloadBytes, payload); err != nil {
			return nil, err
		}
		activity.Payload = payload
		list = append(list, activity)
	}
	if err := rows.Err(); err != nil {
		return nil, err
	}
	return list, nil
}
```
After the modification, it would become:
```go
// interface
type Driver interface {
	// ...
	GetDB() *sql.DB // exposes the underlying *sql.DB so the store can execute the query itself
	ListActivitiesSQL(find *FindActivity) (string, []any)
}

// store
func (s *Store) ListActivities(ctx context.Context, find *FindActivity) ([]*Activity, error) {
	query, args := s.driver.ListActivitiesSQL(find)
	rows, err := s.driver.GetDB().QueryContext(ctx, query, args...)
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	list := []*Activity{}
	for rows.Next() {
		activity := &Activity{}
		var payloadBytes []byte
		if err := rows.Scan(
			&activity.ID,
			&activity.CreatorID,
			&activity.Type,
			&activity.Level,
			&payloadBytes,
			&activity.CreatedTs,
		); err != nil {
			return nil, err
		}
		payload := &storepb.ActivityPayload{}
		if err := protojsonUnmarshaler.Unmarshal(payloadBytes, payload); err != nil {
			return nil, err
		}
		activity.Payload = payload
		list = append(list, activity)
	}
	if err := rows.Err(); err != nil {
		return nil, err
	}
	return list, nil
}

// db/sqlite
func (d *DB) ListActivitiesSQL(find *store.FindActivity) (string, []any) {
	where, args := []string{"1 = 1"}, []any{}
	if find.ID != nil {
		where, args = append(where, "`id` = ?"), append(args, *find.ID)
	}
	if find.Type != nil {
		where, args = append(where, "`type` = ?"), append(args, find.Type.String())
	}
	query := "SELECT `id`, `creator_id`, `type`, `level`, `payload`, `created_ts` FROM `activity` WHERE " + strings.Join(where, " AND ") + " ORDER BY `created_ts` DESC"
	return query, args
}
```
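For comparison, under this design a PostgreSQL driver would only have to supply its own SQL builder (numbered placeholders, no backticks), while the shared scanning loop in the store stays untouched. A sketch mirroring the snippet above, assuming the same column names (not actual memos code):

```go
// db/postgres (uses fmt and strings from the standard library)
func (d *DB) ListActivitiesSQL(find *store.FindActivity) (string, []any) {
	where, args := []string{"1 = 1"}, []any{}
	if find.ID != nil {
		args = append(args, *find.ID)
		where = append(where, fmt.Sprintf("id = $%d", len(args)))
	}
	if find.Type != nil {
		args = append(args, find.Type.String())
		where = append(where, fmt.Sprintf("type = $%d", len(args)))
	}
	query := "SELECT id, creator_id, type, level, payload, created_ts FROM activity WHERE " +
		strings.Join(where, " AND ") + " ORDER BY created_ts DESC"
	return query, args
}
```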
The problem with this modification is that drivers can only be implemented on top of *sql.DB, so it cannot accommodate non-SQL backends (e.g. MongoDB, Redis).
I have created a POC PR, https://github.com/usememos/memos/pull/2940; we can discuss the ent-based approach there.
@boojack can you reopen this issue? I am still working on this.