
Define indexes for a model

Open chy373 opened this issue 4 years ago • 10 comments

Feature request

Context

Can we define index information when defining a model?

chy373 avatar Jan 16 '21 15:01 chy373

It's not yet possible, here is the current draft of the feature 😃


Unique fields

class Tree(Model):
    name: str = Field(unique=True)

Regular index

class Tree(Model):
    name: str = Field(index=True)

Compound indexes

class Book(Model):
    name: str
    pages: int

    class Config:
        @staticmethod
        def indexes():
            yield Index(Book.name, Book.pages)

I would be glad to have some feedback on this draft !

art049 avatar Jan 16 '21 21:01 art049
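Until the feature lands, it may help to see the server-side shape these declarations would translate into. Below is a minimal, stdlib-only sketch of the index specification documents that MongoDB's `createIndexes` command accepts; the `index_spec` helper is hypothetical and not part of ODMantic:

```python
def index_spec(*fields, unique=False):
    """Build one entry for MongoDB's createIndexes command.

    Each field is indexed in ascending order (1); the index name
    follows the server's default "<field>_1" naming convention.
    """
    spec = {
        "key": {field: 1 for field in fields},
        "name": "_".join(f"{field}_1" for field in fields),
    }
    if unique:
        spec["unique"] = True
    return spec

# The three draft declarations above, expressed as raw specs:
unique_name = index_spec("name", unique=True)  # Field(unique=True)
plain_name = index_spec("name")                # Field(index=True)
compound = index_spec("name", "pages")         # Index(Book.name, Book.pages)
```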

This is such an important feature, hopefully the devs will be able to add it soon :+1:

ecosys-technology avatar Jun 02 '21 16:06 ecosys-technology

I like the feature and the design, but one bit of feedback: I often create indexes using the partialFilterExpression syntax to get more fine-grained control over nullability.

I'm not the Mongo expert here, but as I understand it this is the best way to make an index unique when the field also needs to be nullable. Otherwise, you'd get an error when trying to insert more than one document with a null value.

Here's an example:

# "users" is a pymongo (or motor) collection
users.create_index(
    "email",
    unique=True,
    partialFilterExpression={"email": {"$type": "string"}},
)

I'd typically do this in a FastAPI app where I want users to either have no email address or a unique one. This lets me surface a 409 error when there's a document conflict.

Maybe you could add a nullable=True option to Field as well?

mattyweb avatar Aug 18 '21 00:08 mattyweb
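The nullability behavior described above can be captured in a small, pure-Python sketch (no MongoDB required): with `partialFilterExpression={"email": {"$type": "string"}}`, only documents whose `email` is a string participate in the unique index, so any number of null or missing emails can coexist while duplicate strings conflict. The function below is illustrative only, not part of any library:

```python
def conflicts(indexed_emails, new_email):
    """Mimic a unique index restricted by
    partialFilterExpression={"email": {"$type": "string"}}.

    Documents whose email is not a string (None, missing, etc.)
    never enter the index, so they can never conflict.
    """
    if not isinstance(new_email, str):
        return False  # not indexed: always allowed
    return new_email in indexed_emails

# Any number of users without an email are fine; a duplicate string
# email is the case the API would surface as a 409.
conflicts({"a@x.io"}, None)      # no conflict
conflicts({"a@x.io"}, "a@x.io")  # conflict -> 409
```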

First, when using SQLAlchemy, do people actually use the indexing properties, or do they create all the tables with Alembic? Or is this a case of 'you /can/ create the tables in code, just like you /can/ write a requirements.txt without pinning versions'? I lean toward doing all the schema work in migration files (cf. diesel.rs), so that's where I'm coming from.

In my experience, when the index is defined by the ORM, lifecycle management (a.k.a. migrations) is painful when you realise an index is no longer doing its job and you want to remove it. It's also painful to update indexes when you want two indexes on the same criteria (using different partial filter expressions). (This is fixed in MongoDB 5 🎉)

Also, you cannot ALTER indexes in MongoDB, which means adding index properties to new fields either needs to fail on startup (😬) or build a new index on the fly (😱).

Of course, who wants to spend the time and effort rewriting Alembic for MongoDB when it has no schemas to migrate? Well, in this case you do want a migration tool for indexes rather than doing this in code, in my humble opinion.

Curious to know what you think!

ehiggs avatar Sep 10 '21 14:09 ehiggs
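The migration-first approach advocated above boils down to a simple pattern: keep an ordered list of (id, function) migrations, record which ids have already been applied (in a real tool, a collection in the database itself), and run only the new ones at deploy time. A rough sketch, with all names illustrative (tools like pymongo-migrate implement this idea properly):

```python
def apply_migrations(applied_ids, migrations):
    """Run each not-yet-applied migration in order and record its id.

    applied_ids: set of migration ids already run (persisted in the DB
    by a real tool); migrations: ordered list of (id, callable).
    """
    for migration_id, run in migrations:
        if migration_id not in applied_ids:
            run()
            applied_ids.add(migration_id)
    return applied_ids

# Example: index lifecycle as explicit, ordered, one-shot steps.
log = []
migrations = [
    ("0001_users_email_unique", lambda: log.append("create email index")),
    ("0002_drop_stale_index", lambda: log.append("drop stale index")),
]
state = apply_migrations(set(), migrations)
# A second run is a no-op because both ids are already recorded:
apply_migrations(state, migrations)
```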

So, until that is done, what is the best practice for creating indexes? Where is it better to create them?

rdfsx avatar Nov 28 '21 14:11 rdfsx

@rdfsx https://github.com/stxnext/pymongo-migrate

ehiggs avatar Nov 29 '21 10:11 ehiggs

such an important feature, plz have it soon

schwannden avatar Nov 30 '21 07:11 schwannden

> @rdfsx https://github.com/stxnext/pymongo-migrate

So where should I look?:)

rdfsx avatar Nov 30 '21 10:11 rdfsx

> such an important feature, plz have it soon

Perhaps I could do a pull request, but I need some explanation 🤔

rdfsx avatar Nov 30 '21 10:11 rdfsx

Giving up on this project and deciding to use Beanie instead; it supports:

  1. bulk operations like bulk insert and bulk delete
  2. indexes, including compound indexes
  3. slightly simpler query syntax
  4. motor > 2.5

schwannden avatar Dec 01 '21 03:12 schwannden