pydantic-extra-types

How do we decide what to add here?

Open samuelcolvin opened this issue 2 years ago • 12 comments

We need a formal and agreed way to decide what types to add to this package.

I would propose the following:

  • 10 upvotes on the issue for new types which do not require new dependencies
  • 30 upvotes on the issue for new types which require new dependencies

Comments which explain why a type would be useful count double.

What do you think?

samuelcolvin avatar Jul 03 '23 11:07 samuelcolvin

The numbers proposed look fine to me. 👍

Kludex avatar Jul 03 '23 12:07 Kludex

To streamline the process and maintain a sense of progress, I suggest starting by merging the types that are already available.

This step will allow us to consolidate the existing contributions and establish a solid foundation for the package. It also provides an opportunity to test and refine the implementation of those types based on real-world usage and feedback.

Once the existing types are merged, we can initiate a voting process for new type proposals and make sure that the proposed types receive adequate support and justification 👍🏻

yezz123 avatar Jul 03 '23 12:07 yezz123

I'm trying to find some way of justifying the proposed vote counts. Looking back, we can see that issues with a much smaller number of votes have found their way in, like this one (2 upvotes at the moment): https://github.com/pydantic/pydantic-extra-types/issues/15

On the other hand, having such a policy will certainly inflate upvote counts. Companies with a lot of employees will have an unfair advantage.

lig avatar Jul 03 '23 12:07 lig

@yezz123 I agree it makes sense to add the existing ones.

@lig I agree this system isn't perfect, but I think it's the least worst. We (the maintainers) can always add types we specifically want or think would obviously be helpful without meeting the threshold - like #15.

That being said, I might reduce the thresholds to 5 and 20.

We should also have a checklist of requirements before allowing a PR. Here's a first guess:

  • meets threshold
  • any 3rd party libraries appear to have good community traction and stability - we don't want to be left maintaining a library just to keep this package working
  • good candidate for pydantic validation (e.g. no need for IO) - see the sketch below
  • anything else???
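
To illustrate the "good candidate for validation" point, here's a rough sketch of the shape these types tend to take - HexColor and its rules are made up for illustration, not a proposal:

    from typing import Any

    from pydantic import BaseModel, GetCoreSchemaHandler
    from pydantic_core import core_schema


    class HexColor(str):
        """Made-up example: a string constrained to '#rrggbb' form."""

        @classmethod
        def _validate(cls, value: str) -> 'HexColor':
            # Pure in-memory validation - no IO, no network - which is what
            # makes a type a good fit here.
            if len(value) != 7 or not value.startswith('#'):
                raise ValueError("expected a color like '#1a2b3c'")
            int(value[1:], 16)  # raises ValueError on non-hex digits
            return cls(value.lower())

        @classmethod
        def __get_pydantic_core_schema__(
            cls, source: type[Any], handler: GetCoreSchemaHandler
        ) -> core_schema.CoreSchema:
            # Validate as a str first, then apply the checks above.
            return core_schema.no_info_after_validator_function(
                cls._validate, core_schema.str_schema()
            )


    class Theme(BaseModel):
        accent: HexColor


    Theme(accent='#AABBCC')   # ok - normalized to '#aabbcc'
    # Theme(accent='red')     # raises a ValidationError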

samuelcolvin avatar Jul 03 '23 12:07 samuelcolvin

anything else???

  • 100% test coverage

Gets a bonus:

  • Implements a well-known standard/format

No go:

  • Implements a specific case targeting a single library/application

lig avatar Jul 03 '23 12:07 lig

  • 100% test coverage

This package already enforces 100% test coverage, FYI:

https://github.com/pydantic/pydantic-extra-types/blob/10269e13e39f088f081203f319aa3787fcfb27e8/pyproject.toml#L79

Kludex avatar Jul 03 '23 12:07 Kludex

@lig I agree this system isn't perfect, but I think it's the least worst. We (the maintainers) can always add types we specifically want or think would obviously be helpful without meeting the threshold - like #15.

That being said, I might reduce the thresholds to 5 and 20.

I think an approach you could use here would be to start with one of these thresholds and, in whatever document or issue the final information ends up in, include a last-updated date and a next-review date. Maybe review whether it's working in six months, and then lengthen the interval between reviews as the process stabilizes.

I assume there will be similar considerations in other situations (though not exactly the same), so this would also give you a consistent approach to cases where thresholds aren't clear, while maintaining community transparency (if you want that for this specific type of question).

Either way, thanks for all you are doing!

kkirsche avatar Jul 14 '23 17:07 kkirsche

Can we document this somewhere, and close this issue?

Kludex avatar Jul 27 '23 15:07 Kludex

  • 100% test coverage

May I ask on this note: are there any plans to add fake-data generators such as mimesis or faker to the test pipeline? It seems to me they could be useful for this project. Passing several iterations of tests with random data could then also become a requirement.
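
For illustration, roughly what I have in mind - a property-style test using faker's hex_color provider against this package's existing Color type (just a sketch, not a PR):

    from faker import Faker
    from pydantic import BaseModel
    from pydantic_extra_types.color import Color


    class Palette(BaseModel):
        primary: Color


    def test_random_hex_colors() -> None:
        fake = Faker()
        Faker.seed(0)  # seed so runs stay reproducible despite random inputs
        for _ in range(100):
            value = fake.hex_color()  # e.g. '#d82c08'
            palette = Palette(primary=value)
            assert palette.primary.original() == value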

treylav avatar Jul 28 '23 12:07 treylav

  • 100% test coverage

May I ask on this note: are there any plans to add fake-data generators such as mimesis or faker to the test pipeline? It seems to me they could be useful for this project. Passing several iterations of tests with random data could then also become a requirement.

No plans for such a thing.

Kludex avatar Jul 28 '23 14:07 Kludex

No plans for such a thing.

Should I try to promote it as an additional option for testing, suggest relevant PRs, and so on?

treylav avatar Jul 28 '23 14:07 treylav

I'm fine with not doing it, but I don't know others' opinions here.

Kludex avatar Jul 28 '23 14:07 Kludex