datamodel-code-generator
Change install requirements from pinned versions to lower bounds
Hi,
first of all, thank you for the project; it looks really interesting!
Is your feature request related to a problem? Please describe.
All the versions in the requirements are pinned, which might cause version conflicts when using this with other packages, e.g. fastapi.
Describe the solution you'd like
Instead of pinning the pydantic version, make it a lower bound:
pydantic[email]>=1.5.1
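For contrast, the pinned form this replaces allows exactly one release, so any other package that needs a different pydantic version conflicts (the exact pinned version here is illustrative):
pydantic[email]==1.5.1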
@dennis-wey Thank you for creating this issue.
I'm afraid of broken dependencies in the future.
All the versions in the requirements are pinned, which might cause version conflicts when using this with other packages, e.g. fastapi.
How do you use this code generator? As a module, or via the CLI?
If you use it as a CLI, then I recommend not mixing other packages into the environment. I usually create a requirements.txt (or poetry or pipenv environment) just for this code generator. Also, I may choose pipx for the standalone app.
However, pinned versions are not comfortable for us either. We should set a capped version range when we remove the pins (example: "pydantic>=1.5.1,<2.0").
What do you think about it?
My use case is that, for a bigger project, I would like to build pydantic classes out of the openapi.json provided by a fastapi application. I don't want to automate the creation of the whole client, just make the schemas available. So my use case would be using a parser and then accessing its results attribute, roughly as sketched below. So basically I wouldn't even need the black or isort packages, but since they're vital to the core use case, it wouldn't make sense to make them optional.
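Roughly what I have in mind (a sketch only: OpenAPIParser is an internal, undocumented class and its constructor arguments have varied between releases, so the exact signature here is a guess):

from datamodel_code_generator.parser.openapi import OpenAPIParser

openapi_text = """
openapi: "3.0.0"
info: {title: demo, version: "0.1.0"}
paths: {}
components:
  schemas:
    Item:
      type: object
      properties:
        name: {type: string}
"""

# constructor signature is assumed; the parser is an internal API
parser = OpenAPIParser(source=openapi_text)
parser.parse()          # build the data models internally
print(parser.results)   # the results attribute mentioned above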
We should set a capped version range when we remove the pins (example: "pydantic>=1.5.1,<2.0").
Yes, this sounds like a very good approach.
@dennis-wey
So my use case would be using a parser and then accessing its results attribute.
I'm happy that you use this library so deeply :smile:
Yes, this sounds like a very good approach.
Thank you
I have changed and released a new version as 0.5.26 :rocket:
Sorry, me again. I tried to include the project in the bigger project but got stuck again: due to strict requirement management I can only rely on argcomplete ~=1.10 (i.e. >=1.10,<2.0). Is it reasonable to lower the minimal version of this one?
Also, it seems to me like this package does two things:
- Providing a parser for building pydantic classes out of OpenAPI/JSON Schema.
- Providing a code generator command-line tool.
I think the first has many use cases, since it makes it possible to communicate complex objects across languages or services. I think the second is just one of many possible use cases for this package, and my question would be whether it makes sense to split them up (or even contribute the parser directly to pydantic). That said, as long as I'm the only one using this outside of the code generator use case, this is probably unnecessary.
@dennis-wey
Sorry, me again. I tried to include the project in the bigger project but got stuck again: due to strict requirement management I can only rely on argcomplete ~=1.10 (i.e. >=1.10,<2.0). Is it reasonable to lower the minimal version of this one?
Do you want to change the lowest version to 1.10?
I don't think it is a perfect answer to the problem.
But if you want it, I can do it, because I guess the alternatives would need a lot of time. :cat2:
That said, as long as I'm the only one using this outside of the code generator use case, this is probably unnecessary.
I agree.
I use this package as a library too. https://github.com/koxudaxi/fastapi-code-generator
We should consider splitting the package into a parser and a CLI, or finding better names.
My idea is to create a new package for the parser. :rocket:
Also, we should think about creating a new repository for it.
Do you want to change the lowest version to 1.10? I don't think it is a perfect answer to the problem. But if you want it, I can do it, because I guess the alternatives would need a lot of time. 🐈
Yes, that would be great.
I use this package as a library too. https://github.com/koxudaxi/fastapi-code-generator We should consider splitting the package into a parser and a CLI, or finding better names. My idea is to create a new package for the parser. 🚀 Also, we should think about creating a new repository for it.
Yes, this would make sense. The most difficult thing would be to think of a name. How about pydantic-schema-parser, or PySchePa for short?
@dennis-wey I have changed the argcomplete lower bound to 1.10. Would you please check the latest version? (0.5.28)
Yes, this would make sense. The most difficult thing would be to think of a name. How about pydantic-schema-parser, or PySchePa for short?
I agree, I think the name is important.
First, would the parser support only pydantic? :thinking:
Second, should we choose "schema"? "datamodel"?
I prefer the format xxx-xxx-parser.
@joshbode I know you are a heavy user of this code generator. What do you think of the idea of splitting this package into a CLI and parsers? Also, do you have a good name for the parser?
As a solution to the original problem, given most people are not using this project as a library, I suggest installing via pipx rather than pip. pipx creates dedicated, isolated virtual environments (that you don't have to activate!) for python-based CLI tools. I use this since I have projects that often have competing package requirements.
Install pipx:
$ pip install pipx
$ pipx ensurepath
then:
$ pipx install datamodel-code-generator
$ which datamodel-codegen
/home/josh/.local/bin/datamodel-codegen
Upgrading is simple, too:
$ pipx upgrade datamodel-code-generator
It also works with editable packages if you're developing datamodel-code-generator :)
$ git clone git@github.com:koxudaxi/datamodel-code-generator.git
$ pipx install -e datamodel-code-generator
oh, I see you've already suggested pipx @koxudaxi :)
I'm OK with splitting the package into parser and CLI, but I'm not sure that solves the problem greatly, given there will still be constraints, and it is hard to anticipate whether they'll impact other packages.
<wistful-sigh>oh for nodejs/julia-style packages</wistful-sigh>
@joshbode
Thank you for your good advice.
Yes, I recommend pipx for installing this code generator.
I will write about pipx in the documentation.
I'm OK with splitting the package into parser and CLI, but I'm not sure that solves the problem greatly, given there will still be constraints, and it is hard to anticipate whether they'll impact other packages.
Yes, it's a very difficult problem. I often worry about it. If some dependencies can be made optional, then I will make them optional.
oh for nodejs/julia-style packages
Sorry, what does that mean?
oh, for nodejs/julia-style packages
I'm just grumbling - I only meant that in other languages this is a solved problem :)
In Julia, for example, different packages (say a and b) installed in the same environment can have dependencies on different versions of a package (say a requires c==1.0.0 and b requires c==1.2.1) - see https://docs.julialang.org/en/v1/manual/code-loading/
@joshbode
Oh, it's a great language. 😣
argcomplete>=1.10,<2.0
prance>=0.18.2,<1.0
openapi-spec-validator>=0.2.8,<0.3
jinja2>=2.11.2,<3.0
inflect>=4.1.0,<5.0
pydantic[email]>=1.5.1,<2.0
black>=19.10b0,<20
isort>=4.3.21,<5.0
PySnooper>=0.4.1,<1.0.0
toml>=0.10.1,<1.0.0
genson>=1.2.1,<2.0
I guess this parser depends on quite a few packages. It's worth a try.
Btw, what do you think of the name pydantic-schema-parser?
Or do you know a better name?
@koxudaxi I tried to add it and failed. I have now checked all the dependencies listed above against the existing dependencies in my project. Unfortunately I can only add things, and I'm not able to use pipx since I'm not the maintainer. These are the lower bounds I would need:
jinja2>=2.10.1
toml>=0.10.0
@dennis-wey OK, I have released a new version, 0.5.29. These changes are a special case; we should not do this every time. But we should decide on a rule for dependency versions.
After rethinking this, would it make sense to keep it as one package and include the CLI stuff as an extra dependency, roughly as sketched below? The cons would be that the main use case would be hidden behind an extra, and the name would not fit so well anymore.
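A hypothetical setup.py fragment for that idea (which packages would sit behind the "cli" extra is purely illustrative):

from setuptools import setup

setup(
    name="datamodel-code-generator",
    version="0.0.0",
    # parser core stays a hard dependency
    install_requires=["pydantic[email]>=1.5.1,<2.0"],
    # hypothetical extra: CLI-only tooling
    extras_require={
        "cli": ["argcomplete>=1.10,<2.0", "black>=19.10b0,<20", "isort>=4.3.21,<5.0"],
    },
)

A CLI user would then run pip install "datamodel-code-generator[cli]".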
On the other topic: in order to use the parser as a library, I think it needs a clearer API. So basically I would assume that I have some in-Python representation of the schema (most likely a dictionary), and in return get something like a list of pydantic classes (or a return object from which I can get the classes, among other things).
After rethinking this, would it make sense to keep it as one package and include the CLI stuff as an extra dependency?
Users expect this package to be a code generator. If we can install it without any option, that's good; otherwise, they will be confused.
On the other topic: in order to use the parser as a library, I think it needs a clearer API.
I agree. The code generator can be accessed through an API, but it's designed as an internal API. If we can design a public API, then we can change it.
So basically I would assume that I have some in-Python representation of the schema (most likely a dictionary), and in return get something like a list of pydantic classes (or a return object from which I can get the classes, among other things).
Sorry, what do you expect? Would you show me an example?
Sorry, what do you expect? Would you show me an example?
Sure, let's assume the example from https://github.com/koxudaxi/datamodel-code-generator/issues/205 is hosted somewhere:
import requests

# fetch the OpenAPI schema as a Python dictionary
openapi_dict = requests.get("url to some fastapi service/openapi.json").json()
# OpenApiParser and create_classes() are the proposed API, not an existing one
parser = OpenApiParser(openapi_dict)
dict_name_to_pydantic_class = parser.create_classes()
item = dict_name_to_pydantic_class["Item"](foo=None, bar="")
In my last post I suggested returning a list of pydantic classes, but a dictionary might make more sense, so you can address the pydantic representation of the schemas by name.
@dennis-wey
I understand that you want to get a pydantic model created by create_model().
I think it's difficult to resolve referenced models on the fly; the models often have nested models. Of course, it's not impossible.
Also, can users understand the structure of the models? IDEs and mypy don't know about the structures and can't give guidance about them. Users would have to know the structures from the schema. I feel it is not realistic. :thinking: What do you think about it?
If some users need the feature, then I will be able to implement it, because the feature doesn't affect existing code.
Basically, the use case I have is the same as calling the CLI and then directly importing the constructed file using runpy.run_path, for example as sketched below.
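A minimal sketch of that workaround (assuming datamodel-codegen has already written the models to model.py, and that the generated module contains an Item class with a name field):

import runpy

# execute the generated file and collect its top-level names
namespace = runpy.run_path("model.py")
Item = namespace["Item"]      # a generated pydantic class (name assumed)
print(Item(name="spam"))      # assumes Item has a `name` field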
I think it's difficult to resolve referenced models on the fly; the models often have nested models. Of course, it's not impossible.
I don't see nested models as a problem, since this seems to be the same issue in the CLI. The question would be how to "import" them: just doing a dirty "eval", or working with importing temporary files (mostly the same steps as I do now, but relying on python functions instead of the CLI).
Also, can users understand the structure of the models? IDEs and mypy don't know about the structures and can't give guidance about them. Users would have to know the structures from the schema. I feel it is not realistic. 🤔 What do you think about it?
In my use case we are talking about a client with an explorative aspect. Yes, with an IDE this doesn't make much sense, but when working in an ipython/jupyter notebook, dynamic pydantic classes can help with constructing required request bodies, for example.
Thinking about what I just said: a good minimal solution would be to expose nearly the same functionality as the CLI as a python function (with a dictionary as input instead of an input file). The output could then be something like a string containing the same content as the output file (probably minus black and flake8). Then the user could decide how to construct the classes dynamically, for example as sketched below. (Maybe there could be a hint in the README.)
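For illustration, the generated source is hard-coded here to stand in for what the proposed function would return; the caller turns it into live classes with exec:

# in reality this string would come from the parser, not be hard-coded
generated_source = """
from pydantic import BaseModel

class Item(BaseModel):
    name: str
"""

namespace: dict = {}
exec(generated_source, namespace)  # construct the classes dynamically
Item = namespace["Item"]
print(Item(name="spam"))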
@dennis-wey Thank you for explaining your thinking. I understand the use case: using the library in IPython. OK, I will try it when I can get the time. (I work on other projects too.)
Btw, do you still need this package split into a parser and a CLI? We have not decided on an answer to that suggestion yet.
Nope, since with the newest direction my use case would still fall into the category of code generation (only the code wouldn't be written to a file).
OK, I will consider how to split the packages.
PRs and suggestions are welcome!!
My use case is that, for a bigger project, I would like to build pydantic classes out of the openapi.json provided by a fastapi application. I don't want to automate the creation of the whole client, just make the schemas available. So my use case would be using a parser and then accessing its results attribute. So basically I wouldn't even need the black or isort packages, but since they're vital to the core use case, it wouldn't make sense to make them optional.
We should set a capped version range when we remove the pins (example: "pydantic>=1.5.1,<2.0").
Yes, this sounds like a very good approach.
If I understand this right, you're looking for an imaginary package called openapi-to-jsonschema, right? Is that what you refer to as the "parser"?
If I understand this right, you're looking for an imaginary package called openapi-to-jsonschema, right? Is that what you refer to as the "parser"?
More like openapi_schema_to_pydantic_classes. But as mentioned before, this is actually not so far away from the currently supported features, and I guess the only addition that makes sense is to add a python interface in addition to the CLI interface, so the code can be used better as a library. I'll try to make a PR when I find the time to do so.
@dennis-wey I'm working on changing the internal interface. https://github.com/koxudaxi/datamodel-code-generator/pull/236
It's not perfect, but we will get good interfaces in the future.