[BUG][JAVA] Wrong return types generated for OpenAPI spec with more than one endpoint returning an array type
Description
The two endpoints in the partial YAML file below should have their CodegenOperation#returnType variables set to List<String> and List<SomeObject>, respectively. However, the return type of whichever endpoint gets processed first leaks into the second, so both CodegenOperation objects end up with the same return type. The generated API therefore contains List<String> getListOfStrings() and List<String> getListOfSomeObjects() instead of List<SomeObject> getListOfSomeObjects().
So far this issue affects only array types.
openapi-generator version
OpenAPI generator: 6.0.0
This problem doesn't occur in 4.2.0
OpenAPI declaration file content or url
paths:
  /somestrings:
    get:
      tags:
        - Test
      operationId: getListOfStrings
      responses:
        200:
          description: OK
          content:
            application/json:
              schema:
                type: array
                items:
                  type: string
  /someobjects:
    get:
      tags:
        - Test
      operationId: getListOfSomeObjects
      responses:
        200:
          description: OK
          content:
            application/json:
              schema:
                type: array
                items:
                  $ref: '#/components/schemas/SomeObject'
Steps to reproduce
Run the JavaClientCodegen generator.
Related issues/PRs
I don't see any issues related to this.
Suggest a fix/enhancement
The problem lies in the Schema class's equals/hashCode methods. They compare every property of the Schema except items, and the item type is precisely what distinguishes one array schema from another. So the two array schemas in the YAML snippet above, call them a1 and a2, compare equal (a1.equals(a2) == true) because their items are ignored. This corrupts the keys of the schemaCodegenPropertyCache (a Map<NamedSchema, CodegenProperty>) in DefaultCodegen: instead of the cache holding two distinct arrays, the first array put into the cache ends up representing all arrays.
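To illustrate the collision, here is a minimal self-contained sketch. MiniSchema is a hypothetical stand-in (not the actual swagger-models Schema class) whose equals/hashCode deliberately ignore items, mirroring the reported behavior; the map plays the role of schemaCodegenPropertyCache:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

// Hypothetical stand-in for Schema: equals/hashCode ignore `items`,
// mirroring the reported bug.
class MiniSchema {
    final String type;
    final MiniSchema items;

    MiniSchema(String type, MiniSchema items) {
        this.type = type;
        this.items = items;
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof MiniSchema)) return false;
        MiniSchema s = (MiniSchema) o;
        return Objects.equals(type, s.type); // items deliberately ignored
    }

    @Override
    public int hashCode() {
        return Objects.hash(type); // items deliberately ignored
    }
}

public class CacheCollisionDemo {
    public static void main(String[] args) {
        // array of strings vs. array of objects
        MiniSchema strings = new MiniSchema("array", new MiniSchema("string", null));
        MiniSchema objects = new MiniSchema("array", new MiniSchema("object", null));

        System.out.println(strings.equals(objects)); // true: items are ignored

        // Plays the role of DefaultCodegen's schemaCodegenPropertyCache.
        Map<MiniSchema, String> cache = new HashMap<>();
        cache.putIfAbsent(strings, "List<String>");
        cache.putIfAbsent(objects, "List<SomeObject>"); // no-op: key "already present"

        System.out.println(cache.size());       // 1, not 2
        System.out.println(cache.get(objects)); // List<String> — the leaked type
    }
}
```

Because both keys compare equal, the second putIfAbsent never stores its entry, and every array lookup afterwards returns the first array's property.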
Please include the items property in Schema's equals and hashCode methods, or override equals and hashCode in ArraySchema.
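The suggested fix can be sketched the same way with a hypothetical simplified class (not the real ArraySchema): once items participates in equals and hashCode, the two array schemas become distinct cache keys and both entries survive.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

// Hypothetical simplified schema whose equals/hashCode DO include `items`,
// as proposed for Schema (or an ArraySchema override).
class FixedSchema {
    final String type;
    final FixedSchema items;

    FixedSchema(String type, FixedSchema items) {
        this.type = type;
        this.items = items;
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof FixedSchema)) return false;
        FixedSchema s = (FixedSchema) o;
        return Objects.equals(type, s.type) && Objects.equals(items, s.items);
    }

    @Override
    public int hashCode() {
        return Objects.hash(type, items);
    }
}

public class CacheFixDemo {
    public static void main(String[] args) {
        FixedSchema strings = new FixedSchema("array", new FixedSchema("string", null));
        FixedSchema objects = new FixedSchema("array", new FixedSchema("object", null));

        Map<FixedSchema, String> cache = new HashMap<>();
        cache.putIfAbsent(strings, "List<String>");
        cache.putIfAbsent(objects, "List<SomeObject>"); // now a distinct key

        System.out.println(strings.equals(objects)); // false: items differ
        System.out.println(cache.size());            // 2
        System.out.println(cache.get(objects));      // List<SomeObject>
    }
}
```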
We are also hitting this bug with generator version 6.2.0; it is indeed caused by the schemaCodegenPropertyCache when array fields share a name but have different item types.
Looking into a fix now.
@Breus Hi! Is there a fix (or workaround) for this issue yet? If not, how can I help?
@alyelalwany for this issue, can you please PM me via Slack when you've time?
https://join.slack.com/t/openapi-generator/shared_invite/zt-12jxxd7p2-XUeQM~4pzsU9x~eGLQqX2g
@alyelalwany we have quite some internal workarounds for OpenAPI generator bugs and this is one of them. We do this by extending org.openapitools.codegen.languages.JavaClientCodegen and specifically overriding the behavior of org.openapitools.codegen.languages.JavaClientCodegen#fromModel.
Specifically, before calling super.fromModel(name, model), we apply this workaround to solve this bug:
// Quick and dirty trick:
// schemaCodegenPropertyCache is a Map<NamedSchema, CodegenProperty> keyed by NamedSchema.
// NamedSchema is a tuple of (name, Schema). This means that when we have duplicated arrays
// (i.e. Collection properties in Java) in different objects under the same name, the
// NamedSchemas are considered equal. This leads to conflicts and generates wrong types
// (e.g. Set<String> instead of Set<Long>).
// As a workaround we set the Schema description (a field unused by code generation)
// to include the parent bean name, so the Schemas differ even with the same name/type.
if (model.getProperties() != null) {
    Map<String, Schema> properties = model.getProperties();
    for (Map.Entry<String, Schema> entry : properties.entrySet()) {
        entry.getValue().setDescription(entry.getKey() + "-name" + name);
    }
}
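The trick works because description is one of the fields that does participate in Schema equality, so stamping it with the owning bean's name makes otherwise-identical schemas distinct cache keys. A self-contained sketch of the effect, using a hypothetical DescSchema stand-in rather than the real codegen classes (items is omitted entirely to mimic equals ignoring it):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

// Hypothetical simplified schema: `items` plays no part in equality (the bug),
// but `description` does, which is what the workaround exploits.
class DescSchema {
    final String type;
    String description;

    DescSchema(String type) { this.type = type; }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof DescSchema)) return false;
        DescSchema s = (DescSchema) o;
        return Objects.equals(type, s.type)
            && Objects.equals(description, s.description);
    }

    @Override
    public int hashCode() {
        return Objects.hash(type, description);
    }
}

public class WorkaroundDemo {
    public static void main(String[] args) {
        DescSchema strings = new DescSchema("array");
        DescSchema objects = new DescSchema("array");

        // The workaround: stamp each property's description with its owner,
        // analogous to entry.getValue().setDescription(entry.getKey() + "-name" + name).
        strings.description = "values-nameStringHolder";
        objects.description = "values-nameObjectHolder";

        Map<DescSchema, String> cache = new HashMap<>();
        cache.putIfAbsent(strings, "List<String>");
        cache.putIfAbsent(objects, "List<SomeObject>");

        System.out.println(cache.size());       // 2: keys no longer collide
        System.out.println(cache.get(objects)); // List<SomeObject>
    }
}
```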
As for why we maintain internal fixes rather than fixing them upstream (which we would actually prefer): the last time I did so, in this PR, it took 9 months(!) to get it merged and almost a full year to reach a release.
@Breus sorry for the delay in getting your PR merged. Agreed, 9 months is way too long for a review.
There are just too many PRs for this project and we're always short-handed in reviewing PRs :(
Please file PRs when you've time and we'll try our best to get them merged.