Possible Error in Dockerfile
I think this commit might have introduced an error into the Dockerfile.
When I try to run the generator, the image build fails on this step:
mkdir /source && cd /source && mkdir openapi-generator && cd openapi-generator && git init && git remote add origin https://github.com/OpenAPITools/openapi-generator.git && git fetch --progress --depth=1 origin v6.6.0 && git checkout v6.6.0 && git config --system --add safe.directory /source/openapi-generator
That step fails with:
error: pathspec 'v6.6.0' did not match any file(s) known to git
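For what it's worth, the checkout fails because a shallow git fetch --depth=1 origin v6.6.0 only writes the tag's commit to FETCH_HEAD and never creates a local refs/tags/v6.6.0, so git checkout v6.6.0 has nothing to resolve. A minimal sketch of a variant that does resolve tag names (illustration only, not the repository's actual Dockerfile):

mkdir -p /source/openapi-generator && cd /source/openapi-generator
git init
git remote add origin https://github.com/OpenAPITools/openapi-generator.git
# the "tag v6.6.0" form makes fetch create refs/tags/v6.6.0 locally, so the checkout can find it
git fetch --progress --depth=1 origin tag v6.6.0
git checkout v6.6.0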
I don't think the line below is needed anymore:
git checkout $OPENAPI_GENERATOR_COMMIT
So I created a fork and removed the git checkout $OPENAPI_GENERATOR_COMMIT line, but then the next step in the Dockerfile failed:
=> ERROR [4/8] RUN chmod -R go+rwx /root && umask 0 && cd /source/openapi-generator && mvn install -DskipTests -Dmaven.test.skip=true -pl modules/openapi-generator-maven-plugin -am && cp -r /root 2.1s
------
> [4/8] RUN chmod -R go+rwx /root && umask 0 && cd /source/openapi-generator && mvn install -DskipTests -Dmaven.test.skip=true -pl modules/openapi-generator-maven-plugin -am && cp -r /root/.m2/* /usr/share/maven/ref:
2.024 [INFO] Scanning for projects...
2.071 [ERROR] [ERROR] Could not find the selected project in the reactor: modules/openapi-generator-maven-plugin @
2.072 [ERROR] Could not find the selected project in the reactor: modules/openapi-generator-maven-plugin -> [Help 1]
2.073 [ERROR]
2.074 [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
2.074 [ERROR] Re-run Maven using the -X switch to enable full debug logging.
2.075 [ERROR]
2.075 [ERROR] For more information about the errors and possible solutions, please read the following articles:
2.075 [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MavenExecutionException
@jrshaffe thank you for the report; I will try to reproduce locally.
I've experienced this issue too; it was introduced by https://github.com/kubernetes-client/gen/pull/251. This way of cloning doesn't work with tags, so it now requires a commit hash instead of a tag name. Could you try using a commit hash instead (7f8b853f502d9039c9a0aac2614ce92871e895ed for tag v6.6.0)? It works for me.
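A hedged sketch of how the workaround can be applied, assuming OPENAPI_GENERATOR_COMMIT is read from the environment (or your settings file) and passed through to the Dockerfile; the variable name comes from the build line quoted above, everything else is illustrative:

# use the commit hash for tag v6.6.0 instead of the tag name
export OPENAPI_GENERATOR_COMMIT=7f8b853f502d9039c9a0aac2614ce92871e895ed
# then re-run the generator script as before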
Please feel free to send a PR if that fixes things.
Using the commit hash at least gets me through the Docker image build. We could probably default the value of OPENAPI_GENERATOR_COMMIT to 7f8b853f502d9039c9a0aac2614ce92871e895ed in java.sh; a sketch of what that might look like follows below.
But I've run into compilation issues with the generated Java code, which I can log as a separate issue.
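A minimal sketch of what that default could look like in java.sh, assuming it is a plain shell script and the variable is consumed as-is; this is illustrative, not the script's actual contents:

# fall back to the commit for tag v6.6.0 when the caller doesn't set one
OPENAPI_GENERATOR_COMMIT="${OPENAPI_GENERATOR_COMMIT:-7f8b853f502d9039c9a0aac2614ce92871e895ed}"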
I'm experiencing the same original problem described here. But just before I started experiencing that, I started getting weird compilation issues with the generated code. I don't recall the specifics, but @jrshaffe, I wonder if your issues have something to do with some static methods being called that aren't actually static? (I sorta remember that's what the problems were.)
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.