Jay Cary
> > When we call retire_workers the scheduler handles the graceful shutdown and the worker process exits. Ideally this should allow the Kubernetes Pod to go into a Completed phase....
@jacobtomlinson I agree that a new issue should be opened to address the zombie-worker problem following `retire_workers`, but I think you (or maybe @philipp-sontag-by) should be the one...
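For reference, a minimal sketch of the `retire_workers` call being discussed, assuming a running dask.distributed cluster (the scheduler address and worker selection below are hypothetical, not taken from this thread):
```
from dask.distributed import Client

client = Client("tcp://scheduler:8786")  # hypothetical scheduler address

# Pick a worker to retire; in practice this would be the worker backing the Pod.
worker_addr = list(client.scheduler_info()["workers"])[0]

# The scheduler drains the worker and shuts it down gracefully; the expectation
# above is that the worker process then exits and the Kubernetes Pod can move
# to a Completed phase.
client.retire_workers(workers=[worker_addr])
```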
I was able to get around this by adding the following at the top of my handler file:
```
import os
import pyproj

pyproj.datadir.set_data_dir(os.path.join(os.path.dirname(__file__), 'share/proj'))
```
I think I'm having the same problem here; it appears that the resampling around tile edges is doing something odd. Here's one example using `-r near`, which causes one or...
With some experimentation, I was able to determine that the issue was caused by the first gdalwarp reprojection not using the same origin/resolution that was later used by gdal2tiles....
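For anyone hitting the same tile-edge artifacts, a rough sketch of the kind of fix described above, using the GDAL Python bindings rather than the gdalwarp CLI; the paths, CRS, and resolution are placeholders, and the point is simply to pin the output resolution and snap the origin to that grid during reprojection so the later gdal2tiles step sees a consistent grid:
```
from osgeo import gdal

res = 0.000001  # placeholder target resolution in output CRS units

gdal.Warp(
    "reprojected.tif",          # placeholder output path
    "source.tif",               # placeholder input path
    dstSRS="EPSG:3857",         # placeholder target CRS
    xRes=res,
    yRes=res,
    targetAlignedPixels=True,   # snap the output origin to the target grid
    resampleAlg="near",
)
```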
Unfortunately, I can't share any more than I already have. Sorry.
I am using version 9.2.1 of the Amplify CLI and also have this issue, but was able to work around it by overriding the auth vtl file for the query (listReportingUnits)...
Incredibly, I didn't see your replies on this PR until just now! I have continued to add improvements on my fork, but will go back and incorporate the changes you...
Assuming IAM role from within an EKS Pod Identity-enabled container does not work using named profile
I am encountering this as well, and it is breaking our GitLab CI, which uses `apk add aws-cli`. Here is the relevant section from yesterday's working run: ``` $...
Assuming IAM role from within an EKS Pod Identity-enabled container does not work using named profile
My bad @rkubik-hostersi: the timing of when you submitted this issue, the environment you described, and then what drunkensway said all made me think we were encountering different versions...
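For completeness, a hedged sketch of how a named profile layered on the Pod Identity container credentials might be sanity-checked with boto3; the profile name and role ARN are made up, and the `credential_source = EcsContainer` line reflects my assumption about wiring the profile to the container credentials endpoint, not something confirmed in this thread:
```
# Assumed ~/.aws/config (hypothetical names):
#   [profile app]
#   role_arn = arn:aws:iam::111122223333:role/app-role
#   credential_source = EcsContainer   # assumption: Pod Identity serves the
#                                      # container credentials endpoint
import boto3

# If the named profile resolves, this prints the assumed role's identity.
session = boto3.Session(profile_name="app")
print(session.client("sts").get_caller_identity())
```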