oshinko-s2i
in template oshinko-python-build-dc APP_EXIT = true should mean that the app will exit when it completes.
{ "description": "Setting this value to 'false' prevents the application from being re-deployed if/when it completes", "name": "APP_EXIT", "value": "false", "required": true }
The code from start.sh that reads APP_EXIT does this
function app_exit {
    # Sleep forever so the process does not complete
    while [ ${APP_EXIT:-false} == false ]
    do
        sleep 1
    done
    exit 0
}
That should cause an exit if APP_EXIT is true. If not, then we have a bug. Do you have a reproducer we can look at?
start.sh waits for the Spark app to complete and then exits, OR it intercepts a signal from OpenShift and the signal handler sets APP_EXIT to true to force an exit.
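A minimal sketch of that signal-handling pattern; the handler name (handle_term) and the trapped signals are assumptions, not the actual start.sh code:

# Sketch of the pattern described above: a trap handler flips APP_EXIT
# so the wait loop in app_exit falls through on its next iteration.
function handle_term {
    APP_EXIT=true   # makes the app_exit loop condition false
}
trap handle_term TERM INT

# ... launch the Spark app, wait for it to finish, then:
app_exit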
Maybe we just need to reword the description. If APP_EXIT=true, OpenShift will keep redeploying the app when it exits, because it's created with a deployment config. So preventing exit prevents redeploy. The wording there might be bad.
Without using a Job, there is no way to keep OpenShift from relaunching a Spark app that completes.
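For comparison, a hedged sketch of the Job alternative: a pod run by a Job is not relaunched once it completes. The image and names below are placeholders, not oshinko artifacts:

# A completed Job pod is left alone rather than redeployed.
# Image and resource names are placeholders.
oc create -f - <<'EOF'
apiVersion: batch/v1
kind: Job
metadata:
  name: spark-app
spec:
  template:
    spec:
      containers:
      - name: spark-app
        image: my-spark-app-image
      restartPolicy: Never
EOF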
We worked around that problem by changing the app_exit function in start.sh:
DC_NAME=$(hostname | awk -F- '{ print $1}')
NAMESPACE=$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace)
curl -H "Content-Type: application/json-patch+json" \
     -H "Accept: application/json" \
     -X PATCH \
     http://${OSHINKO_WEB_PROXY_PORT_8001_TCP_ADDR}:${OSHINKO_WEB_PROXY_SERVICE_PORT_OC_PROXY_PORT}/proxy/apis/apps.openshift.io/v1/namespaces/${NAMESPACE}/deploymentconfigs/${DC_NAME} \
     --data '[{ "op": "replace", "path": "/spec/replicas", "value": "0" }]'
Now replicas is set to 0 in the DeploymentConfig, so OpenShift does not redeploy the completed app.
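To confirm the patch took effect, something like this should work, assuming an oc client logged in with access to the same namespace:

# Check the DeploymentConfig's replica count; expected output: 0
oc get dc "${DC_NAME}" -n "${NAMESPACE}" -o jsonpath='{.spec.replicas}'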