incubator-streampark
Cannot find the image for version 2.1.1
Search before asking
- [X] I had searched in the issues and found no similar issues.
Java Version
No response
Scala Version
2.11.x
StreamPark Version
2.1.1
Flink Version
1.16.2
deploy mode
None
What happened
The 2.1.0 image can be found in the Docker registry, but there is no image for version 2.1.1. When I use a v2.1.1 image, the pod keeps restarting.
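To show the restart symptom more concretely, this is roughly how it can be inspected (the namespace and pod name below are placeholders for my setup):

```bash
kubectl get pods -n streampark                          # restart count keeps climbing
kubectl describe pod <streampark-pod> -n streampark     # check Last State and the exit code
kubectl logs <streampark-pod> -n streampark --previous  # output of the previously exited container
```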
Error Exception
No response
Screenshots
Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
Code of Conduct
- [X] I agree to follow this project's Code of Conduct
I tried to build the 2.1.1 image myself using the Dockerfile in deploy. After building it, deploying with Helm behaves the same as the official v2.1.1 image: it keeps restarting.
I've encountered this issue before. In the startup script, if 'start' is used instead of 'start_docker', the application starts in the background, so the container's main process exits and the Pod ends up in the 'Completed' state. I'm still looking into a proper fix. If your startup command uses 'start', you can temporarily remove the '&'.
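To illustrate why backgrounding matters in a container (a generic sketch, not the project's actual scripts; the jar name is just a placeholder):

```bash
# If the entrypoint script backgrounds the JVM, the script returns immediately,
# the container's main process exits, and Kubernetes restarts the Pod.
java -jar app.jar &   # background: script ends, Pod goes to Completed and restarts
java -jar app.jar     # foreground: the container keeps running
```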
Thank you very much for your reply, but I still don't quite understand. I see that the Helm chart starts it directly like this, and I don't understand what you mean by 'start_docker' and removing the '&'. Could you explain in detail? Thank you!
Hello, I see that start() is used by default in the startup script, and I couldn't find the '&' you said needs to be removed:
```bash
# shellcheck disable=SC2120
start() {
  # shellcheck disable=SC2006
  local PID=$(get_pid)
  if [ $PID -gt 0 ]; then
    # shellcheck disable=SC2006
    echo_r "StreamPark is already running pid: $PID , start aborted!"
    exit 1
  fi
  # Bugzilla 37848: only output this if we have a TTY
  if [[ ${have_tty} -eq 1 ]]; then
    echo_w "Using APP_BASE: $APP_BASE"
    echo_w "Using APP_HOME: $APP_HOME"
    if [[ "$1" = "debug" ]]; then
      echo_w "Using JAVA_HOME: $JAVA_HOME"
    else
      echo_w "Using JRE_HOME: $JRE_HOME"
    fi
    echo_w "Using APP_PID: $APP_PID"
  fi

  local PROPER="${APP_CONF}/application.yml"
  if [[ ! -f "$PROPER" ]]; then
    echo_r "ERROR: config file application.yml invalid or not found! "
    exit 1
  else
    echo_g "Usage: config file: $PROPER "
  fi
  # shellcheck disable=SC2046
  eval $(parse_yaml "${PROPER}" "conf_")

  # shellcheck disable=SC2001
  # shellcheck disable=SC2154
  # shellcheck disable=SC2155
  local workspace=$(echo "$conf_streampark_workspace_local" | sed 's/#.*$//g')
  if [[ ! -d $workspace ]]; then
    echo_r "ERROR: streampark.workspace.local: "$workspace" is invalid path, Please reconfigure in application.yml"
    echo_r "NOTE: "streampark.workspace.local" Do not set under APP_HOME($APP_HOME). Set it to a secure directory outside of APP_HOME. "
    exit 1
  fi
  if [[ ! -w $workspace ]] || [[ ! -r $workspace ]]; then
    echo_r "ERROR: streampark.workspace.local: "$workspace" Permission denied! "
    exit 1
  fi

  if [ "${HADOOP_HOME}"x == ""x ]; then
    echo_y "WARN: HADOOP_HOME is undefined on your system env, please check it."
  else
    echo_w "Using HADOOP_HOME: ${HADOOP_HOME}"
  fi

  # classpath options:
  # 1): java env (lib and jre/lib)
  # 2): StreamPark
  # 3): hadoop conf
  # shellcheck disable=SC2091
  local APP_CLASSPATH=".:${JAVA_HOME}/lib:${JAVA_HOME}/jre/lib"
  # shellcheck disable=SC2206
  # shellcheck disable=SC2010
  local JARS=$(ls "$APP_LIB"/*.jar | grep -v "$APP_LIB/streampark-flink-shims_.*.jar$")
  # shellcheck disable=SC2128
  for jar in $JARS; do
    APP_CLASSPATH=$APP_CLASSPATH:$jar
  done

  if [[ -n "${HADOOP_CONF_DIR}" ]] && [[ -d "${HADOOP_CONF_DIR}" ]]; then
    echo_w "Using HADOOP_CONF_DIR: ${HADOOP_CONF_DIR}"
    APP_CLASSPATH+=":${HADOOP_CONF_DIR}"
  else
    APP_CLASSPATH+=":${HADOOP_HOME}/etc/hadoop"
  fi

  # shellcheck disable=SC2034
  # shellcheck disable=SC2006
  local vmOption=`$_RUNJAVA -cp "$APP_CLASSPATH" $PARAM_CLI --vmopt`
  local JAVA_OPTS=""" $vmOption $DEFAULT_OPTS $DEBUG_OPTS """

  eval $NOHUP $_RUNJAVA $JAVA_OPTS \
    -classpath "$APP_CLASSPATH" \
    -Dapp.home="${APP_HOME}" \
    -Dlogging.config="${APP_CONF}/logback-spring.xml" \
    -Dspring.config.location="${PROPER}" \
    -Djava.io.tmpdir="$APP_TMPDIR" \
    $APP_MAIN >> "$APP_OUT" 2>&1 "&"

  local PID=$!
  local IS_NUMBER="^[0-9]+$"
  # Add to pid file if successful start
  if [[ ${PID} =~ ${IS_NUMBER} ]] && kill -0 $PID > /dev/null 2>&1; then
    echo $PID > "$APP_PID"
    # shellcheck disable=SC2006
    echo_g "StreamPark start successful. pid: $PID"
  else
    echo_r "StreamPark start failed."
    exit 1
  fi
}
```
@A-little-bit-of-data
1. After you have built the image (by running ./build.sh), if you hit an error at runtime saying the /opt/streampark_workspace directory is missing, create it and grant the necessary access permissions. Then test locally first (http://localhost:10000/); you should be able to log in and see the project interface. This step verifies that the compiled image itself is fine.
2. Temporarily modify the Java startup parameters in streampark-console/streampark-console-service/src/main/assembly/bin/streampark.sh. The "&" at the end starts the program in the background; remove it so the process stays in the foreground:
```bash
eval $NOHUP $_RUNJAVA $JAVA_OPTS \
  -classpath "$APP_CLASSPATH" \
  -Dapp.home="${APP_HOME}" \
  -Dlogging.config="${APP_CONF}/logback-spring.xml" \
  -Dspring.config.location="${PROPER}" \
  -Djava.io.tmpdir="$APP_TMPDIR" \
  $APP_MAIN >> "$APP_OUT" 2>&1 "&"
```
3. Recompile the project. Just like the local test, it needs the /opt/streampark_workspace directory, so add the line "RUN mkdir -p /opt/streampark_workspace" to the Dockerfile. Then build your image from that Dockerfile and update the image in your Helm chart; see the sketch below. After that it starts successfully.
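A rough sketch of that rebuild-and-redeploy flow; the registry name, chart path, and Helm value keys are assumptions about your environment, not something defined by the project:

```bash
# 1) Add this line to the Dockerfile before building:
#      RUN mkdir -p /opt/streampark_workspace
# 2) Build and push the patched image (names are placeholders):
docker build -t <your-registry>/streampark:2.1.1-fix .
docker push <your-registry>/streampark:2.1.1-fix
# 3) Point the Helm release at the new image (the value keys depend on the chart):
helm upgrade --install streampark ./helm/streampark \
  --set image.repository=<your-registry>/streampark \
  --set image.tag=2.1.1-fix
```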
Additionally, it's worth mentioning that I am using the Kubernetes cluster provided by Docker. If you don't have an existing cluster, you can look at the Helm chart templates for various components in the central Helm chart repository; they can be used together with StreamPark. However, I recommend deploying StreamPark directly on the host machine and deploying the components that are inconvenient to run locally with Docker or Kubernetes. If you run into problems connecting StreamPark on the host machine to components inside a Kubernetes cluster, I highly recommend https://alibaba.github.io/kt-connect/#/. It can effectively help you solve this problem.
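If you go the kt-connect route, the basic usage is a single command (this is just the entry point from its docs; check the linked site for prerequisites and current flags):

```bash
# Starts a local proxy so in-cluster Service names and Pod IPs are reachable from the host.
sudo ktctl connect
```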
Thank you very much for your detailed answer. Following your explanation, I got it working.
Currently we only publish a Docker image for the latest version, and the bug with starting in Docker has been fixed.
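For example, pulling the published image (the image name here is an assumption; check Docker Hub for the tags that actually exist):

```bash
docker pull apache/streampark:latest
```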