wsadmin Script to set WebSphere JVM Properties
How can we use a wsadmin script to set WebSphere JVM properties? We use the websphere-traditional:profile image.
Thanks, Chris
Hi Chris - when does the script need to run? Is it something that you would do at build time, or does it need to be done at runtime to take into account some environment-specific properties?
Hi David,
The script must run at runtime to take some environment-specific properties into account. However, these properties should then be captured in the image; they should not be set each time the container is started.
Thanks, Chris
If you take a look at update_hostname in the start_server script you'll see one possibility. This gets run at container startup and runs a Jython script with conntype NONE, passes in a variable to the script, and touches a file so that it isn't run again if the container is stopped and restarted.
Does this work even without conntype NONE?
At that point, the server is not running so, no, it would not work. But then I don't think you need it to be running for JVM args. You could start the server and then run the script but then, in the case of JVM args, you'd have to restart the server again. All possible but not great for your startup time.
This kind of coincides with how we use WAS currently (non-Docker): we install WAS on a VM, and then we have a set of scripts to deploy applications, stop them, start them, etc. We may need to add a new JVM arg to a server, so we use wsadmin.sh to modify the JVM args. We don't use the GUI. Our scripts make a series of wsadmin.sh calls for whatever work they need to do.
If I now have WAS running in a Docker container, from the HOST I still need to run the wsadmin.sh that's in the container. I think "docker exec" may work, but maybe there's a fundamental flaw in the way we do it?
If a user needs an enterprise app restarted, we essentially do "wsadmin stop" and "wsadmin start" (using the proper syntax and parameters, obviously). If we are running WAS in Docker, should we NOT be running the wsadmin from the installed WebSphere but be doing something else instead?
We have solved it by running the script with docker exec and then performing a docker commit. We then assign the result a new Docker tag.
That's effectively all a docker build is - a series of docker run commands with a commit after each. Whether you want to start doing things like wsadmin stop inside a container depends on how you're treating it. If you're just treating your container as a lightweight VM then that's fine. If you've really bought into the Docker way then you'd just restart the container, or even throw it away and spin up a new one. That's the point at which you realise Liberty would be a much better fit for this pattern!
My apologies for coming into this discussion late - but its contents match perfectly what we are trying to do. Our end goal is to containerize some of our WAS applications, but we don't want to build "environment-specific" images (i.e. images which have specific resource endpoints baked into them: MQ destinations, JDBC URLs/usernames/passwords, etc.).
As someone who is relatively new to the Docker world I'm looking for some best practices as to what the best way to organize/create/deploy/run these images might be. My thought was we would create an image (from the ibmcom/websphere-traditional image) and deploy the app into it, but then once the container is running do docker exec wsadmin stuff with a script that contains the environment-specific information. I think the problem with that though is that if the runtime platform then needs to scale the image horizontally (i.e. add more instances) then how does it know it needs to do that? Plus the script it executes would have to already exist on the Docker image (or the image has a volume mounted to it where the script can be placed after image creation).
What I want to try & avoid doing is having to build (for each application) a DEV / INTEGRATION / QA / PERFORMANCE / PRODUCTION version of the image.
Just like traditional artifacts (.war/.ear files), images should be environment-agnostic, meaning I should be able to take the same binary and move it from one environment to another without having to rebuild it. Shouldn't the same principles apply to Docker images?
Any help/guidance would be much appreciated! Thanks in advance!
Hi @edeandrea - I think you're 90% of the way there. I'd just suggest that, rather than doing the wsadmin pieces via docker exec you should modify the start_server script that is run when the container starts so that it performs those actions (either before or after starting the server depending on whether you need a running server). You then just need to work out how to get the environment specific information (or indeed the whole script) in to the container. Depending on how you're running your containers that might be just mounting a volume or using something like a ConfigMap in Kubernetes. (This is probably a good point to advocate trying to migrate your application to Liberty where performing such config updates at startup time is much easier.)
Thanks @davidcurrie for the quick reply. We have a pretty massive legacy portfolio, some of which requires full-blown WAS for lots of different reasons. We do have efforts going on to re-build some of our stack with more cloud-native/friendly alternatives, but at the end there are still some things which for lots of other reasons (some technical, some not) need to remain on full-blown WAS.
I suppose we could build a custom start_server script which looks for things via environment variables, which we can then pass in via -e parameters when the container starts?
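Something like the following is what I have in mind for the wsadmin Jython script that the custom start_server would invoke. This is only a sketch; the variable names (`JDBC_URL`, `MQ_HOST`) and defaults are hypothetical, not anything defined by the image.

```python
import os

# Hypothetical wsadmin Jython snippet, run from a customized start_server via
# `wsadmin.sh -lang jython -conntype NONE -f configure_resources.py`.
# The values come from `docker run -e JDBC_URL=... -e MQ_HOST=...`;
# both names and fallback defaults are examples only.
jdbc_url = os.environ.get('JDBC_URL', 'jdbc:db2://localhost:50000/SAMPLE')
mq_host = os.environ.get('MQ_HOST', 'localhost')

# ...these values would then feed AdminConfig/AdminTask calls that create
# the environment-specific data sources and MQ resources...
```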
We haven't come up with what we'd consider "best practices" yet, but if we do I'll be sure to give input on the thread. Let me know what you end up doing.
At David's suggestion, I am first looking into using the Liberty profile but, like you, we have hundreds of legacy applications which are on full-blown WAS.
Every night, we deploy our app at least 200 times, on 50+ VMs, each with WebSphere installed. Each WAS has about 5 servers (JVMs), one for each running instance of the app. The thought of running WebSphere 200 times (if each container has WAS in it) just seems "wrong" to me.
@edeandrea - environment variables are one option. The downside they have is that they are readily retrieved from the container meta-data and the Docker APIs so may not be suitable if you have sensitive variables e.g. a database password.
@davidcurrie We were able to pass environment variables. Since we are creating data sources and MQ resources at runtime, we pass the data source user ID, password, and other MQ details as env variables, and we have the container up and running without any issues. However, we see these environment variables in clear text in the WebSphere logs, which would be a security violation for us. Is there any other way to pass these? I just want to check for more options.
@PeramSubhash - are you able to mount in a file with the necessary variables and then execute a wsadmin script at container startup to configure the appropriate resources based on the content of that file?
@davidcurrie Until yesterday I ran the scripts by passing variables with -e on the docker run command, and WebSphere wrote all the variables, including username/password, to the logs. Instead of passing them as -e, I imported them into the script using configparser and that did the trick: my scripts now run right after the container starts up and the logs look clean.
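For anyone curious, the pattern is roughly the following. The file path, section, and key names are made up for illustration; note that on wsadmin's Jython 2.x the module is spelled `ConfigParser` (with `readfp`) rather than Python 3's `configparser`.

```python
import configparser  # on wsadmin's Jython 2.x this module is named ConfigParser
import io

# Hypothetical contents of a file mounted into the container, e.g. at
# /etc/websphere/datasource.ini. Because nothing is passed on the command
# line or via -e, no credentials end up in the WebSphere logs.
sample = """[datasource]
username = dbuser
password = s3cret
"""

config = configparser.ConfigParser()
# In the container you would instead use:
#   config.read('/etc/websphere/datasource.ini')
config.read_file(io.StringIO(sample))

db_user = config.get('datasource', 'username')
db_password = config.get('datasource', 'password')
# db_user / db_password can now be handed to AdminConfig/AdminTask calls
```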
@PeramSubhash can you share your Dockerfile and your entrypoint.sh?
@cniweb and anyone interested in creating data sources, connection factories, etc.
I've managed to change credentials and create or edit data sources at runtime using properties-based configuration.
If the devs are OK with it, I can make a pull request with these changes.
First, check the docs about config properties files. https://www.ibm.com/support/knowledgecenter/en/SSAW57_8.5.5/com.ibm.websphere.nd.multiplatform.doc/ae/rxml_7propbasedconfig.html
Dockerfile
- websphere user and group can be changed on build via parameter;
- jython scripts to deploy or undeploy apps offline, no need to start websphere first;
- create a directory /etc/websphere to store config properties.
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y openssl wget procps
ARG USER=websphere
ARG GROUP=websphere
ARG TAR_URL
COPY start_server create_profile create_and_start modify_password \
     updateHostName.py updatePassword.py ear_deploy_offline.py ear_undeploy_offline.py /work/
RUN groupadd $GROUP --gid 510 \
 && useradd $USER -g $GROUP -m --uid 500 \
 && mkdir /etc/websphere \
 && chown -R $USER:$GROUP /work /opt /etc/websphere
USER $USER
ENV PATH /opt/IBM/WebSphere/AppServer/bin:$PATH
RUN wget -q -O - $TAR_URL | tar xz
CMD ["/work/create_and_start"]
ear_deploy_offline.py
# -*- coding: utf-8 -*-
import os
import sys

opts = ['-cell', 'DefaultCell01', '-defaultbinding.virtual.host', 'default_host', '-usedefaultbindings']
params = sys.argv
# Options are passed as --name value pairs and converted to wsadmin's -name form
while params and params[0].startswith('--'):
    opts.append(params[0][1:])
    opts.append(params[1])
    params = params[2:]
total = len(params)
success = 0
for ear in params:
    print ''
    try:
        AdminApp.install(ear, opts)
        AdminConfig.save()
        success += 1
    except:
        print 'Can\'t deploy ' + ear
if success < total:
    os._exit(200)
ear_undeploy_offline.py (I don't know why anyone would bother to uninstall an application in a container image, but there you go).
# -*- coding: utf-8 -*-
import os
import sys

total = len(sys.argv)
success = 0
for appname in sys.argv:
    print ''
    try:
        AdminApp.uninstall(appname)
        AdminConfig.save()
        success += 1
    except:
        print 'Can\'t undeploy application ' + appname
if success < total:
    os._exit(200)
start_server script The most important change I've made was to start_server (which is called by the entrypoint create_and_start). It now scans /etc/websphere for any *.conf file and applies it offline, before starting WebSphere.
#!/bin/bash
#####################################################################################
#                                                                                   #
#  Script to start the server and wait.                                             #
#                                                                                   #
#  Usage : start_server                                                             #
#                                                                                   #
#####################################################################################

PROFILE_NAME=${PROFILE_NAME:-"AppSrv01"}
SERVER_NAME=${SERVER_NAME:-"server1"}

update_hostname()
{
  wsadmin.sh -lang jython -conntype NONE -f /work/updateHostName.py ${NODE_NAME:-"DefaultNode01"} $(hostname)
  touch /work/hostnameupdated
}

start_server()
{
  echo "Starting server ..................."
  /opt/IBM/WebSphere/AppServer/profiles/$PROFILE_NAME/bin/startServer.sh $SERVER_NAME
}

stop_server()
{
  echo "Stopping server ..................."
  kill -s INT $PID
}

config_properties()
{
  for i in /etc/websphere/*.conf; do
    fileName=$(basename $i)
    echo "====================================================================================="
    echo $fileName
    echo "====================================================================================="
    wsadmin.sh -lang jython -conntype NONE <<EOF
AdminTask.applyConfigProperties('[-propertiesFileName $i -reportFileName /tmp/${fileName}.ext ]')
AdminConfig.save()
quit
EOF
    echo ""
    echo "====================================================================================="
  done
}

if [ ! -f "/work/passwordupdated" ]; then
  /work/modify_password
fi

if [ "$UPDATE_HOSTNAME" = "true" ] && [ ! -f "/work/hostnameupdated" ]; then
  update_hostname
fi

count=$(ls /etc/websphere/*.conf 2>/dev/null | wc -l)
if [ $count != 0 ]; then
  config_properties
fi

trap "stop_server" TERM INT
start_server || exit $?
PID=$(ps -C java -o pid= | tr -d " ")
tail -F /opt/IBM/WebSphere/AppServer/profiles/$PROFILE_NAME/logs/$SERVER_NAME/SystemOut.log --pid $PID -n +0 &
tail -F /opt/IBM/WebSphere/AppServer/profiles/$PROFILE_NAME/logs/$SERVER_NAME/SystemErr.log --pid $PID -n +0 >&2 &
while [ -e "/proc/$PID" ]; do
  sleep 1
done
This will apply any config property you want at runtime. But if you want to apply some of them on image build time, this script might come in handy:
for i in /work/*.conf; do
  fileName=$(basename $i)
  echo "====================================================================================="
  echo $fileName
  echo "====================================================================================="
  wsadmin.sh -lang jython -conntype NONE <<EOF
AdminTask.applyConfigProperties('[-propertiesFileName $i -reportFileName /tmp/${fileName}.ext ]')
AdminConfig.save()
quit
EOF
  echo ""
  echo "====================================================================================="
done
DO NOT ADD any config properties file to /etc/websphere at build time, otherwise it will be applied every time a container starts, which will slow down your startup time.
At build time, add your config properties to /work.
At runtime, mount a volume with any additional property files to /etc/websphere (e.g. for changing JAAS passwords or a data source host at runtime).
Now, after you've built your base websphere image, you can build your app images from it.
Add this to your app Dockerfile to deploy your apps.
RUN wsadmin.sh -lang jython -conntype NONE -f /work/ear_deploy_offline.py /path/to/your/app.ear
Now... I'll admit that this property-based configuration stuff isn't very easy. I won't share the files I've generated because they are too verbose, long, tedious and full of dependencies.
Instead, I suggest you go to any available WebSphere installation, extract the existing configs, apply them to your container, test, and work your way up from there, trying to figure out which dependencies you'll need. For example:
wsadmin.sh
AdminConfig.list('DataSource')
will get me a list of all data sources created on WebSphere.
This will extract a portable file you can apply on any other WebSphere:
wsadmin.sh
AdminTask.extractConfigProperties('ATL_DS(cells/DefaultCell01|resources.xml#DataSource_1529698314152)', '[-propertiesFileName /tmp/data-source.conf -options [[PortablePropertiesFile true]] ]')
You can also run the official websphere image, exposing the admin port, create everything through GUI, the way you're probably used to, open up a shell to the running container and try to extract the configs from the objects you've created through the GUI.
To create a datasource, you'll need to:
- Create JDBCProvider
- Create another JDBCProvider for XA
- Create JAASAuthData (with the user ID and password in XOR+Base64 format; I'm sure you already know some sites that do the conversion for you)
- Then finally create your DataSource, which will reference all the previous objects I've mentioned.
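As an alternative to those conversion sites: WebSphere's `{xor}` format is commonly described as each byte XORed with `_` (0x5f) and then Base64-encoded, so it's easy to script yourself. A small Python 3 sketch under that assumption (on wsadmin's Jython 2.x the byte handling would need minor changes):

```python
import base64

def was_xor_encode(password):
    # Assumption: the {xor} scheme XORs each byte with '_' (0x5f)
    # and Base64-encodes the result.
    xored = bytes(b ^ 0x5F for b in password.encode('utf-8'))
    return '{xor}' + base64.b64encode(xored).decode('ascii')

def was_xor_decode(encoded):
    # Reverse of the above; XOR is its own inverse.
    raw = base64.b64decode(encoded[len('{xor}'):])
    return bytes(b ^ 0x5F for b in raw).decode('utf-8')

print(was_xor_encode('admin'))  # -> {xor}PjsyNjE=
```

The encoded string can then be dropped into a JAASAuthData properties file or script. Keep in mind this is obfuscation, not encryption.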
Check what you have on wsadmin and extract everything to see what you'll need to change for your container.
wsadmin.sh
AdminConfig.list('JDBCProvider')
AdminConfig.list('JAASAuthData')
AdminConfig.list('DataSource')
I hope this helps.
Also, here's a sample deployment on kubernetes, using the image I wrote and configMap to store the config properties files.
kind: Deployment
apiVersion: apps/v1
metadata:
  name: my-app-backend
  namespace: qa-ns
  labels:
    k8s-app: my-app-backend
    kubernetes.io/cluster-service: 'true'
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: my-app-backend
  template:
    metadata:
      labels:
        k8s-app: my-app-backend
    spec:
      volumes:
        - name: my-app-config-properties-vol
          configMap:
            name: my-app-backend-properties
            defaultMode: 420
      containers:
        - name: my-app-backend
          image: 'private.registry.local:30000/my-app-backend:3.0.0'
          ports:
            - name: http
              containerPort: 9080
              protocol: TCP
            - name: console
              containerPort: 9043
              protocol: TCP
            - name: https
              containerPort: 9443
              protocol: TCP
          resources:
            limits:
              cpu: '1'
              memory: 4000Mi
            requests:
              cpu: 50m
              memory: 400Mi
          volumeMounts:
            - name: my-app-config-properties-vol
              mountPath: /etc/websphere
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: Always
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      securityContext: {}
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 100%
      maxSurge: 100%
Thanks a lot for the info @DSTOLF, this is great. Please feel free to make a PR and we'll definitely take a look. I am interested in learning more about your Kubernetes environment - which k8s framework are you deploying tWAS base containers on? If you want to continue the chat offline, please send me a ping at [email protected]
CC @bensonlatibm (maybe you can guide this PR?)
thanks @DSTOLF. We're planning on adding a new doc page explaining how to configure apps and your contributions fit very well in that area. @leochr will be getting that going.
@cniweb
Dockerfile
FROM ibmcom/websphere-traditional:8.5.5.13-profile
USER root
COPY config_jvm.py /tmp
RUN chmod 0755 /tmp/config_jvm.py && chown was:was /tmp/config_jvm.py
USER was
ENV PATH $PATH:/opt/IBM/WebSphere/AppServer/bin/
RUN wsadmin.sh -lang jython -conntype NONE -f /tmp/config_jvm.py
config_jvm.py
# -*- coding: utf-8 -*-
import os
AdminTask.setJVMMaxHeapSize(['-serverName', 'server1', '-nodeName', 'DefaultNode01', '-maximumHeapSize', '3072'])
AdminTask.setJVMInitialHeapSize(['-serverName', 'server1', '-nodeName', 'DefaultNode01', '-initialHeapSize', '2048'])
os._exit(0)
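Since the original question was about JVM properties in general: beyond the heap sizes, `AdminTask.setGenericJVMArguments` can set arbitrary JVM arguments in the same script. A hedged sketch; the server/node names match the example above, but the argument string is purely illustrative, and the `AdminTask` calls are shown as comments because they only exist inside wsadmin:

```python
# Sketch: setting arbitrary JVM arguments alongside the heap sizes above.
# The argument string below is an example only.
server_name = 'server1'
node_name = 'DefaultNode01'
generic_args = '-Dcom.example.environment=QA -verbose:gc'  # hypothetical

task_args = ['-serverName', server_name,
             '-nodeName', node_name,
             '-genericJvmArguments', generic_args]

# Inside wsadmin (e.g. `wsadmin.sh -lang jython -conntype NONE -f ...`
# at image build time) you would then run:
# AdminTask.setGenericJVMArguments(task_args)
# AdminConfig.save()
```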
@DSTOLF
Do you know the parameters for AdminConfig.create for a DataSource? I'm stuck on this part.
I already have the parts for the JDBC provider and JAAS auth data, but I still cannot find the proper attributes for the DataSource:
print "Creating JDBC provider"
provider = AdminConfig.create(
    'JDBCProvider', security,
    [["classpath", "/opt/IBM/WebSphere/AppServer/java/jre/lib/ext/ojdbc8.jar"],
     ["implementationClassName", "oracle.jdbc.pool.OracleConnectionPoolDataSource"],
     ["name", "Oracle JDBC Driver"],
     ["description", "Oracle JDBC Driver"]])
AdminConfig.save()

print "Creating Auth Alias"
authId = AdminConfig.create('JAASAuthData', AdminConfig.getid('/Security:/'),
    [['alias', 'Master DB Security Alias'], ['userId', 'admin'], ['password', 'password123']])
AdminConfig.save()

# Part giving the error:
AdminConfig.create('DataSource', AdminConfig.getid('/Node:DefaultNode01/Server:server1/JDBCProvider:Oracle JDBC Driver'), [
    ['name', 'MASTER_DATASOURCE'],
    ['jndiName', 'MasterDataSource'],
    ['provider', provider],
    ['authDataAlias', 'Master DB Security Alias'],
    ['datasourceHelperClassname', 'com.ibm.websphere.rsadapter.Oracle11gDataStoreHelper'],
])
Maybe this sample helps you, @knightdave? https://github.com/38leinaD/docker-images/blob/master/was-9-jdbc/jdbc.py#L33 - it's from @38leinaD