biemond-orawls
Adding 'cluster' property to resource 'wls_server'.
In issue #317, I mentioned that it was impossible to configure a WebLogic cluster without knowing a priori all the servers and machines in the infrastructure.
This update adds a "cluster" property to the "wls_server" resource, allowing each server to declare which cluster it will belong to. This partially solves the problem: a Puppet manifest can declare on the admin server which clusters are available and their main properties, and on each managed server, which cluster it will belong to.
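For illustration, here is a minimal sketch of the intended usage, split across the two node types (everything other than the new cluster property is an assumption, with most attributes omitted for clarity):

```puppet
# On the admin server: declare the cluster and its main properties,
# without listing the managed servers that will join it.
wls_cluster { 'WebCluster':
  ensure => 'present',
}

# On each managed server node: the new 'cluster' property names the
# cluster this server will belong to.
wls_server { 'wlsServer1':
  ensure  => 'present',
  cluster => 'WebCluster',
}
```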
There's a remaining issue with updating a wls_server's cluster and machine information, since that requires a managed server restart before activation. This contribution still doesn't solve those edge cases.
Coverage remained the same at 38.714% when pulling 1cb5166797483b1882676da18cbf979eeb1801bc on diegoliber:master into 59e04a80aa78fa72b162ca4948f66d53feeb3a03 on biemond:master.
Hi,
Can you check #200? It should do the same thing, only it delays it until cluster creation. With this you can create the clusters first, then the servers, and run it again, or first the servers and then the clusters. It works in every situation.
You can do this on a wls_server:
server_parameters: 'WebCluster'
and on a wls_cluster use this:
servers: 'inherited'
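Roughly like this, as a sketch (other attributes of both resources are omitted):

```puppet
# On each managed server node: the server names the cluster it wants
# to join via server_parameters.
wls_server { 'wlsServer1':
  ensure            => 'present',
  server_parameters => 'WebCluster',
}

# On the admin server: with servers => 'inherited' the cluster picks up
# every server that referenced it, instead of needing a fixed server list.
wls_cluster { 'WebCluster':
  ensure  => 'present',
  servers => 'inherited',
}
```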
Hi,
I understand the features provided by #200. I just believe it's a fragile implementation, since it depends on a specific format of the Notes attribute on the managed server (which is just a string of comma-separated values), and it requires a puppet agent run on the managed server followed by a puppet run on the admin server, in that order. I still have to check whether this solution solves the problem of requiring a managed server restart.
Ok,
but you can also add the cluster config to all nodes, so there's no need to run it again on the admin server.
So basically you want to do one-time provisioning on all nodes: on the admin server, just a domain with some empty clusters; after that, each server node creates a machine and a managed server that adds itself to the cluster. This could also work for persistence and a JMS server.
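As a rough sketch of that per-node run (the machine attribute linking the server to its machine is an assumption for illustration, and other attributes are omitted):

```puppet
# Run once on each server node: register the machine, then the managed
# server, which adds itself to the pre-created cluster.
wls_machine { 'machine1':
  ensure => 'present',
}

wls_server { 'wlsServer1':
  ensure            => 'present',
  machine           => 'machine1',     # assumed attribute tying the server to its machine
  server_parameters => 'WebCluster',
}
```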
How do you want to handle datasources and JMS modules? You also need something to add the server as a datasource target, or a JMS server to a subdeployment. If you don't like this approach and don't use notes, it will remove the other targets.
Also, I have managed server control, which can subscribe to managed server changes and do an automatic restart when the config changes. This could solve your restart problem.
Maybe we should make a shared Vagrant box where we can build out this use case.
Thanks