pega-helm-charts
Issue setting up backingservices on AKS
Describe the bug: Hi, I am trying to install SRS using this doc https://github.com/pegasystems/pega-helm-charts, but it is not working.
To Reproduce: Run the following commands on a brand new AKS cluster:
$ helm repo add pega https://pegasystems.github.io/pega-helm-charts
$ helm inspect values pega/backingservices > backingservices.yaml
$ kubectl create namespace pegabackingservices
$ helm install backingservices pega/backingservices --namespace pegabackingservices --values backingservices.yaml
Errors are visible in the container logs:
"Error starting Micronaut server:
Bean definition [com.pega.fnx.search.storage.SingleStorageManager] could not be loaded:
Error instantiating bean of type [com.pega.fnx.search.storage.SingleStorageManager]:
Cannot create an Elasticsearch connector for endpoint 'http://elasticsearch-master.pegabackingservices.svc:9200'",
"logger_name":"io.micronaut.runtime.Micronaut","thread_name":"main","level":"ERROR","level_value":40000,"stack_trace":
"io.micronaut.context.exceptions.BeanInstantiationException:
Bean definition [com.pega.fnx.search.storage.SingleStorageManager] could not be loaded:
Error instantiating bean of type [com.pega.fnx.search.storage.SingleStorageManager]:
Cannot create an Elasticsearch connector for endpoint 'http://elasticsearch-master.pegabackingservices.svc:9200'\n\t
at io.micronaut.context.DefaultBeanContext.initializeContext(DefaultBeanContext.java:1549)\n\t
at io.micronaut.context.DefaultApplicationContext.initializeContext(DefaultApplicationContext.java:220)\n\t
at io.micronaut.context.DefaultBeanContext.readAllBeanDefinitionClasses(DefaultBeanContext.java:2780)\n\t
at io.micronaut.context.DefaultBeanContext.start(DefaultBeanContext.java:233)\n\t
at io.micronaut.context.DefaultApplicationContext.start(DefaultApplicationContext.java:166)\n\t
at io.micronaut.runtime.Micronaut.start(Micronaut.java:64)\n\t
at io.micronaut.runtime.Micronaut.run(Micronaut.java:299)\n\tat io.micronaut.runtime.Micronaut.run(Micronaut.java:285)\n\t
at com.pega.fnx.search.SearchApplication.main(SearchApplication.java:37)\n
Caused by: io.micronaut.context.exceptions.BeanInstantiationException:
Error instantiating bean of type [com.pega.fnx.search.storage.SingleStorageManager]:
Cannot create an Elasticsearch connector for endpoint 'http://elasticsearch-master.pegabackingservices.svc:9200'\n\t
at io.micronaut.context.DefaultBeanContext.doCreateBean(DefaultBeanContext.java:1927)\n\t
at io.micronaut.context.DefaultBeanContext.createAndRegisterSingletonInternal(DefaultBeanContext.java:2647)\n\t
at io.micronaut.context.DefaultBeanContext.loadContextScopeBean(DefaultBeanContext.java:2183)\n\t
at io.micronaut.context.DefaultBeanContext.initializeContext(DefaultBeanContext.java:1543)\n\t...
8 common frames omitted\nCaused by: com.pega.fnx.search.storage.StorageException:
Cannot create an Elasticsearch connector for endpoint 'http://elasticsearch-master.pegabackingservices.svc:9200'\n\t
at com.pega.fnx.search.storage.es.ElasticsearchConnectorFactory.createConnector(ElasticsearchConnectorFactory.java:80)\n\t
at com.pega.fnx.search.storage.SingleStorageManager.initializeConnection(SingleStorageManager.java:52)\n\t
at com.pega.fnx.search.storage.$SingleStorageManagerDefinition.initialize(Unknown Source)\n\t
at com.pega.fnx.search.storage.$SingleStorageManagerDefinition.build(Unknown Source)\n\t
at io.micronaut.context.DefaultBeanContext.doCreateBean(DefaultBeanContext.java:1898)\n\t...
11 common frames omitted\nCaused by: java.io.IOException: elasticsearch-master.pegabackingservices.svc: System error\n\t
at org.elasticsearch.client.RestClient.extractAndWrapCause(RestClient.java:854)\n\t
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:259)\n\t
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:246)\n\t
at org.elasticsearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:1613)\n\t
at org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:1598)\n\t
at org.elasticsearch.client.RestHighLevelClient.performRequestAndParseEntity(RestHighLevelClient.java:1565)\n\t
at org.elasticsearch.client.RestHighLevelClient.info(RestHighLevelClient.java:766)\n\t
at com.pega.fnx.search.storage.es.ESClientFacade$$Lambda$859.0000000029813B30.apply(Unknown Source)\n\t
at com.pega.fnx.search.storage.es.ESClientFacadeUtils.handleRuntimeException(ESClientFacadeUtils.java:44)\n\t
at com.pega.fnx.search.storage.es.ESClientFacade.info(ESClientFacade.java:109)\n\t
at com.pega.fnx.search.storage.es.ElasticsearchConnector.fetchServerInformation(ElasticsearchConnector.java:138)\n\t
at com.pega.fnx.search.storage.es.ElasticsearchConnector.
Expected behavior: Running backing services.
Chart version: 1.2.0
Server (if applicable):
- AKS version 1.20
Additional context: We can see an "Inet6AddressImpl" in the trace. We don't know whether this means it is trying to use IPv6, but if it is, that is wrong, since IPv6 is still not available in most Kubernetes installations. It needs to be able to fall back to IPv4.
Thanks for your help!
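One way to test this IPv6 theory (just an idea on my side, not a confirmed fix) would be to ask the JVM to prefer the IPv4 stack by setting JAVA_TOOL_OPTIONS on the SRS container; srs-service below is a placeholder for whatever name the chart gives the deployment:
$ kubectl set env deployment/srs-service -n pegabackingservices JAVA_TOOL_OPTIONS="-Djava.net.preferIPv4Stack=true"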
@MarcAntoine-Niggemann Could you please check the health of the elasticsearch-master statefulset's pods? By default we deploy a 3-node Elasticsearch cluster as an elasticsearch-master statefulset, and the service should be available to SRS through the elasticsearch-master service.
You can verify that the Elasticsearch cluster formed correctly from the Elasticsearch pod container logs: look for the node discovery messages and check whether the current Elasticsearch node is able to detect the other Elasticsearch nodes.
If the cluster is well formed, we need to check what is preventing the service on port 9200 from being reachable from the SRS pods; a few example commands are sketched below.
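For reference, those checks could look roughly like this from a workstation with kubectl access (resource names assume the default internal Elasticsearch install in the pegabackingservices namespace):
$ kubectl get statefulset elasticsearch-master -n pegabackingservices
$ kubectl get pods -n pegabackingservices | grep elasticsearch-master
# Look for node discovery / master election messages in the logs
$ kubectl logs elasticsearch-master-0 -n pegabackingservices | grep -iE "discovery|master"
# Check that the service answers on port 9200 from inside the cluster
$ kubectl run es-check --rm -it --restart=Never --image=curlimages/curl --command -n pegabackingservices -- curl -s http://elasticsearch-master.pegabackingservices.svc:9200/_cluster/health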
@MarcAntoine-Niggemann Please update this issue if it has been resolved.
I have the same issue. I think it relates to pods in AKS not being able to access the CoreDNS service before they are in the 'Ready' state.
Hi,
I think the issue comes from not following this instruction, but I don't understand it: To use an internal Elasticsearch cluster (srs.srsStorage.provisionInternalESCluster:true) for your deployment, you must run $ make es-prerequisite NAMESPACE=<NAMESPACE_USED_FOR_DEPLOYMENT>.
Where should we run this command? From inside a container, and if so, which one?
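My guess (not verified) is that it has to be run on our own workstation, from the root of a local clone of this repository where the Makefile lives, with kubectl already pointing at the target cluster, something like:
$ git clone https://github.com/pegasystems/pega-helm-charts.git
$ cd pega-helm-charts
$ make es-prerequisite NAMESPACE=pegabackingservices
But please confirm whether that is the intended way.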
Thanks for clarifying this.
@MarcAntoine-Niggemann: Can you try specifying the clusterIP directly in the SRS deployment YAML? The same issue happened to me on GKE, and I hope the same resolution works:
Step 1: Scale the SRS deployment down to replicas=0.
Step 2: Replace "elasticsearch-master.xxxx.svc" with the cluster IP in the deployment YAML.
Step 3: Scale the replicas back up to 1.
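Roughly, assuming the SRS deployment is called srs-service (check the actual name with kubectl get deploy -n <namespace>), the steps could look like:
# Step 1: scale SRS down
$ kubectl scale deployment srs-service --replicas=0 -n pegabackingservices
# Step 2: look up the cluster IP of the elasticsearch-master service ...
$ kubectl get svc elasticsearch-master -n pegabackingservices -o jsonpath='{.spec.clusterIP}'
# ... then edit the deployment and replace elasticsearch-master.<namespace>.svc with that IP
$ kubectl edit deployment srs-service -n pegabackingservices
# Step 3: scale SRS back up
$ kubectl scale deployment srs-service --replicas=1 -n pegabackingservices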
@MarcAntoine-Niggemann @paveldokov is this still an issue?
@MarcAntoine-Niggemann @paveldokov is this still an issue? Can you update the latest status?
Hello,
Please close, we managed to resolve it.
Kind regards, Pavel