
The Consul role used internally does not support Consul versions higher than 1.11.x, since breaking changes were introduced in the config file on the Consul side

Open garry-t opened this issue 1 year ago • 4 comments

The role needs to be updated according to the master branch of https://github.com/ansible-collections/ansible-consul. The fun starts from version 1.12.x.

See:
- https://developer.hashicorp.com/consul/docs/upgrading/upgrade-specific#consul-1-12-x
- https://developer.hashicorp.com/consul/docs/upgrading/upgrade-specific#consul-1-13-x
- https://developer.hashicorp.com/consul/docs/upgrading/upgrade-specific#consul-1-14-x

tls, addresses, and ports are affected by these changes; the master branch of the Consul Ansible role already has them in its configs.
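For context, the breaking change referenced above moved TLS settings from top-level keys (`ca_file`, `cert_file`, `key_file`, `verify_incoming`, `verify_outgoing`, deprecated in 1.12) into a nested `tls` stanza. A minimal sketch of the newer config.json shape, with illustrative file paths:

```json
{
  "tls": {
    "defaults": {
      "ca_file": "/etc/consul/tls/ca.pem",
      "cert_file": "/etc/consul/tls/server.pem",
      "key_file": "/etc/consul/tls/server-key.pem",
      "verify_incoming": true,
      "verify_outgoing": true
    }
  },
  "ports": {
    "https": 8501
  }
}
```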

garry-t avatar Oct 10 '24 07:10 garry-t

We now install the latest available version of Consul from the HashiCorp repository (currently v1.19), and tests show that it works.

  TASK [consul : Looking up latest version of Consul] ****************************
  ok: [10.172.0.21]
  ok: [10.172.0.20]
  ok: [10.172.0.22]
  
...

  TASK [consul : Add hashicorp repository] ***************************************
  changed: [10.172.0.22]
  changed: [10.172.0.21]
  changed: [10.172.0.20]
  
  TASK [consul : Install consul package] *****************************************
  changed: [10.172.0.21]
  changed: [10.172.0.20]
  changed: [10.172.0.22]
...

  TASK [consul : Start Consul] ***************************************************
  changed: [10.172.0.21]
  changed: [10.172.0.20]
  changed: [10.172.0.22]
  
  TASK [consul : Check Consul HTTP API (via TCP socket)] *************************
  ok: [10.172.0.21]
  ok: [10.172.0.20]
  ok: [10.172.0.22]

...


  TASK [deploy-finish : Postgres Cluster info] ***********************************
  ok: [10.172.0.20] => {
      "msg": [
          "+ Cluster: postgres-cluster (7423961020152107459) --+-----------+-----------------+------------------------------------+-----------------+",
          "| Member   | Host        | Role    | State     | TL | Lag in MB | Pending restart | Pending restart reason             | Tags            |",
          "+----------+-------------+---------+-----------+----+-----------+-----------------+------------------------------------+-----------------+",
          "| pgnode01 | 10.172.0.20 | Leader  | running   |  2 |           | *               | max_prepared_transactions: 2000->0 | datacenter: dc1 |",
          "|          |             |         |           |    |           |                 |                                    | key1: value1    |",
          "+----------+-------------+---------+-----------+----+-----------+-----------------+------------------------------------+-----------------+",
          "| pgnode02 | 10.172.0.21 | Replica | streaming |  2 |         0 | *               | max_prepared_transactions: 2000->0 | datacenter: dc1 |",
          "|          |             |         |           |    |           |                 |                                    | key1: value1    |",
          "+----------+-------------+---------+-----------+----+-----------+-----------------+------------------------------------+-----------------+",
          "| pgnode03 | 10.172.0.22 | Replica | streaming |  2 |         0 | *               | max_prepared_transactions: 2000->0 | datacenter: dc1 |",
          "|          |             |         |           |    |           |                 |                                    | key1: value1    |",
          "+----------+-------------+---------+-----------+----+-----------+-----------------+------------------------------------+-----------------+"
      ]
  }
  
  TASK [deploy-finish : Connection info] *****************************************
  ok: [10.172.0.20] => {
      "msg": {
          "address": {
              "primary": "master.postgres-cluster.service.consul",
              "replica": "replica.postgres-cluster.service.consul"
          },
          "password": "tUnMwGfxt3eZzg15C1VGr3MGqdUqxXnG",
          "port": "6432",
          "superuser": "postgres"
      }
  }
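
The "Check Consul HTTP API (via TCP socket)" task above amounts to probing that the API port accepts connections. A minimal standalone sketch of such a probe (a hypothetical helper, not the role's actual code; 8500 is Consul's default HTTP API port):

```python
import socket

def tcp_port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Probe the local Consul HTTP API port (8500 by default);
# prints False when no Consul agent is running locally.
print("consul http api reachable:", tcp_port_open("127.0.0.1", 8500, timeout=1.0))
```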

I don't use Consul in production (I use etcd), so it's hard for me to say how important these configuration changes really are. Could you take on this task and offer a PR?

vitabaks avatar Oct 10 '24 09:10 vitabaks

I need to check from my side as well, since I use this role in a customised way. I'll get back to you a bit later.

garry-t avatar Oct 10 '24 10:10 garry-t

Yes, and I missed your latest release of this role, ~~so this issue is not relevant since you already point to the latest version. But what I'll do is test this latest role against my Consul cluster.~~

garry-t avatar Oct 10 '24 11:10 garry-t

Ok, I've checked the role config you have here: config.json is not ready for Consul versions higher than 1.12.x. To fix the problem we need to use the master version of the Consul role, but the master branch has at least 3 open issues which could potentially impact cluster deployment.
As an example: https://github.com/ansible-collections/ansible-consul/blob/a342eefd6308d92c346324bdcd6506dcf16cd549/templates/config.json.j2#L55-L57

So I'll test and provide some fixes after I finish with the Consul cluster upgrade.

garry-t avatar Oct 10 '24 11:10 garry-t

Started playing with the 2.0 release.

Ran this:

  docker run -d --name pg-console \
    --publish 80:80 \
    --publish 8080:8080 \
    --env PG_CONSOLE_API_URL=http://localhost:8080/api/v1 \
    --env PG_CONSOLE_AUTHORIZATION_TOKEN=secret_token \
    --env PG_CONSOLE_DOCKER_IMAGE=vitabaks/postgresql_cluster:latest \
    --volume console_postgres:/var/lib/postgresql \
    --volume /var/run/docker.sock:/var/run/docker.sock \
    --volume /tmp/ansible:/tmp/ansible \
    --restart=unless-stopped \
    vitabaks/postgresql_cluster_console:2.0.0

Login works, but everything else doesn't. http://localhost:8080/api/v1/settings?offset=0&limit=999999999 returns net::ERR_CONNECTION_REFUSED. What did I miss? @vitabaks

garry-t avatar Nov 08 '24 15:11 garry-t

@garry-t Are you running the console locally or on a dedicated server?

If you have set up the console on a different server, replace 'localhost' with the server's address.
And please make sure that ports 80 and 8080 are open.
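
For example, assuming the console host is reachable at 10.0.0.5 (an illustrative address), only the API URL line of the `docker run` command needs to change:

```
--env PG_CONSOLE_API_URL=http://10.0.0.5:8080/api/v1 \
```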

vitabaks avatar Nov 08 '24 16:11 vitabaks

> @garry-t Are you running the console locally or on a dedicated server?
>
> If you have set up the console on a different server, replace 'localhost' with the server's address. And please make sure that ports 80 and 8080 are open.

Ok, thanks. It works now.

garry-t avatar Nov 08 '24 17:11 garry-t

https://github.com/vitabaks/postgresql_cluster/pull/806

garry-t avatar Nov 14 '24 13:11 garry-t

BTW I found that

  ok: [10.172.0.20] => {
      "msg": {
          "address": {
              "primary": "master.postgres-cluster.service.consul",
              "replica": "replica.postgres-cluster.service.consul"
          },
          "password": "tUnMwGfxt3eZzg15C1VGr3MGqdUqxXnG",
          "port": "6432",
          "superuser": "postgres"
      }
  }
  

The password printed here is not what I set; so far I can't figure out where this value comes from.

garry-t avatar Nov 14 '24 13:11 garry-t

`patroni_superuser_password`

If not defined, it will be generated automatically during deployment.
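
For illustration, auto-generating an undefined password in an Ansible role is commonly done with the `password` lookup; a hypothetical sketch (not necessarily the role's actual task):

```yaml
# Generate a random 32-character password only when the variable is empty or undefined.
- name: Generate patroni superuser password if not defined
  ansible.builtin.set_fact:
    patroni_superuser_password: "{{ lookup('password', '/dev/null chars=ascii_letters,digits length=32') }}"
  when: patroni_superuser_password | default('') | length == 0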

vitabaks avatar Nov 14 '24 13:11 vitabaks

> `patroni_superuser_password`
>
> If not defined, it will be generated automatically during deployment.

It is defined, that's why I'm confused. But I'll debug a bit more.

garry-t avatar Nov 14 '24 14:11 garry-t

merged

vitabaks avatar Nov 14 '24 19:11 vitabaks