
Unable to SSH into your server after a reboot

Open emanfeah opened this issue 2 years ago • 13 comments

Hello again,

I don't know why, but when I reboot a replica server I can't SSH into it.

The connection to port 22 is refused after the reboot; maybe something in the playbook causes it.

emanfeah avatar Jul 12 '23 11:07 emanfeah

Hmm, I do not have such a problem.

Try connecting with the ssh -v option to get detailed output and a trace that can help identify possible problems.

Can you connect to the server in another way, for example via a console in your cloud platform or hypervisor? That would let you check the system logs and the status of the sshd service.

By the way, have you enabled the firewall_enabled_at_boot variable for configuring iptables?

vitabaks avatar Jul 12 '23 12:07 vitabaks

```yaml
firewall_enabled_at_boot: true  # 'true' to configure the firewall (iptables)

firewall_allowed_tcp_ports_for:
  master:
    - "8008"
    - "5432"
    - "6432"
  replica:
    - "8008"
    - "5432"
    - "6432"
  pgbackrest: []
  postgres_cluster:
    - "{{ ansible_ssh_port | default(22) }}"
    - "{{ postgresql_port }}"
    - "{{ pgbouncer_listen_port }}"
    - "8008"
    - "19999"  # Netdata
#    - "10050"  # Zabbix agent
#    - ""
  etcd_cluster:
    - "{{ ansible_ssh_port | default(22) }}"
    - "2379"  # ETCD port
    - "2380"  # ETCD port
#    - ""
```

Is that what you mean?

emanfeah avatar Jul 12 '23 12:07 emanfeah

Yes, judging by this example you have enabled the firewall.

However, you also have a rule allowing SSH (ansible_ssh_port, or port 22) for the postgres_cluster group servers, so there should be no problem with SSH access being blocked.

If you can connect to the server (via the management console), check the output of the iptables -L command as well as the system logs:

  • /var/log/auth.log
  • /var/log/syslog
  • /var/log/kern.log

And the firewall service status:

sudo systemctl status firewall
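To illustrate what this check would reveal, here is a minimal sketch. The ruleset below is a hypothetical, captured sample of `iptables -L INPUT -n` output (not taken from the reporter's server); on the live node you would pipe the real command output into the same grep.

```shell
# Hypothetical sample of `iptables -L INPUT -n` on a node where the SSH port
# was NOT whitelisted: database ports are accepted, but nothing matches 22,
# and the chain policy is DROP.
sample='Chain INPUT (policy DROP)
ACCEPT  tcp  --  0.0.0.0/0  0.0.0.0/0  tcp dpt:5432
ACCEPT  tcp  --  0.0.0.0/0  0.0.0.0/0  tcp dpt:8008'

# Look for an ACCEPT rule covering the SSH port.
if printf '%s\n' "$sample" | grep -q 'dpt:22'; then
  echo "SSH port is allowed"
else
  echo "SSH port is NOT allowed - this would explain the refused connection"
fi
```

On the affected node, the equivalent live check would be `sudo iptables -L INPUT -n | grep 'dpt:22'`.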

vitabaks avatar Jul 12 '23 13:07 vitabaks

Can you check whether the public IP of the servers changes after a reboot?

fatmaAliGamal avatar Jul 16 '23 09:07 fatmaAliGamal

If you are talking about a public IP address, make sure that you have a permanent (static) one; otherwise it may change every time you restart the server.

Also make sure that you have specified private IP addresses in inventory so that the cluster components listen to private addresses and not public ones.

vitabaks avatar Jul 31 '23 13:07 vitabaks

Also make sure that you have specified private IP addresses in inventory so that the cluster components listen to private addresses and not public ones.

Could you please give me more detail?

emanfeah avatar Aug 01 '23 08:08 emanfeah

See README https://github.com/vitabaks/postgresql_cluster#deployment-quick-start

Specify (non-public) IP addresses and connection settings (ansible_user, ansible_ssh_pass, or ansible_ssh_private_key_file) for your environment.

A comment from the inventory file:

The specified IP addresses will be used by the cluster components for listening.

vitabaks avatar Aug 01 '23 10:08 vitabaks

Yes, I specified them and everything works fine for me, but the problem is that I can't reach SSH (port 22) after a reboot.

emanfeah avatar Aug 01 '23 10:08 emanfeah

please give me more detail

Use of Internal and External IP Addresses in Ansible Inventory

It has been identified that there may be some confusion when it comes to using both internal and external IP addresses within the Ansible inventory. Here is some clarification:

In Ansible, the inventory_hostname represents the hostname within your configuration. This value can be referenced within your Ansible playbooks and roles. On the other hand, ansible_host is used to specify the IP address or domain name where Ansible should establish a connection to the remote host.

When setting these values in the format private_ip_address ansible_host=public_ip_address, Ansible will:

  • use the private_ip_address internally within its playbooks and roles (the IP addresses specified as inventory_hostname will be used by the cluster components for listening), and
  • connect to the host via the public_ip_address.

Example:

```ini
[etcd_cluster]
10.128.64.140 ansible_host=34.72.80.145
10.128.64.142 ansible_host=35.123.45.67
10.128.64.143 ansible_host=36.192.89.10
```

This configuration is useful when the cluster components need to communicate over internal IP addresses, but Ansible commands need to be run over the public IP address.
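As a sketch of how such a line is split into its two roles (the addresses are the ones from the example above; this is illustrative shell parsing, not Ansible's actual inventory parser):

```shell
# Illustrative only: split an inventory line of the form
# "PRIVATE_IP ansible_host=PUBLIC_IP" into its two roles.
line="10.128.64.140 ansible_host=34.72.80.145"

inventory_hostname=${line%% *}         # cluster components listen here
ansible_host=${line##*ansible_host=}   # Ansible connects here

echo "listen on:   $inventory_hostname"
echo "connect via: $ansible_host"
```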

UPD:

Inventory: Add a comment about using public IP addresses - https://github.com/vitabaks/postgresql_cluster/commit/4c197115a44b1e615132c978ebe096c9a7acf8fd

vitabaks avatar Aug 01 '23 10:08 vitabaks

i can't get ssh :22 after reboot

via public IP?

vitabaks avatar Aug 01 '23 10:08 vitabaks

I don't have a public IP; I only use a private IP.


Also, where do I set or find inventory_hostname and ansible_host?

If dcs_exists: false and dcs_type: "etcd":

```ini
[etcd_cluster]  # recommendation: 3, or 5-7 nodes
10.128.64.140  # private IP
10.128.64.142  # private IP
10.128.64.143  # private IP
```

Also, I SSH via the private IP using a jump server.

emanfeah avatar Aug 01 '23 11:08 emanfeah

Ok. Good.

If you can connect to the server (via the management console), check the output of the iptables -L -v command as well as the system logs:

  • /var/log/auth.log
  • /var/log/syslog
  • /var/log/kern.log

And firewall service status

sudo systemctl status firewall

vitabaks avatar Aug 01 '23 14:08 vitabaks

@emanfeah Is the problem still relevant?

vitabaks avatar Feb 15 '24 22:02 vitabaks