ansible-oracle
RAC install not running scripts on other nodes.
I am trying to deploy a 2-node RAC setup and I can't get Grid Infrastructure installed on the secondary node. It looks like the "Run root script after installation (Other Nodes)" task is skipping the second node.
TASK [oraswgi-install : install-home-gi | Run root script after installation (Other Nodes)] *****************************************************
skipping: [pipeline0] => (item=[0, u'pipeline0'])
skipping: [pipeline0] => (item=[1, u'pipeline1'])
Inventory file:
[pipeline]
pipeline0
pipeline1
Variables file:
hostgroup: pipeline
configure_cluster: True
oracle_gi_cluster_type: STANDARD
Hi, which version are you trying to install?
12.2.0.1
oracle_install_version_gi: 12.2.0.1
db_homes_config:
  12201-pipeline:
    home: pipeline
    version: 12.2.0.1
    edition: EE
So would the version be the issue?
Hi - sorry about not getting back to you sooner. I haven't had time to look at this until now. The variables look fine and I know the installation works, so something else is going on. I just asked for the version so I'd have a starting point for debugging.
Did something fail for pipeline1 earlier in the play? If the output you provided is the entire output for the task, it looks like pipeline1 is not even considered (and if an earlier task failed, this could happen).
It should look something like this:
TASK [oraswgi-install : install-home-gi | Run root script after installation (Other Nodes)]
skipping: [racnode-dc1-1] => (item=[0, u'racnode-dc1-1'])
skipping: [racnode-dc1-1] => (item=[1, u'racnode-dc1-2'])
skipping: [racnode-dc1-2] => (item=[0, u'racnode-dc1-1'])
changed: [racnode-dc1-2] => (item=[1, u'racnode-dc1-2'])
and the fourth line above is the actual task that was run for the second node (for a 2-node cluster).
The conditional that decides what to run on the other nodes is:
when: configure_cluster and inventory_hostname != cluster_master and inventory_hostname == item.1 and oracle_home_gi not in checkgiinstall.stdout
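For context, this is roughly how such a task is shaped - my sketch reconstructed from the item=[0, u'...'] pairs in your output, not the actual role code, and the root.sh invocation is just illustrative:

- name: install-home-gi | Run root script after installation (Other Nodes)
  shell: "{{ oracle_home_gi }}/root.sh"   # illustrative command, not the role's exact one
  with_indexed_items: "{{ groups[hostgroup] }}"   # yields item.0 (index) and item.1 (hostname)
  when: configure_cluster and inventory_hostname != cluster_master and inventory_hostname == item.1 and oracle_home_gi not in checkgiinstall.stdout

Every host evaluates the conditional once per loop item, and only the pairing where the host matches item.1 (and is not the master) actually runs - which is why you should see one "changed" line among the "skipping" lines.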
- configure_cluster is correct, so that is not a problem
- cluster_master is picked automatically (the first host in the hostgroup, unless you've explicitly set it to something else), so that shouldn't be a problem either
- I'm also going to assume that this is a fresh install and that the GI home was not present in the inventory (oracle_home_gi not in checkgiinstall.stdout). checkgiinstall is set from the install-home-gi | Check if GI is already installed task in main.yml; both of these pieces are sketched below.
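To make those two checks concrete, here is roughly what is being compared (a sketch under my assumptions, not the role's exact code; oracle_inventory_loc is a hypothetical variable name):

# cluster_master defaults to the first host in the hostgroup
cluster_master: "{{ groups[hostgroup][0] }}"

# checkgiinstall is the registered output of the check task, which dumps
# something containing the installed home paths, e.g. the central inventory
- name: install-home-gi | Check if GI is already installed
  shell: cat "{{ oracle_inventory_loc }}/ContentsXML/inventory.xml"   # hypothetical path
  register: checkgiinstall
  changed_when: false

On a fresh install the home path is absent from that output, so oracle_home_gi not in checkgiinstall.stdout is true and the task is allowed to run.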
Could you maybe give me your entire group_vars for this config, either inline or as a gist?
Here are the only errors that I see earlier in the play:
TASK [orahost : Check dns for host] *************************************************************************************************************
fatal: [pipeline1]: FAILED! => {"changed": false, "cmd": ["nslookup", "pipeline1"], "delta": "0:00:00.165154", "end": "2018-09-18 07:28:45.243174", "msg": "non-zero return code", "rc": 1, "start": "2018-09-18 07:28:45.078020", "stderr": "", "stderr_lines": [], "stdout": "Server:\t\t<ip>\nAddress:\t<ip>#53\n\n** server can't find pipeline1: NXDOMAIN", "stdout_lines": ["Server:\t\t<ip>", "Address:\t<ip>#53", "", "** server can't find pipeline1: NXDOMAIN"]}
...ignoring
fatal: [pipeline0]: FAILED! => {"changed": false, "cmd": ["nslookup", "pipeline0"], "delta": "0:00:00.164571", "end": "2018-09-18 07:28:45.463773", "msg": "non-zero return code", "rc": 1, "start": "2018-09-18 07:28:45.299202", "stderr": "", "stderr_lines": [], "stdout": "Server:\t\t<ip>\nAddress:\t<ip>#53\n\n** server can't find pipeline0: NXDOMAIN", "stdout_lines": ["Server:\t\t<ip>", "Address:\t<ip>#53", "", "** server can't find pipeline0: NXDOMAIN"]}
...ignoring
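These are ignored, so I assume they are harmless, but if it matters I could pin the names with static /etc/hosts entries - a rough sketch, variable names illustrative:

- name: Add cluster nodes to /etc/hosts
  lineinfile:
    path: /etc/hosts
    line: "{{ hostvars[item]['ansible_default_ipv4']['address'] }} {{ item }}"
  loop: "{{ groups[hostgroup] }}"
  become: true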
Here are my group_vars (domains altered):
hostgroup: oci
role_separation: False
device_persistence: udev
configure_ssh: True
configure_host_disks: False
disable_se_linux: True
old_ssh_config: False
configure_public_yum_repo: false
configure_epel_repo: False
oracle_sw_copy: True
oracle_sw_unpack: True
configure_interconnect: False
install_os_packages: True
oracle_base: /u01/app/oracle
oracle_sw_source_local: /dba/software/oracle/databases/12.2.0.1/
## GI Variables
configure_cluster: True
oracle_install_version_gi: 12.2.0.1
oracle_gi_cluster_type: STANDARD
oracle_home_gi: "/u01/app/grid/product/{{ oracle_install_version_gi }}/grid"
oracle_scan: pipeline1-scan.test.com
oracle_vip: -vip
oracle_gi_nic_pub: ens3
oracle_gi_nic_priv: ens4
apply_patches_gi: False
oracle_scan_port: 1521
asm_diskgroups:
  - diskgroup: crs
    properties:
      - {redundancy: external, ausize: 4}
    attributes:
      - {name: compatible.asm, value: "{{ oracle_install_version_gi }}"}
    disk:
      - {device: '/oradata/crs_data/disk0', asmlabel: disk0}
      - {device: '/oradata/crs_data/disk1', asmlabel: disk1}
      - {device: '/oradata/crs_data/disk2', asmlabel: disk2}
oracle_asm_init_dg: crs
oracle_asm_disk_string: /oradata/crs_data/
# Oracle Homes
db_homes_config:
  12201-pipeline:
    home: pipeline
    version: 12.2.0.1
    edition: EE
## Oracle Databases
db_homes_installed:
  - home: 12201-pipeline
    apply_patches: False
    state: present
oracle_version_db: 12.2.0.1
oracle_databases:
  - home: 12201-pipeline
    oracle_db_name: pipeline
    oracle_db_type: RAC
    is_container: True
    oracle_version_db: 12.2.0.1
    pdb_prefix: pdb
    num_pdbs: 0
    storage_type: FS
    redolog_size: 1G
    redolog_size_in_mb: 1000
    oracle_db_mem_totalmb: 8000
    oracle_database_type: OLTP
    datafile_dest: /oradata/data
    recoveryfile_dest: /oradata/fra_data
    listener_name: LISTENER
    listener_port: 1521
    state: present
    init_parameters:
      - {name: db_create_file_dest, value: '/oradata/data', scope: both, state: present}
      - {name: open_cursors, value: 1000, scope: both, state: present}
      - {name: processes, value: 5000, scope: both, state: present}
The installation tasks have been completely reworked from 19c onwards. There are no plans to fix issues in Clusterware installations for unsupported Oracle versions anymore.