
Error parsing / templating container.yml

brucellino opened this issue 7 years ago · 6 comments

ISSUE TYPE

Bug Report

container.yml
version: "2"
services:
{% for slave in slaves %}
  build-slave-{{ slave.name }}:
    image: "{{ slave.base_image }}:{{ slave.base_image_tag }}"
    ports:
        - "{{ slave.host_port }}:{{ container_port }}"
    command: ["/usr/sbin/sshd", "-D", "-p {{ container_port }}"]
    #command: ["/bin/sh", "-c", "while true; do sleep 1000; done"]
    volumes_from:
      - CODE-RADE-src:rw
      - CODE-RADE-modules:rw
      - CODE-RADE-soft:rw
      - CODE-RADE-repo:rw
      - CODE-RADE-cvmfs:rw
{% endfor %}


  CODE-RADE-src:
    image: alpine
    command: ["/bin/sh", "-c", "while true; do sleep 1000; done"]
    volumes:
      - "{{ src_dir }}"
  CODE-RADE-soft:
    image: alpine
    command: ["/bin/sh", "-c", "while true; do sleep 1000; done"]
    volumes:
      - "{{ soft_dir }}"
  CODE-RADE-repo:
    image: alpine
    command: ["/bin/sh", "-c", "while true; do sleep 1000; done"]
    volumes:
      - "{{ repo_dir }}"
  CODE-RADE-modules:
    image: alpine
    command: ["/bin/sh", "-c", "while true; do sleep 1000; done"]
    volumes:
      - "{{ modules_dir }}"
  CODE-RADE-cvmfs:
    image: alpine
    command: ["/bin/sh", "-c", "while true; do sleep 1000; done"]
    volumes:
      - "{{ cvmfs_dir }}"


registries:
  docker:
    url: https://hub.docker.com
    namespace: /u/aaroc
  quay:
    url: https://quay.io
    namespace: aaroc

vars:

# Vars for Buildslave
---
slaves:
  - name: centos6
    base_image: centos
    base_image_tag: 6
    host_port: 5000
  - name: centos7
    base_image: centos
    base_image_tag: 7
    host_port: 5001
  - name: ubuntu1404
    base_image: ubuntu
    base_image_tag: "14.04"
    host_port: 5002
  - name: ubuntu1610
    base_image: ubuntu
    base_image_tag: "16.10"
    host_port: 5003

container_port: "5200"
modules_dir: /data/modules
src_dir: /data/src/
repo_dir: /data/artefacts
soft_dir: /data/ci-build
cvmfs_dir: /cvmfs
main.yml
- name: Raw Setup (ubuntu)
  hosts: build-slave-ubuntu*
  gather_facts: false
  tasks:
  - name: Install Python (Ubuntu)
    raw: which python || apt-get -y update && apt-get install -y python

- name: Raw Setup (CentOS)
  hosts: build-slave-centos*
  gather_facts: false
  tasks:
    - name: install python
      raw: which python || yum -y update && yum install -y python

- name: Prepare Jenkins environment
  hosts: build-slave*
  tasks:
    - name: add keys to the authorized keys
      authorized_key:
        user: root
        key: https://github.com/{{ item }}.keys
        validate_certs: False
      with_items:
        - brucellino
        - jenkinssagrid

    - name: install sshd
      package:
        name: openssh-server
        state: present
    - name: generate host keys
      command: "ssh-keygen -f /etc/ssh/ssh_host_{{ item }}_key -N '' -t {{ item }}"
      args:
        creates: "/etc/ssh/ssh_host_{{ item }}_key"
      with_items:
        - rsa
        - dsa
        - ecdsa

    - name: ensure run dir present
      file:
        dest: /var/run/sshd
        state: directory
        owner: root

    - name: Replace the pam login
      lineinfile:
        dest: /etc/pam.d/sshd
        line: "session    optional     pam_loginuid.so"
        regexp: "session    required     pam_loginuid.so"
        state: present

- name: CODE-RADE secret sauce
  hosts: build-slave*
  tasks:
  - name: install prerequisites
    package:
      name: "{{ item }}"
      state: present
    with_items:
      - make
      - git
      - environment-modules
      - wget
      - bzip2
      - vim
      - which
      - tree
      - java-1.8.0-openjdk.x86_64
      - perl-CPAN
      - libX11-devel
    when: ansible_os_family == "RedHat"
  # See, now this just makes me upset. I have to put in this dirty workaround because
  # there is a circular dependency on a frikkin perl module
  # (need cpanm for Test::more, which needs cpanm to install)
  - block:
      - name: Ensure that cpanm is available
        uri:
          url: https://cpanmin.us/
          dest: /bin/cpanm
          creates: /bin/cpanm
      - name: Ensure executable
        file:
          path: /bin/cpanm
          mode: "u+rwx"
    rescue:
      - debug:
          msg: "Ah, fuckit"

    when: ansible_os_family == "RedHat"

  - name: Install Required Groups (RedHat)
    yum:
      name: "{{ item }}"
      state: present
    when: ansible_os_family == 'RedHat'
    with_items:
      - '@X Software Development'
      - '@Development tool'

  - name: Install prerequisites (Debian)
    package:
      name: "{{ item }}"
      state: present
    with_items:
      - build-essential
      - gfortran
      - git
      - environment-modules
      - wget
      - bzip2
      - vim
      - default-jdk
      - tree
      - curl
      - m4
      - cpanminus
      - libx11-dev
      - zip
    when: ansible_os_family == 'Debian'

  - name: Ensure Testing packages are installed
    cpanm:
      name: Test::More

  - name: Pull in vars
    include_vars:
      file: code-rade.yml
      name: code_rade

  - name: Template modules
    template:
      src: "templates/{{ item.path[ansible_os_family] }}/{{ item.name }}.j2"
      dest: "/{{ item.path[ansible_os_family] }}/{{ item.name }}"
    with_items: "{{ code_rade.modules }}"

OS / ENVIRONMENT
Ansible Container, version 0.9.1rc0
Linux, serbaggio, 4.4.0-72-generic, #93-Ubuntu SMP Fri Mar 31 14:07:41 UTC 2017, x86_64
2.7.12 (default, Nov 19 2016, 06:48:10) 
[GCC 5.4.0 20160609] /usr/bin/python
{
  "ContainersPaused": 0, 
  "Labels": null, 
  "CgroupDriver": "cgroupfs", 
  "ContainersRunning": 0, 
  "ContainerdCommit": {
    "Expected": "4ab9917febca54791c5f071a9d1f404867857fcc", 
    "ID": "4ab9917febca54791c5f071a9d1f404867857fcc"
  }, 
  "InitBinary": "docker-init", 
  "NGoroutines": 126, 
  "Swarm": {
    "Managers": 1, 
    "ControlAvailable": true, 
    "NodeID": "q687zdq49cxj9q63z8zkgvxkp", 
    "Cluster": {
      "Spec": {
        "Name": "default", 
        "TaskDefaults": {}, 
        "Orchestration": {
          "TaskHistoryRetentionLimit": 5
        }, 
        "EncryptionConfig": {
          "AutoLockManagers": false
        }, 
        "Raft": {
          "HeartbeatTick": 1, 
          "LogEntriesForSlowFollowers": 500, 
          "KeepOldSnapshots": 0, 
          "ElectionTick": 3, 
          "SnapshotInterval": 10000
        }, 
        "CAConfig": {
          "NodeCertExpiry": 7776000000000000
        }, 
        "Dispatcher": {
          "HeartbeatPeriod": 5000000000
        }
      }, 
      "Version": {
        "Index": 11
      }, 
      "ID": "42khy9138y1jh6rmin6z5tv87", 
      "CreatedAt": "2017-04-22T08:33:52.369122993Z", 
      "UpdatedAt": "2017-04-22T20:33:52.379339094Z"
    }, 
    "Nodes": 1, 
    "Error": "", 
    "RemoteManagers": [
      {
        "NodeID": "q687zdq49cxj9q63z8zkgvxkp", 
        "Addr": "192.168.1.8:2377"
      }
    ], 
    "LocalNodeState": "active", 
    "NodeAddr": "192.168.1.8"
  }, 
  "LoggingDriver": "json-file", 
  "OSType": "linux", 
  "HttpProxy": "", 
  "Runtimes": {
    "runc": {
      "path": "docker-runc"
    }
  }, 
  "DriverStatus": [
    [
      "Root Dir", 
      "/var/lib/docker/aufs"
    ], 
    [
      "Backing Filesystem", 
      "extfs"
    ], 
    [
      "Dirs", 
      "40"
    ], 
    [
      "Dirperm1 Supported", 
      "true"
    ]
  ], 
  "OperatingSystem": "Ubuntu 16.04.2 LTS", 
  "Containers": 0, 
  "HttpsProxy": "", 
  "BridgeNfIp6tables": true, 
  "MemTotal": 8235982848, 
  "SecurityOptions": [
    "name=apparmor", 
    "name=seccomp,profile=default"
  ], 
  "Driver": "aufs", 
  "IndexServerAddress": "https://index.docker.io/v1/", 
  "ClusterStore": "", 
  "InitCommit": {
    "Expected": "949e6fa", 
    "ID": "949e6fa"
  }, 
  "Isolation": "", 
  "SystemStatus": null, 
  "OomKillDisable": true, 
  "ClusterAdvertise": "", 
  "SystemTime": "2017-04-24T17:56:26.845795694+02:00", 
  "Name": "serbaggio", 
  "CPUSet": true, 
  "RegistryConfig": {
    "InsecureRegistryCIDRs": [
      "127.0.0.0/8"
    ], 
    "IndexConfigs": {
      "docker.io": {
        "Official": true, 
        "Name": "docker.io", 
        "Secure": true, 
        "Mirrors": null
      }
    }, 
    "Mirrors": []
  }, 
  "DefaultRuntime": "runc", 
  "ContainersStopped": 0, 
  "NCPU": 8, 
  "NFd": 31, 
  "Architecture": "x86_64", 
  "KernelMemory": true, 
  "CpuCfsQuota": true, 
  "Debug": false, 
  "ID": "O4O7:C7V3:PT4A:3GDE:PDWG:KHS2:MOI2:CNIJ:DMZH:OH7U:WOTC:5345", 
  "IPv4Forwarding": true, 
  "KernelVersion": "4.4.0-72-generic", 
  "BridgeNfIptables": true, 
  "NoProxy": "", 
  "LiveRestoreEnabled": false, 
  "ServerVersion": "17.03.1-ce", 
  "CpuCfsPeriod": true, 
  "ExperimentalBuild": false, 
  "MemoryLimit": true, 
  "SwapLimit": false, 
  "Plugins": {
    "Volume": [
      "local"
    ], 
    "Network": [
      "bridge", 
      "host", 
      "macvlan", 
      "null", 
      "overlay"
    ], 
    "Authorization": null
  }, 
  "Images": 15, 
  "DockerRootDir": "/var/lib/docker", 
  "NEventsListener": 0, 
  "CPUShares": true, 
  "RuncCommit": {
    "Expected": "54296cf40ad8143b62dbcaa1d90e520a2136ddfe", 
    "ID": "54296cf40ad8143b62dbcaa1d90e520a2136ddfe"
  }
}
{
  "KernelVersion": "4.4.0-72-generic", 
  "Arch": "amd64", 
  "BuildTime": "2017-03-27T17:14:09.765618756+00:00", 
  "ApiVersion": "1.27", 
  "Version": "17.03.1-ce", 
  "MinAPIVersion": "1.12", 
  "GitCommit": "c6d412e", 
  "Os": "linux", 
  "GoVersion": "go1.7.5"
}

SUMMARY

I recently upgraded to the version of ansible-container specified above. This seems to break the build of my templated containers. I am using a for loop to build the topology, since there are several similar services, but ansible-container doesn't like that. I get:

ERROR	Invalid container.yml: Parsing container.yml - while scanning for the next token
found character '%' that cannot start any token
STEPS TO REPRODUCE

EXPECTED RESULTS

I expect the builds to work, as they did until (at least) 0.2.0.

ACTUAL RESULTS
2017-04-24T18:00:52.318639 Invalid container.yml: Parsing container.yml - while scanning for the next token
found character '%' that cannot start any token
  in "/home/becker/Ops/AAROC/DevOps/Containers/Ansible-Container/CODE-RADE-build-containers/ansible/container.yml", line 3, column 2 [container.cli] caller_file='/usr/local/lib/python2.7/dist-packages/ansible_container-0.9.1rc0-py2.7.egg/container/cli.py' caller_func='__call__' caller_line=283

This is ansible-container installed from pip.

brucellino avatar Apr 24 '17 14:04 brucellino

@brucellino, This is a breaking change in 0.9. We're no longer supporting templates in container.yml to the extent of your example. We support variable substitution and filters, but not a full-scale Jinja2 template.
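
For anyone hitting the same error, here is a minimal sketch of the distinction; the service and variable names are illustrative, not taken from the original report:

# Supported in 0.9: plain variable substitution and filters inside values
services:
  web:
    image: "{{ base_image | default('centos') }}:{{ base_image_tag }}"

# Not supported in 0.9: Jinja2 control structures such as {% for %} / {% endfor %},
# because container.yml is read as YAML first, and a raw '%' trips the scanner
# before any rendering happens.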

chouseknecht avatar Apr 24 '17 19:04 chouseknecht

Well, :hankey: :smiley:

This use case is for generating multiple testing environments. I can surely find a way around it (or just code things in by hand :nauseated_face:), so no stress. However, if one were to consider sending a PR to re-introduce this functionality, could you please explain why it has been removed or pared down? Perhaps there's a good reason not to have this. Just to give a heads-up to whoever comes next, to avoid a wild-goose chase.

I'm quite happy to have this closed for now, if you think that's the right thing to do. Thanks!

brucellino avatar Apr 25 '17 12:04 brucellino

Also, for this use case you could always have an ad-hoc Ansible command that generates your container.yml ahead of time. So for you, you'd have an ansible-container.yml.j2, then run something like ansible localhost -m template -a [all the args to fill in the template] to output your container.yml.
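
For illustration, a hedged sketch of that ad-hoc command; the template source, destination, and vars file paths below are assumptions, not something specified in this thread:

# Render container.yml from a Jinja2 template before invoking ansible-container.
# Paths and the vars file are hypothetical; adjust them to your project layout.
ansible localhost -c local -m template \
  -a "src=container.yml.j2 dest=ansible/container.yml" \
  -e "@ansible/vars.yml"

ansible-container build should then pick up the already-rendered ansible/container.yml as usual.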

Also, depending on what these slaves are for/how different they are, you might be better served by having multiple instances of the same service, instead of autogenerating a bunch of services. Can you explain your goal a little more?

ryansb avatar Apr 25 '17 13:04 ryansb

Hi and thanks for the comment @ryansb

The goal is to test the build of scientific software in many different environments. These containers are built to simulate the actual environments found in the real world, across a heterogeneous compute cloud.

We build these containers for use as slaves in a Jenkins CI environment, where they are nodes in a test matrix. So we need a common base applied to several OSes, so that we can test reliably.

The containers are thus very similar to each other (which is different from the typical container app setup, where each container does something specific and different). As I said before, though, we don't lose anything other than elegance by having to write container.yml by hand, so I'm happy to lose this functionality. I just want to know whether it's a design decision or a matter of prioritising the port to the new version.

brucellino avatar Apr 25 '17 13:04 brucellino

I see - yeah, it was an intentional move to keep container.yml from becoming something that is almost a full Ansible playbook, but a little bit different and not as well documented. We also hadn't seen any use cases such as yours where templating was an important part of the workflow. Thank you for raising this!

ryansb avatar Apr 25 '17 13:04 ryansb

Maybe a change in the docs is needed: https://docs.ansible.com/ansible-container/container_yml/template.html#how-it-works

l4r1k4 avatar Mar 11 '19 11:03 l4r1k4