Kubespray: Jinja-Error: Unknown tag 'do'

Created on 30 Mar 2018 · 7 comments · Source: kubernetes-sigs/kubespray

BUG REPORT

The task TASK [kubernetes/node : Write kubelet config file (non-kubeadm)] throws the following error:
"AnsibleError: template error while templating string: Encountered unknown tag 'do'. Jinja was looking for the following tags: 'elif' or 'else' or 'endif'."

The template used in this task is kubelet.standard.env.j2.

Jinja2 version:
Name: Jinja2
Version: 2.10
According to StackOverflow,
jinja_env = Environment(extensions=['jinja2.ext.do']) needs to be set.

Any idea how to do this within Kubespray? I don't develop with Python.
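For context, the StackOverflow suggestion can be sketched in standalone Python (plain Jinja2 outside Ansible; the label value below is purely illustrative):

```python
from jinja2 import Environment

# Standalone sketch (not Kubespray code): enabling the 'do'
# expression-statement extension lets templates execute statements
# such as list.append() without emitting any output.
env = Environment(extensions=["jinja2.ext.do"])

template = env.from_string(
    "{% set labels = [] %}"
    "{% do labels.append('node-role.kubernetes.io/node=true') %}"
    "{{ labels | join(',') }}"
)
print(template.render())  # node-role.kubernetes.io/node=true
```

Ansible builds this Environment internally, so the extension has to be enabled through Ansible's own configuration rather than in Python code.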

Thanks

Environment:

  • Cloud provider or hardware configuration:
Installed on two VMs running locally.
  • OS (printf "$(uname -srm)\n$(cat /etc/os-release)\n"):
    DISTRIB_ID=LinuxMint DISTRIB_RELEASE=18 DISTRIB_CODENAME=sarah DISTRIB_DESCRIPTION="Linux Mint 18 Sarah" NAME="Linux Mint" VERSION="18 (Sarah)" ID=linuxmint ID_LIKE=ubuntu PRETTY_NAME="Linux Mint 18" VERSION_ID="18" HOME_URL="http://www.linuxmint.com/" SUPPORT_URL="http://forums.linuxmint.com/" BUG_REPORT_URL="http://bugs.launchpad.net/linuxmint/" UBUNTU_CODENAME=xenial
  • Version of Ansible (ansible --version):
    ansible 2.5.0

Kubespray version (commit) (git rev-parse --short HEAD):
f619eb0

Network plugin used:
Calico

Copy of your inventory file:

node1 ansible_ssh_host=192.168.178.46 ansible_user=vagrant
node2 ansible_ssh_host=192.168.178.45 ansible_user=vagrant

[kube-master]
node1

[etcd]
node1

[kube-node]
node2

[kube-ingress]
node2

[k8s-cluster:children]
kube-node
kube-master

Command used to invoke ansible:
ansible-playbook -i inventory/test-cluster/hosts.ini cluster.yml -b


All 7 comments

Introduced in merged PR https://github.com/kubernetes-incubator/kubespray/pull/2290.
Ansible now needs the Jinja extension jinja2.ext.do to run Kubespray playbooks. I think it's a bad idea to use such extensions.

To me that defeats the whole purpose of Ansible, when my config management suddenly needs config management.

Isn't this fixed by adding the jinja2_extensions line to your ansible.cfg? That seems to be how it passed CI here. It also seems that the version of Jinja installed with Ansible already includes this extension, just not enabled. So it doesn't seem too much of a stretch to simply include that line in the default ansible.cfg as well. I'll drop a PR to fix this error; it should have been included as part of that PR.

I'm the author of the PR; I must have forgotten ansible.cfg in my last "rebase" (it passed CI with the ansible.cfg in tests/ansible.cfg, which I didn't forget...). Sorry for that, and thanks for the PR @rsmitty

First of all, thanks for the quick response. I just reran it with the added line in the ansible.cfg file, which now looks like this:

[ssh_connection]
pipelining=True
ssh_args = -o ControlMaster=auto -o ControlPersist=30m -o ConnectionAttempts=100 -o UserKnownHostsFile=/dev/null
#control_path = ~/.ssh/ansible-%%r@%%h:%%p
[defaults]
host_key_checking=False
gathering = smart
fact_caching = jsonfile
fact_caching_connection = /tmp
stdout_callback = skippy
library = ./library
callback_whitelist = profile_tasks
roles_path = roles:$VIRTUAL_ENV/usr/local/share/kubespray/roles:$VIRTUAL_ENV/usr/local/share/ansible/roles:/usr/share/kubespray/roles
deprecation_warnings=False
inventory_ignore_extensions = ~, .orig, .bak, .ini, .cfg, .retry, .pyc, .pyo, .creds
jinja2_extensions = jinja2.ext.do

However, it now fails at a different point:
TASK [kubernetes/node : Write kubelet config file (non-kubeadm)]
with the following message:
fatal: [node2]: FAILED! => {"changed": false, "msg": "AnsibleUndefinedVariable: 'unicode object' has no attribute 'iteritems'"}
which again comes from the kubelet.standard.env.j2 file:

https://github.com/kubernetes-incubator/kubespray/blob/7e58b963285a6fe0b2e1362174bf4e91f56086d3/roles/kubernetes/node/templates/kubelet.standard.env.j2#L97

The inventory file is mentioned in the original post.

Ideas?

I guess I should have given an example of a node_labels definition. It must actually be defined as a dict, like:

node_labels:
  label1_name: label1_value
  label2_name: label2_value
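To illustrate why a dict is required, here is a standalone sketch (not the actual Kubespray template) of the kind of key/value iteration the template performs; the real template used iteritems() (Python 2 only), while items() is the portable equivalent:

```python
from jinja2 import Environment

# Illustrative only: builds a kubelet-style flag from a node_labels
# dict by iterating over its key/value pairs.
env = Environment()
tmpl = env.from_string(
    "--node-labels="
    "{% for name, value in node_labels.items() %}"
    "{{ name }}={{ value }}{% if not loop.last %},{% endif %}"
    "{% endfor %}"
)
print(tmpl.render(node_labels={"label1_name": "label1_value",
                               "label2_name": "label2_value"}))
# --node-labels=label1_name=label1_value,label2_name=label2_value
```

If node_labels is a string instead of a dict, the key/value loop (or an iteritems() call) fails, which matches the error above.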

BTW, the changes in my original PR were made before the "kube-ingress" stuff, so the following part may be the cause of your issue...

{% elif inventory_hostname in groups['kube-ingress']|default([]) %}
{%   set node_labels %}--node-labels=node-role.kubernetes.io/ingress=true{% endset %}
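The block form of set is the likely culprit: {% set x %}...{% endset %} captures rendered text, so node_labels becomes a string, and calling .iteritems() on a string produces exactly the "'unicode object' has no attribute 'iteritems'" failure above. A standalone sketch:

```python
from jinja2 import Environment

# Standalone sketch: block-form 'set ... endset' yields a string,
# not a dict, so later key/value iteration over it fails.
env = Environment()
tmpl = env.from_string(
    "{% set node_labels %}--node-labels=ingress=true{% endset %}"
    "{{ node_labels is string }},{{ node_labels is mapping }}"
)
print(tmpl.render())  # True,False
```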

I'll drop a PR within minutes.

I just reran it with your proposed changes to the kubelet.standard.env.j2 file and it works now!

Thanks for the great support!
