I am trying to use the VMware inventory plugin vmware_vm_inventory.py; however, when running the sync job for it I get the following error:
[WARNING]: * Failed to parse /tmp/awx_44_4bjjtmsl/vmware_vm_inventory.yml with
auto plugin: inventory config '/tmp/awx_44_4bjjtmsl/vmware_vm_inventory.yml'
specifies unknown plugin 'community.vmware.vmware_vm_inventory'
---
validate_certs: False
alias_pattern: "{{ config.name }}"
groupby_patterns: "{{ config.guestFullName | lower }},{{ guest['net'][0]['network'] | lower }}"
host_filters: "{{ runtime.powerState == 'poweredOn' }},{{ 'VMware' not in config.annotation }},{{ 'esxi' not in config.name }},{{ 'msc-lex' in config.name }}"
lower_var_keys: True
max_object_level: 1
host_pattern: "{{ guest.hostname }}"
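For completeness, AWX prepends a plugin key when it writes out the inventory source file; judging from the error message, the generated file begins with the fully qualified plugin name. A sketch of the generated header (inferred from the error, not copied from the actual temp file):

```yaml
# Inferred header of /tmp/awx_.../vmware_vm_inventory.yml as AWX generates it:
plugin: community.vmware.vmware_vm_inventory
validate_certs: False
```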
Expected results: vCenter inventory plugin found and inventory created.
Actual results: an error indicating that the community.vmware.vmware_vm_inventory plugin was not found.
4.306 INFO Updating inventory 2: VMWARE-all
5.349 DEBUG Using base command: python /usr/bin/ansible-inventory -i /tmp/awx_44_4bjjtmsl/vmware_vm_inventory.yml --playbook-dir /tmp/awx_44_4bjjtmsl -vvvvv
5.350 INFO Reading Ansible inventory source: /tmp/awx_44_4bjjtmsl/vmware_vm_inventory.yml
5.358 INFO Using VIRTUAL_ENV: /var/lib/awx/venv/ansible
5.358 INFO Using PATH: /var/lib/awx/venv/ansible/bin:/var/lib/awx/venv/awx/bin:/usr/pgsql-10/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
5.358 INFO Using PYTHONPATH: /var/lib/awx/venv/ansible/lib/python3.6/site-packages:
Traceback (most recent call last):
File "/var/lib/awx/venv/awx/bin/awx-manage", line 8, in <module>
sys.exit(manage())
File "/var/lib/awx/venv/awx/lib/python3.6/site-packages/awx/__init__.py", line 154, in manage
execute_from_command_line(sys.argv)
File "/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/core/management/__init__.py", line 381, in execute_from_command_line
utility.execute()
File "/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/core/management/__init__.py", line 375, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/core/management/base.py", line 323, in run_from_argv
self.execute(*args, **cmd_options)
File "/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/core/management/base.py", line 364, in execute
output = self.handle(*args, **options)
File "/var/lib/awx/venv/awx/lib/python3.6/site-packages/awx/main/management/commands/inventory_import.py", line 1149, in handle
raise exc
File "/var/lib/awx/venv/awx/lib/python3.6/site-packages/awx/main/management/commands/inventory_import.py", line 1039, in handle
venv_path=venv_path, verbosity=self.verbosity).load()
File "/var/lib/awx/venv/awx/lib/python3.6/site-packages/awx/main/management/commands/inventory_import.py", line 215, in load
return self.command_to_json(base_args + ['--list'])
File "/var/lib/awx/venv/awx/lib/python3.6/site-packages/awx/main/management/commands/inventory_import.py", line 198, in command_to_json
self.method, proc.returncode, stdout, stderr))
RuntimeError: ansible-inventory failed (rc=1) with stdout:
stderr:
ansible-inventory 2.9.11
config file = /etc/ansible/ansible.cfg
configured module search path = ['/var/lib/awx/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.6/site-packages/ansible
executable location = /usr/bin/ansible-inventory
python version = 3.6.8 (default, Apr 16 2020, 01:36:27) [GCC 8.3.1 20191121 (Red Hat 8.3.1-5)]
Using /etc/ansible/ansible.cfg as config file
setting up inventory plugins
[WARNING]: * Failed to parse /tmp/awx_44_4bjjtmsl/vmware_vm_inventory.yml with
auto plugin: inventory config '/tmp/awx_44_4bjjtmsl/vmware_vm_inventory.yml'
specifies unknown plugin 'community.vmware.vmware_vm_inventory'
File "/usr/lib/python3.6/site-packages/ansible/inventory/manager.py", line 280, in parse_source
plugin.parse(self._inventory, self._loader, source, cache=cache)
File "/usr/lib/python3.6/site-packages/ansible/plugins/inventory/auto.py", line 53, in parse
raise AnsibleParserError("inventory config '{0}' specifies unknown plugin '{1}'".format(path, plugin_name))
[WARNING]: Unable to parse /tmp/awx_44_4bjjtmsl/vmware_vm_inventory.yml as an
inventory source
ERROR! No inventory was parsed, please check your configuration and options.
I have validated that the script is on the box and located here (with permission-denied messages removed for clarity):
Tylers-MBP:Ansible-K8 Tyler$ kubectl exec -it pod/awx-79c6cc456b-r74zt -n ansible-awx -c awx-task -- find / -name vmware_vm_inventory.py
/var/lib/awx/vendor/awx_ansible_collections/ansible_collections/community/vmware/plugins/inventory/vmware_vm_inventory.py
/usr/lib/python3.6/site-packages/ansible/plugins/inventory/vmware_vm_inventory.py
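As a rough sketch of why the second copy doesn't satisfy the lookup: for a fully qualified collection name (FQCN) like community.vmware.vmware_vm_inventory, Ansible resolves the plugin under an ansible_collections tree inside one of the ANSIBLE_COLLECTIONS_PATHS entries, not in the core plugins directory. The helper below is hypothetical, but the layout it builds mirrors the first path in the find output:

```python
import os

# Hypothetical helper illustrating the on-disk layout Ansible expects for an
# inventory plugin referenced by its fully qualified collection name (FQCN):
#   <collections_path>/ansible_collections/<ns>/<coll>/plugins/inventory/<name>.py
def collection_inventory_plugin(collections_path: str, fqcn: str) -> str:
    namespace, collection, plugin = fqcn.split(".")
    return os.path.join(
        collections_path, "ansible_collections",
        namespace, collection, "plugins", "inventory", plugin + ".py",
    )

# Matches the first result of the `find` above:
print(collection_inventory_plugin(
    "/var/lib/awx/vendor/awx_ansible_collections",
    "community.vmware.vmware_vm_inventory",
))
```

The copy under /usr/lib/python3.6/site-packages/ansible/plugins/inventory/ only satisfies the short name vmware_vm_inventory, which is why the FQCN fails even though the file exists.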
Can you visit the API for that update, /api/v2/inventory_updates/<inventory update number>, and report back what you see for job_env?
Here is the output of that section:
"job_env": {
"AWX_SERVICE_PORT_80_TCP_ADDR": "10.43.184.167",
"LC_ALL": "en_US.UTF-8",
"SUPERVISOR_WEB_CONFIG_PATH": "/supervisor.conf",
"LANG": "en_US.UTF-8",
"HOSTNAME": "awx-c574885db-ckgql",
"AWX_SERVICE_PORT": "tcp://10.43.184.167:80",
"AWX_SERVICE_PORT_80_TCP_PORT": "80",
"AWX_SKIP_MIGRATIONS": "1",
"KUBERNETES_PORT_443_TCP_PROTO": "tcp",
"KUBERNETES_PORT_443_TCP_ADDR": "10.43.0.1",
"AWX_SERVICE_PORT_80_TCP_PROTO": "tcp",
"KUBERNETES_PORT": "tcp://10.43.0.1:443",
"PWD": "/",
"HOME": "/var/lib/awx",
"AWX_SERVICE_PORT_80_TCP": "tcp://10.43.184.167:80",
"KUBERNETES_SERVICE_PORT_HTTPS": "443",
"AWX_SERVICE_SERVICE_PORT": "80",
"KUBERNETES_PORT_443_TCP_PORT": "443",
"AWX_SERVICE_SERVICE_HOST": "10.43.184.167",
"KUBERNETES_PORT_443_TCP": "tcp://10.43.0.1:443",
"MY_POD_UID": "03ed1eac-c469-479c-a1b9-9ab9439ddd90",
"MY_POD_IP": "10.42.146.84",
"SHLVL": "1",
"LANGUAGE": "en_US.UTF-8",
"KUBERNETES_SERVICE_PORT": "443",
"PATH": "/var/lib/awx/venv/awx/bin:/usr/pgsql-10/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"KUBERNETES_SERVICE_HOST": "10.43.0.1",
"AWX_SERVICE_SERVICE_PORT_HTTP": "80",
"_": "/usr/local/bin/supervisord",
"SUPERVISOR_ENABLED": "1",
"SUPERVISOR_SERVER_URL": "unix:///tmp/supervisor.sock",
"SUPERVISOR_PROCESS_NAME": "dispatcher",
"SUPERVISOR_GROUP_NAME": "tower-processes",
"LC_CTYPE": "en_US.UTF-8",
"DJANGO_SETTINGS_MODULE": "awx.settings.production",
"DJANGO_LIVE_TEST_SERVER_ADDRESS": "localhost:9013-9199",
"TZ": "UTC",
"ANSIBLE_FORCE_COLOR": "True",
"ANSIBLE_HOST_KEY_CHECKING": "False",
"ANSIBLE_INVENTORY_UNPARSED_FAILED": "True",
"ANSIBLE_PARAMIKO_RECORD_HOST_KEYS": "False",
"ANSIBLE_VENV_PATH": "/var/lib/awx/venv/ansible",
"AWX_PRIVATE_DATA_DIR": "/tmp/awx_82_repyc6bx",
"VIRTUAL_ENV": "/var/lib/awx/venv/awx",
"INVENTORY_SOURCE_ID": "12",
"INVENTORY_UPDATE_ID": "82",
"ANSIBLE_INVENTORY_EXPORT": "True",
"ANSIBLE_VERBOSE_TO_STDERR": "True",
"VMWARE_USER": "[email protected]",
"VMWARE_PASSWORD": "**********",
"VMWARE_HOST": "vc-mgmt.example.lab",
"VMWARE_VALIDATE_CERTS": "False",
"ANSIBLE_COLLECTIONS_PATHS": "/var/lib/awx/venv/awx/lib/python3.6/site-packages/awx/vendor/awx_ansible_collections",
"ANSIBLE_TRANSFORM_INVALID_GROUP_CHARS": "never",
"ANSIBLE_INVENTORY_ENABLED": "auto",
"ANSIBLE_CALLBACK_PLUGINS": "/var/lib/awx/venv/awx/lib/python3.6/site-packages/ansible_runner/callbacks",
"ANSIBLE_STDOUT_CALLBACK": "awx_display",
"ANSIBLE_RETRY_FILES_ENABLED": "False",
"AWX_ISOLATED_DATA_DIR": "/tmp/awx_82_repyc6bx/artifacts/82",
"PYTHONPATH": "/var/lib/awx/venv/awx/lib/python3.6/site-packages/ansible_runner/callbacks",
"RUNNER_OMIT_EVENTS": "False",
"RUNNER_ONLY_FAILED_EVENTS": "False"
}
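The ANSIBLE_COLLECTIONS_PATHS value above is the interesting part. As a quick sanity check (hypothetical helper, meant to be run inside the awx-task container), each colon-separated entry should contain an ansible_collections directory:

```python
import os

# Hypothetical check: return the entries of a colon-separated
# ANSIBLE_COLLECTIONS_PATHS value that lack an ansible_collections subdirectory.
def missing_collection_dirs(collections_paths: str) -> list:
    return [
        entry for entry in collections_paths.split(":")
        if not os.path.isdir(os.path.join(entry, "ansible_collections"))
    ]

# Example: check the value reported in job_env above.
print(missing_collection_dirs(
    "/var/lib/awx/venv/awx/lib/python3.6/site-packages/awx/vendor/awx_ansible_collections"
))
```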
The place it's looking for the content is /var/lib/awx/venv/awx/lib/python3.6/site-packages/awx/vendor/awx_ansible_collections
Can you confirm that the image you have running is ansible/awx:14.1.0 from Docker Hub? I might see what's going on.
Checking the image myself...
$ docker run --interactive --rm --tty ansible/awx ls /var/lib/awx/venv/awx/lib/python3.6/site-packages/awx/vendor
ls: cannot access '/var/lib/awx/venv/awx/lib/python3.6/site-packages/awx/vendor': No such file or directory
That's not consistent with your setting for this.
$ docker run --interactive --rm --tty ansible/awx ls /var/lib/awx/vendor/awx_ansible_collections/ansible_collections/community/vmware/plugins/inventory
__init__.py vmware_vm_inventory.py
This suggests that the setting should be
"ANSIBLE_COLLECTIONS_PATHS": "/var/lib/awx/vendor/awx_ansible_collections",
This comes from the AWX setting AWX_ANSIBLE_COLLECTIONS_PATHS
That appears to be set correctly here:
Maybe it was set incorrectly here?
@AlanCoding Here is the output of kubectl describe for the AWX pod that is running; it shows that the image is ansible/awx:14.1.0:
Tylers-MacBook-Pro:Ansible-K8 Tyler$ kubectl describe pod/awx-c574885db-pr9mt -n ansible-awx
Name: awx-c574885db-pr9mt
Namespace: ansible-awx
Priority: 0
Node: awx/172.21.35.110
Start Time: Thu, 17 Sep 2020 14:22:26 -0400
Labels: app=awx
pod-template-hash=c574885db
Annotations: cni.projectcalico.org/podIP: 10.42.146.111/32
cni.projectcalico.org/podIPs: 10.42.146.111/32
Status: Running
IP: 10.42.146.111
IPs:
IP: 10.42.146.111
Controlled By: ReplicaSet/awx-c574885db
Containers:
memcached:
Container ID: docker://e3ef284ab8dd8674e6110b00b1b982153e29a17fed731be2bb0ba98d326bf957
Image: memcached:alpine
Image ID: docker-pullable://memcached@sha256:4f7648c92fc89a2f6ec56e0281a051efe983b6de83a31e7a52bd4a1a8ed6308f
Port: 1121/TCP
Host Port: 0/TCP
Command:
memcached
-s
/var/run/memcached/memcached.sock
-a
0666
State: Running
Started: Thu, 17 Sep 2020 14:22:28 -0400
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/memcached from awx-memcached-socket (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-xfqh8 (ro)
redis:
Container ID: docker://31382428144c72883d53e2a1aa88838a17de7c1cd0af232bc28b287171a41877
Image: redis:latest
Image ID: docker-pullable://redis@sha256:1cfb205a988a9dae5f025c57b92e9643ec0e7ccff6e66bc639d8a5f95bba928c
Port: 6379/TCP
Host Port: 0/TCP
Args:
redis-server
/etc/redis.conf
State: Running
Started: Thu, 17 Sep 2020 14:22:29 -0400
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/etc/redis.conf from awx-redis-config (ro,path="redis.conf")
/var/run/redis from awx-redis-socket (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-xfqh8 (ro)
awx-web:
Container ID: docker://916c493f6fc6a5e7272985176932560f08e34534558b58bde0f322fb9a2c917e
Image: ansible/awx:14.1.0
Image ID: docker-pullable://ansible/awx@sha256:24b8f551e99a10a9c943966cf540b248c85f7c3daf4b24a3b0e7123affbca389
Port: 8052/TCP
Host Port: 0/TCP
State: Running
Started: Thu, 17 Sep 2020 14:22:30 -0400
Ready: True
Restart Count: 0
Requests:
cpu: 1
memory: 2Gi
Environment: <none>
Mounts:
/etc/nginx/nginx.conf from awx-nginx-conf (ro,path="nginx.conf")
/etc/tower/SECRET_KEY from awx-secret-key (ro,path="SECRET_KEY")
/etc/tower/conf.d/ from awx-application-credentials (ro)
/etc/tower/settings.py from awx-settings (ro,path="settings.py")
/supervisor.conf from awx-supervisor-web-config (ro,path="supervisor.conf")
/usr/bin/launch_awx.sh from awx-launch-awx-web (ro,path="launch_awx.sh")
/var/lib/awx/rsyslog from rsyslog-dir (rw)
/var/run/awx-rsyslog from rsyslog-socket (rw)
/var/run/memcached from awx-memcached-socket (rw)
/var/run/redis from awx-redis-socket (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-xfqh8 (ro)
/var/run/supervisor from supervisor-socket (rw)
awx-task:
Container ID: docker://c02f6e05f9af72c5de6c34b145af93f85cdfee418ec66888548e88c78ddd38c8
Image: ansible/awx:14.1.0
Image ID: docker-pullable://ansible/awx@sha256:24b8f551e99a10a9c943966cf540b248c85f7c3daf4b24a3b0e7123affbca389
Port: <none>
Host Port: <none>
Command:
/usr/bin/launch_awx_task.sh
State: Running
Started: Thu, 17 Sep 2020 14:22:31 -0400
Ready: True
Restart Count: 0
Requests:
cpu: 500m
memory: 1Gi
Environment:
SUPERVISOR_WEB_CONFIG_PATH: /supervisor.conf
AWX_SKIP_MIGRATIONS: 1
MY_POD_UID: (v1:metadata.uid)
MY_POD_IP: (v1:status.podIP)
Mounts:
/etc/tower/SECRET_KEY from awx-secret-key (ro,path="SECRET_KEY")
/etc/tower/conf.d/ from awx-application-credentials (ro)
/etc/tower/settings.py from awx-settings (ro,path="settings.py")
/supervisor.conf from awx-supervisor-web-config (ro,path="supervisor.conf")
/supervisor_task.conf from awx-supervisor-task-config (ro,path="supervisor_task.conf")
/usr/bin/launch_awx_task.sh from awx-launch-awx-task (ro,path="launch_awx_task.sh")
/var/lib/awx/rsyslog from rsyslog-dir (rw)
/var/run/awx-rsyslog from rsyslog-socket (rw)
/var/run/memcached from awx-memcached-socket (rw)
/var/run/redis from awx-redis-socket (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-xfqh8 (ro)
/var/run/supervisor from supervisor-socket (rw)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
awx-application-credentials:
Type: Secret (a volume populated by a Secret)
SecretName: awx-secrets
Optional: false
awx-secret-key:
Type: Secret (a volume populated by a Secret)
SecretName: awx-secret-key
Optional: false
awx-settings:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: awx-awx-configmap
Optional: false
awx-nginx-conf:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: awx-awx-configmap
Optional: false
awx-redis-config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: awx-awx-configmap
Optional: false
awx-launch-awx-web:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: awx-launch-awx
Optional: false
awx-supervisor-web-config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: awx-supervisor-config
Optional: false
awx-launch-awx-task:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: awx-launch-awx
Optional: false
awx-supervisor-task-config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: awx-supervisor-config
Optional: false
awx-redis-socket:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
awx-memcached-socket:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
supervisor-socket:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
rsyslog-socket:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
rsyslog-dir:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
default-token-xfqh8:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-xfqh8
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned ansible-awx/awx-c574885db-pr9mt to awx
Normal Created 6m29s kubelet, awx Created container memcached
Normal Pulled 6m29s kubelet, awx Container image "memcached:alpine" already present on machine
Normal Started 6m28s kubelet, awx Started container memcached
Normal Pulling 6m28s kubelet, awx Pulling image "redis:latest"
Normal Pulling 6m27s kubelet, awx Pulling image "ansible/awx:14.1.0"
Normal Pulled 6m27s kubelet, awx Successfully pulled image "redis:latest"
Normal Created 6m27s kubelet, awx Created container redis
Normal Started 6m27s kubelet, awx Started container redis
Normal Pulled 6m26s kubelet, awx Successfully pulled image "ansible/awx:14.1.0"
Normal Created 6m26s kubelet, awx Created container awx-web
Normal Started 6m26s kubelet, awx Started container awx-web
Normal Pulled 6m26s kubelet, awx Container image "ansible/awx:14.1.0" already present on machine
Normal Created 6m26s kubelet, awx Created container awx-task
Normal Started 6m25s kubelet, awx Started container awx-task
I would agree that this does appear to be a path issue for ANSIBLE_COLLECTIONS_PATHS. Is that environment variable something the installer would normally set? Since I'm using the awx-operator for installation, I'm wondering whether the variable would need to be set on the deployment, or whether this is caused by something being built incorrectly.
I have looked a bit further into the images running for AWX, and I noticed that the settings.py file in the container looks different from the one you linked, @AlanCoding.
Here is the config-map for the /etc/tower/settings.py mount on the container:
Tylers-MacBook-Pro:Ansible-K8 Tyler$ kubectl describe cm awx-awx-configmap -n ansible-awx
Name: awx-awx-configmap
Namespace: ansible-awx
Labels: app=awx
Annotations:
Data
====
environment:
----
DATABASE_USER=awx
DATABASE_NAME=awx
DATABASE_HOST='awx-postgres.ansible-awx.svc.cluster.local'
DATABASE_PORT='5432'
DATABASE_PASSWORD=awxpass
MEMCACHED_HOST='awx-memcached.ansible-awx.svc.cluster.local'
MEMCACHED_PORT='11211'
REDIS_HOST='awx-redis.ansible-awx.svc.cluster.local'
REDIS_PORT='6379'
AWX_SKIP_MIGRATIONS=true
nginx_conf:
----
worker_processes 1;
pid /tmp/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
server_tokens off;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /dev/stdout main;
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
sendfile on;
#tcp_nopush on;
#gzip on;
upstream uwsgi {
server 127.0.0.1:8050;
}
upstream daphne {
server 127.0.0.1:8051;
}
server {
listen 8052 default_server;
# If you have a domain name, this is where to add it
server_name _;
keepalive_timeout 65;
# HSTS (ngx_http_headers_module is required) (15768000 seconds = 6 months)
add_header Strict-Transport-Security max-age=15768000;
add_header Content-Security-Policy "default-src 'self'; connect-src 'self' ws: wss:; style-src 'self' 'unsafe-inline'; script-src 'self' 'unsafe-inline' *.pendo.io; img-src 'self' *.pendo.io data:; report-uri /csp-violation/";
add_header X-Content-Security-Policy "default-src 'self'; connect-src 'self' ws: wss:; style-src 'self' 'unsafe-inline'; script-src 'self' 'unsafe-inline' *.pendo.io; img-src 'self' *.pendo.io data:; report-uri /csp-violation/";
# Protect against click-jacking https://www.owasp.org/index.php/Testing_for_Clickjacking_(OTG-CLIENT-009)
add_header X-Frame-Options "DENY";
location /nginx_status {
stub_status on;
access_log off;
allow 127.0.0.1;
deny all;
}
location /static/ {
alias /var/lib/awx/public/static/;
}
location /favicon.ico {
alias /var/lib/awx/public/static/favicon.ico;
}
location /websocket {
# Pass request to the upstream alias
proxy_pass http://daphne;
# Require http version 1.1 to allow for upgrade requests
proxy_http_version 1.1;
# We want proxy_buffering off for proxying to websockets.
proxy_buffering off;
# http://en.wikipedia.org/wiki/X-Forwarded-For
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# enable this if you use HTTPS:
proxy_set_header X-Forwarded-Proto https;
# pass the Host: header from the client for the sake of redirects
proxy_set_header Host $http_host;
# We've set the Host header, so we don't need Nginx to muddle
# about with redirects
proxy_redirect off;
# Depending on the request value, set the Upgrade and
# connection headers
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
}
location / {
# Add trailing / if missing
rewrite ^(.*)$http_host(.*[^/])$ $1$http_host$2/ permanent;
uwsgi_read_timeout 120s;
uwsgi_pass uwsgi;
include /etc/nginx/uwsgi_params; proxy_set_header X-Forwarded-Port 443;
}
}
}
redis_conf:
----
unixsocket /var/run/redis/redis.sock
unixsocketperm 777
port 0
bind 127.0.0.1
settings:
----
import os
import socket
def get_secret():
if os.path.exists("/etc/tower/SECRET_KEY"):
return open('/etc/tower/SECRET_KEY', 'rb').read().strip()
ADMINS = ()
STATIC_ROOT = '/var/lib/awx/public/static'
PROJECTS_ROOT = '/var/lib/awx/projects'
JOBOUTPUT_ROOT = '/var/lib/awx/job_status'
SECRET_KEY = get_secret()
ALLOWED_HOSTS = ['*']
INTERNAL_API_URL = 'http://127.0.0.1:8052'
# Container environments don't like chroots
AWX_PROOT_ENABLED = False
# Automatically deprovision pods that go offline
AWX_AUTO_DEPROVISION_INSTANCES = True
CLUSTER_HOST_ID = socket.gethostname()
SYSTEM_UUID = '00000000-0000-0000-0000-000000000000'
CSRF_COOKIE_SECURE = False
SESSION_COOKIE_SECURE = False
SERVER_EMAIL = 'root@localhost'
DEFAULT_FROM_EMAIL = 'webmaster@localhost'
EMAIL_SUBJECT_PREFIX = '[AWX] '
EMAIL_HOST = 'localhost'
EMAIL_PORT = 25
EMAIL_HOST_USER = ''
EMAIL_HOST_PASSWORD = ''
EMAIL_USE_TLS = False
LOGGING['handlers']['console'] = {
'()': 'logging.StreamHandler',
'level': 'DEBUG',
'formatter': 'simple',
}
LOGGING['loggers']['django.request']['handlers'] = ['console']
LOGGING['loggers']['rest_framework.request']['handlers'] = ['console']
LOGGING['loggers']['awx']['handlers'] = ['console', 'external_logger']
LOGGING['loggers']['awx.main.commands.run_callback_receiver']['handlers'] = ['console']
LOGGING['loggers']['awx.main.tasks']['handlers'] = ['console', 'external_logger']
LOGGING['loggers']['awx.main.scheduler']['handlers'] = ['console', 'external_logger']
LOGGING['loggers']['django_auth_ldap']['handlers'] = ['console']
LOGGING['loggers']['social']['handlers'] = ['console']
LOGGING['loggers']['system_tracking_migrations']['handlers'] = ['console']
LOGGING['loggers']['rbac_migrations']['handlers'] = ['console']
LOGGING['loggers']['awx.isolated.manager.playbooks']['handlers'] = ['console']
LOGGING['handlers']['callback_receiver'] = {'class': 'logging.NullHandler'}
LOGGING['handlers']['task_system'] = {'class': 'logging.NullHandler'}
LOGGING['handlers']['tower_warnings'] = {'class': 'logging.NullHandler'}
LOGGING['handlers']['rbac_migrations'] = {'class': 'logging.NullHandler'}
LOGGING['handlers']['system_tracking_migrations'] = {'class': 'logging.NullHandler'}
LOGGING['handlers']['management_playbooks'] = {'class': 'logging.NullHandler'}
DATABASES = {
'default': {
'ATOMIC_REQUESTS': True,
'ENGINE': 'awx.main.db.profiled_pg',
'NAME': 'awx',
'USER': 'awx',
'PASSWORD': 'awxpass',
'HOST': 'awx-postgres.ansible-awx.svc.cluster.local',
'PORT': '5432',
}
}
if os.getenv("DATABASE_SSLMODE", False):
DATABASES['default']['OPTIONS'] = {'sslmode': os.getenv("DATABASE_SSLMODE")}
USE_X_FORWARDED_PORT = True
BROADCAST_WEBSOCKET_PORT = 8052
BROADCAST_WEBSOCKET_PROTOCOL = 'http'
Events: <none>
Looking at the settings key, there isn't any mention of AWX_ANSIBLE_COLLECTIONS_PATHS. Would the absence of this value cause it to default to the wrong path?
I was able to find the ConfigMap template in the awx-operator repo: https://github.com/ansible/awx-operator/blob/devel/roles/awx/templates/tower_config.yaml.j2#L19
After some testing, I was able to resolve this issue by adding AWX_ANSIBLE_COLLECTIONS_PATHS = '/var/lib/awx/vendor/awx_ansible_collections' to the settings key of configmap/awx-awx-configmap. The collection was then correctly found, and I was able to run vmware_vm_inventory.py and gather my VMware inventory.
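For anyone hitting the same problem, the line added to the settings key (i.e. the file mounted into the container as /etc/tower/settings.py) looks like this, matching the path where the image actually ships the bundled collections:

```python
# Added to the `settings` key of configmap/awx-awx-configmap, which is
# mounted into the container as /etc/tower/settings.py:
AWX_ANSIBLE_COLLECTIONS_PATHS = '/var/lib/awx/vendor/awx_ansible_collections'
```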
As that seems to have solved the problem, we are probably good to close this issue; it doesn't appear to be an image issue, just a settings issue for the awx-operator. I am going to submit a pull request to ansible/awx-operator to add the latest version of https://github.com/ansible/awx/blob/87e3d62684900f0ba7edb34a111854b3393aca4d/installer/roles/image_build/files/settings.py
Thanks for digging @frenchtoasters!
My gut reaction is that the operator is the wrong place for a fix. The operator shouldn't have the obligation of fixing a setting that the image set wrong, particularly related to pathing. I'd point back to:
And that comes down to BASE_DIR
Taken together, these two lines are obviously wrong. Where we use BASE_DIR, what we _really_ mean to indicate is the /var/lib/awx/ folder, or an equivalent. Other settings that use BASE_DIR also look very wrong.
Is there a scenario where log root should be /var/lib/awx/venv/awx/lib/python3.6/site-packages/awx? I don't think so.
Maybe I'm taking this too far, but I would prefer to introduce a new setting for /var/lib/awx and replace BASE_DIR with that. We used to avoid this folder in some development scenarios, but these days I don't think there are any exceptions where this needs to be customized.
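To make the path mismatch concrete (the join expression below is an assumption about how the default is built; the two paths themselves are taken from the logs in this issue): when BASE_DIR points at the installed awx package inside the virtualenv, anything joined onto it lands inside site-packages instead of under /var/lib/awx:

```python
import os

# BASE_DIR as it appears to be resolved in this image: the location of the
# installed awx package inside the virtualenv (path from the job_env above).
BASE_DIR = '/var/lib/awx/venv/awx/lib/python3.6/site-packages/awx'

# A default built on BASE_DIR (assumed expression, shown for illustration):
derived = os.path.join(BASE_DIR, 'vendor', 'awx_ansible_collections')
print(derived)  # the broken path AWX actually searched

# Where the image really ships the collections (from the `find` output above):
actual = '/var/lib/awx/vendor/awx_ansible_collections'
print(derived == actual)  # False: the derived default misses the real location
```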
@AlanCoding I'd agree that the operator should not fail due to a change like this; however, to resolve it, a change will still be needed on the operator side. This is because the settings.py file is mounted into the container at run time from a ConfigMap. Additionally, that file is slightly different from the base one because it runs on Kubernetes; here is the file in question: https://github.com/ansible/awx-operator/blob/devel/roles/awx/templates/tower_config.yaml.j2
So there are two issues here that would need to be resolved:
ansible/awx: I thought about this issue some last night and was looking at making a script of some kind to pull, diff, and update, but it's a pretty hacky solution.