Salt-bootstrap: Python YAML module may not be present when creating YAML from JSON.

Created on 11 Nov 2016 · 14 comments · Source: saltstack/salt-bootstrap

Description

When Salt is not obtained from "git" and either the "-J" or "-j" argument is passed, an "import error" occurs if the appropriate Python YAML module was not already present before running the bootstrap.

Setup using Vagrant

Vagrantfile

Vagrant.configure("2") do |config|
  config.vm.define "centos7" do |centos7| 
    centos7.vm.box = "centos/7"
    centos7.vm.hostname = "centos7"
  end
  config.vm.define "ubuntu16" do |ubuntu16|
    ubuntu16.vm.box = "ubuntu/xenial64"
    ubuntu16.vm.hostname = "ubuntu16"
  end
end

Steps to Reproduce Issue

The steps below should reproduce the issue. Replacing "ubuntu16" with "centos7" should show the same thing on CentOS.

Commands:

vagrant up
vagrant ssh ubuntu16
sudo -i
cd /tmp
curl -o bootstrap_salt.sh -L https://bootstrap.saltstack.com
chmod 0755 bootstrap_salt.sh
/tmp/bootstrap_salt.sh -D -J '{"syndic_master":"192.168.1.124"}' -F -c /tmp -M -S stable

The develop branch of salt-bootstrap does the same when replacing the URL above with https://bootstrap.saltstack.com/develop.

Debug info:

 *  INFO: Running config_salt()
 * DEBUG: Traceback (most recent call last):
  File "<string>", line 1, in <module>
ImportError: No module named yaml
 * ERROR: Python error encountered. This is likely due to passing in a malformed JSON string. Please use -D to see stacktrace.
 * ERROR: Failed to run config_salt()!!!

Versions and Systems

Verified on CentOS 7.2 and Ubuntu 16.04. I also examined the code on the salt-bootstrap "develop" branch when checking the order of calls.

The error occurs prior to Salt getting installed, so the Salt version is not involved. This was tried on both the released and develop versions of bootstrap_salt. The output below should be from the develop version.

ubuntu@ubuntu16:/tmp$ /tmp/bootstrap_salt.sh -v
/tmp/bootstrap_salt.sh -- Version 2016.10.25

Detailed info

Obtaining Salt via "git" has a side effect of installing the Python YAML library, which is why that method was omitted from the "I Found a Bug" instructions. The various install_XXXX_git_deps() functions cause the YAML packages to be installed.

Installing the appropriate Python YAML library by hand prior to running bootstrap_salt.sh also allows the bootstrap command to succeed.

The conversion of the passed JSON to YAML is done in config_salt(), prior to installing any Salt packages that would bring in the YAML module as a dependency. The bootstrap script deliberately does the configuration before installing Salt, so that the daemons launch with the correct settings.
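The conversion is roughly of this shape (a sketch only; `json_to_yaml` is a hypothetical helper, not the actual bootstrap code). It makes the failure mode obvious: if PyYAML is absent, the inline Python exits with the same "No module named yaml" message shown in the debug output above.

```shell
# Sketch (hypothetical helper): convert a JSON config string to YAML the
# way config_salt() effectively does, guarding against a missing PyYAML.
json_to_yaml() {
    python3 - "$1" <<'PYEOF'
import json, sys
try:
    import yaml
except ImportError:
    sys.exit("No module named yaml")
print(yaml.safe_dump(json.loads(sys.argv[1]), default_flow_style=False), end="")
PYEOF
}
```

For example, `json_to_yaml '{"syndic_master": "192.168.1.124"}'` emits `syndic_master: 192.168.1.124` when PyYAML is installed, and fails cleanly otherwise.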

Possible solutions

Ensure that the Python YAML packages get installed, if not already present, in one of the following places:

  • When running the part of config_salt() that needs the YAML module, or
  • If -j or -J is passed

The second option would avoid re-checking each time config_salt() has to convert JSON to YAML. Currently, JSON may need to be handled up to twice: once for -j and once for -J.

Bug Fixed Pending Verification

All 14 comments

I got a chance to look a bit more into the code.

config_salt() calls __overwrite_config() to do the actual conversion, and that function assumes the Python YAML module is available. The script does check for the module when picking which Python interpreter to call, but then calls the one it chose regardless; it never verifies that good_python actually has the YAML module.

    # If python does not have yaml installed we're on Arch and should use python2
    if python -c "import yaml" 2> /dev/null; then
        good_python=python
    else
        good_python=python2
    fi

The next thing it does is run the inline script using the good_python it just set, which assumes the YAML module is available.
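One possible hardening of the snippet above, as a sketch (the function name is hypothetical, not bootstrap-salt.sh code): walk the candidate interpreters and only accept one that can actually import the required module, failing loudly if none can.

```shell
# Sketch (hypothetical helper): pick the first interpreter that can import
# the given module, instead of assuming the chosen one has it.
find_python_with_module() {
    _mod=$1
    for _py in python python2 python3; do
        if command -v "$_py" > /dev/null 2>&1 && \
           "$_py" -c "import $_mod" 2> /dev/null; then
            echo "$_py"
            return 0
        fi
    done
    return 1
}
```

With this shape, `good_python=$(find_python_with_module yaml) || exit 1` would surface the missing dependency as a clear bootstrap error instead of an inline-script traceback.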

I have only briefly looked at the way all the system dependencies are handled, so I likely do not fully understand the overall philosophy. Hopefully the suggestion below is not too far off base with respect to the design.

The DEP_INSTALL_FUNC is called prior to calling config_salt(). It seems the fix could be done somewhere in the appropriate DEP_INSTALL_FUNC chain for the type and version of Linux. If done there, the Arch Linux chain would have to deal with which Python to use, as in the snippet above, when determining whether the YAML module is present.
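A rough sketch of what that could look like in the dependency-install chain (hypothetical function names, and the package names `PyYAML` / `python-yaml` are the usual ones for these distros but should be verified per release):

```shell
# Sketch (not actual bootstrap-salt.sh code): ensure PyYAML from the
# distro's dependency-install step, using managed packages where possible.
__check_pyyaml() {
    for _py in python python3; do
        "$_py" -c "import yaml" 2> /dev/null && return 0
    done
    return 1
}

install_centos_stable_deps_yaml_sketch() {
    # Assumed RHEL/CentOS package name, installed via yum.
    __check_pyyaml || yum install -y PyYAML
}

install_ubuntu_stable_deps_yaml_sketch() {
    # Assumed Debian/Ubuntu package name, installed via apt-get.
    __check_pyyaml || apt-get install -y python-yaml
}
```

The per-distro functions only run the package manager when the module is actually missing, which keeps the check out of config_salt() itself.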

I prefer to stay with the managed packages where possible or I would use git for the Salt installs.

@brianholland99 Thank you very much for filing this bug. You're right that we're not checking whether the imports are there before running the Python call. I wrote this based on the assumption that Salt would be installed (with all of its needed deps, which include yaml) before we run the config overwrite functions. I forgot to consider passing -c without running this through git tags, and assumed those packages would already be installed. We'll need to get this fixed up.

@rallytime This should be resolved by PR #987
@brianholland99 Please give it a try.

Ah nice, thanks for grabbing that @vutny, as always. :)

@brianholland99 Please let us know how that fix goes.

It does solve the YAML import issue. However, it creates a slight issue of its own: it leaves some residue (an unaccepted, unintended minion name) on the master in at least one scenario.

I believe a likely reason the configuration was originally done prior to installation is distributions such as Ubuntu. It seems unfortunate, but for some reason the Ubuntu packages start daemons as part of the install. On CentOS, the install, configure, and then start sequence works perfectly. On Ubuntu, however, installing the salt packages starts the daemons, which use whatever configuration is available for their initial run. The later configuration and restart make them run with the updated data.
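For what it's worth, Debian and Ubuntu do provide a standard knob for the behavior described above (shown here for illustration; this is not what bootstrap-salt.sh does): a `policy-rc.d` script that makes `invoke-rc.d` refuse to start services while packages install.

```shell
# Sketch: deny all service starts during package installation.
# The real path on Debian/Ubuntu is /usr/sbin/policy-rc.d; a scratch
# path is used here so the sketch can run unprivileged.
policy_rc=${POLICY_RC:-/tmp/policy-rc.d.demo}
cat > "$policy_rc" <<'EOF'
#!/bin/sh
# Exit 101 tells invoke-rc.d "action forbidden", so daemons stay stopped.
exit 101
EOF
chmod +x "$policy_rc"
# ... apt-get install salt-minion salt-master ...
# rm -f "$policy_rc"   # re-enable service starts afterwards
```

With the policy file in place during the install, the minion would never run with the pre-configuration defaults, sidestepping the stray-key problem below.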

At least one scenario that demonstrates the issue is when the host name salt resolves on the box. The public Vagrant box ubuntu/xenial64 seems to run fine with the Vagrantfile in the first post above: the host name salt does not resolve there, so the initial run of the minion does not end up connecting to anything.

However, if the name salt does resolve, whether statically or looked up via the default domain set on the box, the minion will connect to that system with the original configuration.

To test this out I did the following:

  • Put a copy of the new bootstrap-salt.sh in my Vagrant configuration directory. (so ubuntu VM can access it via /vagrant)
  • Added salt to /etc/hosts (the quick way to make it resolve)
  • Ran the command - /vagrant/bootstrap-salt.sh -D -J '{"syndic_master":"192.168.1.124"}' -i mintest -F -c /tmp -M -S stable

On salt-master before bootstrap of minion:

# salt-key -l un
Unaccepted Keys:

On salt-master after bootstrap of minion:

# salt-key -l un
Unaccepted Keys:
mintest
ubuntu-xenial.localdomain

It had the name ubuntu-xenial.localdomain when the minion daemon first ran, which is the name the minion daemon picked out of /etc/hosts. The minion_id was then set to mintest and the daemon restarted to connect with that minion_id. Both keys above came from the same machine, as one might expect. If the real name is accepted but the initial name is not cleared, it will also cause a denied entry the next time a VM using that box is brought up.

In my Vagrant setup, I preseed the minion keys with keys already accepted on the master so I might not notice.

There may be other issues due to the daemons being started prior to being configured, but the above was what I could think of quickly and set up a test to check. Note: I did not check whether both names are always guaranteed to show up. There may be timing issues that cause the first one not to appear (i.e., will the minion run long enough to contact the master with the original configuration before being configured and restarted?). But it did appear in all the samples I ran on Ubuntu.
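Cleaning up the residue on the master is straightforward; as a sketch (the key name is taken from the example salt-key output above, and the command is guarded so it is a no-op where salt-key is not installed):

```shell
# Sketch: delete the stray unaccepted key left by the pre-configuration
# daemon start. salt-key -d deletes a key; -y skips the confirmation.
stray_id="ubuntu-xenial.localdomain"
if command -v salt-key > /dev/null 2>&1; then
    salt-key -d "$stray_id" -y
fi
```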

Okay, that was a silly attempt to fix a larger issue on a Friday afternoon :smile:
My shame, I forgot about the auto-starting services default policy on Debian and derivatives.

I think we could resolve this by configuring Salt right after the dependencies installation, and just make sure the Python YAML module is installed prior to the config step.
This requires patching several distros' dependency installation functions... I'll try to handle that tomorrow.

That sounds like a reasonable approach, but it is a lot more work than it would be without the auto-starting services. I only recently started installing Debian distros and still dislike the policy of services running before their configurations are customized locally.

@brianholland99 I've amended my PR to cover Debian distros.
Also, added Python YAML module to the dependencies on RHEL, Arch Linux and FreeBSD. Other systems have been already covered.

As a side note: that Debian auto-start-by-default policy is really a legacy of the times before mature configuration management tools (such as Salt) existed. Combining it with the power of debconf and the Debian Installer, you can "preseed" packages to install while applying configuration changes. That's how a fully automated installation with minimal scripting could be done. It is still used in some projects.

@vutny The new code properly configures the JSON argument prior to installing the Salt utilities and solves the real issue I had: wanting to pass JSON from Vagrant. I can now pass the minion ID in JSON as well as the master configuration. The YAML dependencies work in the tests that I tried.

However, the test example I gave above still causes an issue. I built that test to demonstrate the problem of installing prior to configuring. It turns out, though, that some items, such as the minion_id, were independently configured after installation even in the original code base. So my original test of your first patch actually showed a different ordering issue than the one I intended to show.

New test:

/vagrant/bootstrap-salt.sh -D -J '{"syndic_master":"192.168.1.124"}' -j '{"id":"mintest"}' -F -c /tmp -M -S stable

Notice that I replaced the -i mintest with JSON for the minion configuration. That works perfectly fine in your patched code.

Below, I kept the -i mintest from my earlier test but removed the JSON for the master configuration, so that I could test both the newly patched and the original code.

Modified old test:

/vagrant/bootstrap-salt.sh -D -i mintest -F -c /tmp -M -S stable

This command still triggers the issue in both the new patch and the currently released code (both end up with the new node connecting with two minion IDs: one before the minion_id file is dropped and one after).

When I originally created the test, I did not check the code closely enough to notice that Salt was installed before /etc/salt/minion_id was dropped, rather than the config_salt() routine dropping the file. Sorry I did not flag that as a separate issue. I only noticed when I reran the tests with your new patch and saw that the issue still occurred. After glancing at the code and seeing that the minion ID argument is handled in a section after the install, I ran the test on the current release to verify that the issue existed before the patch. The first test in this response properly shows that your patch does work in the proper order.

So the patch definitely solves the issue that I originally noticed, but it may make sense to move the other configuration items prior to the install.

Line 6317 on your branch - # Install Salt.

Some of the sections just below that may also make sense to move before the install. At a minimum, dropping the files for the master address and the minion_id should likely be done before the install. I did not look at the other sections closely enough to tell whether they were intended to adjust configuration or to do other post-installation work. Even if they were intended for configuration, someone might be using the post-install functions for things that really must happen after the install, so moving something like the post-install function could break people using it that way.

I haven't finished reading though all of this yet, but one thing to note is that there is already an open issue about -i generating multiple keys that I haven't had a chance to look into yet. Please see issue #959.

@rallytime #959 may very well be the issue that still shows up even after the patch by @vutny; it is separate from the config_salt() and install ordering. I will comment on that ticket as to what the cause may be. If it is the same issue, then the remaining issue I have (unrelated to YAML) can be handled by that ticket.

@brianholland99 Thank you for the deep investigation and testing!

The master address (-A) and minion ID (-i) should definitely be configured prior to the Salt packages installation, because these options are naturally part of the configuration step. Good point about the post-installation functions; it is better to keep them where they are now.

I've updated my PR accordingly.

@vutny I looked at your code changes and reran tests. Both issues seem to be resolved on the platforms that I tried.

Thanks.

Thanks for testing that @brianholland99. I've merged in the change, so I'll close this out.
