salt-ssh doesn't pack custom grains into thin tarball

Created on 2 Aug 2019 · 31 comments · Source: saltstack/salt

Description of Issue

I've been struggling to get custom grains working with salt-ssh. Inspecting the state on minions after running commands with salt-ssh -Wt, I've noticed that while it packs in the built-in grains, it doesn't even seem to attempt to pack in the custom grains I've written under _grains.

Note that this doesn't change if I run salt-ssh minionid saltutil.sync_grains, so this may be related to bug #53806.

Setup

Write a custom grain, drop it in _grains, and verify it works locally with salt-call (a minimal example is sketched below).
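
For reference, a custom grain module is just a Python file in _grains whose public functions each return a dict of grain names to values. A minimal sketch, where the file name and grain key are illustrative (the zzzz key matches the example output further down this thread):

# _grains/custom_grain.py -- file name and grain key are illustrative
# Each public function in a custom grains module returns a dict that
# Salt merges into the minion's grains.
def custom_grain():
    return {"zzzz": "works"}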

Steps to Reproduce Issue

1. Run salt-ssh -Wt minion_id grains.items and see that the grain doesn't show up.
2. Look at the deployed salt-ssh thin dir on the minion and inspect the pyall/salt/grains dir; notice that the custom grains are not present (a quick way to list what's inside the thin tarball is sketched below).
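
As a quick check on the master side, something like this lists every grains-related member of the thin tarball. The tarball path is the default cache location mentioned later in this thread; adjust it for your setup:

# list_thin_grains.py -- quick check of what the thin tarball contains
import tarfile

THIN = "/var/cache/salt/master/thin/thin.tgz"  # default path; adjust as needed

with tarfile.open(THIN, "r:gz") as tar:
    for member in tar.getnames():
        if "grains" in member:
            print(member)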

Versions Report

Salt Version:
           Salt: 2019.2.0

Dependency Versions:
           cffi: Not Installed
       cherrypy: Not Installed
       dateutil: 2.7.3
      docker-py: Not Installed
          gitdb: Not Installed
      gitpython: Not Installed
          ioflo: Not Installed
         Jinja2: 2.10.1
        libgit2: Not Installed
        libnacl: Not Installed
       M2Crypto: Not Installed
           Mako: Not Installed
   msgpack-pure: Not Installed
 msgpack-python: 0.6.1
   mysql-python: Not Installed
      pycparser: Not Installed
       pycrypto: 2.6.1
   pycryptodome: Not Installed
         pygit2: Not Installed
         Python: 3.7.3 (default, Apr  3 2019, 05:39:12)
   python-gnupg: Not Installed
         PyYAML: 3.13
          PyZMQ: 18.0.2
           RAET: Not Installed
          smmap: Not Installed
        timelib: Not Installed
        Tornado: 4.5.3
            ZMQ: 4.3.1

System Versions:
           dist: Ubuntu 19.04 disco
         locale: UTF-8
        machine: x86_64
        release: 5.0.0-21-generic
         system: Linux
        version: Ubuntu 19.04 disco
Labels: Confirmed, Documentation, doc-rework

All 31 comments

Could you try with a lowercase w? The capital -W is --rand-thin-dir, whereas the lowercase -w is --wipe, which may be closer to what you're intending here.

salt-ssh -wt

That option was added to address this issue: https://github.com/saltstack/salt/pull/34974

-w doesn't produce different results, and neither does manually wiping the dir on the remote host between runs. Not that I would expect it to, since inspecting the remote dir shows the custom grains aren't being copied at all. I'm having a hard time figuring out which part of the code is responsible for packing custom grains into the thin tarball; if I could get a pointer to that, I might be able to do some debugging and track down what's happening here.

You might look in salt/utils/thin.py

@saltstack/team-core @team-ssh Any ideas here?

Yeah, I took a look at that file, but I couldn't find where custom grains get added to the list it builds there; I don't know the codebase well enough, I guess. A sense of what log messages I should be seeing around custom grain packing at debug log level would also help track it down, for anyone who has this working. ;)

Hi @nergdron, I'm going to take a quick look at this today and see what's going on, or at least get started on figuring out what's not working correctly.

@nergdron I know you commented on https://github.com/saltstack/salt/issues/33629 but I just want to follow up and ask if any of that was useful. Otherwise, I'm going to see what I can reproduce.

@xeacott Unfortunately, I have already tried the stuff suggested there; I only opened this ticket after I'd found that one and investigated the options presented.

Gotcha! And to answer your question about where to start: salt/utils/thin.py, def gen_thin (around L339), is my starting point, just as @twangboy mentioned. Also, looking at the log files on the minion is a good place for clues.

So here's what I've done...

  1. Run salt-call to get the local grains of my machine, and they return.
  2. Write a custom grain and put it in /srv/salt/_grains/custom_grain.
  3. Run salt-call saltutil.sync_all, which shows this:

local:
    ----------
    ...
    executors:
    grains:
        - grains.custom_grain
    log_handlers:
    ...

  4. Run salt-call to get the local grains of my machine again, and they now return this:

local:
    ----------
    zzzz:
        works

  5. Run salt-ssh -Wt managed grains.items, and afterwards receive this:

local:
    ----------
    zzzz:
        works

OK, so it seems like the saltutil sync stuff doesn't see my salt dirs correctly.

We're running fully masterless (serverless) and as regular users, so it's entirely possible there's something in the sync code that doesn't like this setup. I've got everything in ~/.salt, with a ~/.salt/Saltfile like this:

salt-call:
  config_dir: /home/tessa/.salt/conf/
salt-ssh:
  config_dir: /home/tessa/.salt/conf/
  ssh_wipe: False

Under ~/.salt/conf/minion, I have:

id: local
root_dir: /tmp/.salt-root
file_client: local
file_roots:
  base:
    - /home/tessa/.salt/_states
module_dirs:
  - /home/tessa/.salt/

And that file is symlinked to ~/.salt/conf/master, so there's only one unified config to deal with between salt-call and salt-ssh.

When I run sync_all, I get:

local:
    ----------
    beacons:
    clouds:
    engines:
    grains:
    log_handlers:
    matchers:
    modules:
    output:
    pillar:
    proxymodules:
    renderers:
    returners:
    sdb:
    serializers:
    states:
    thorium:
    utils:

But when I look at one of my custom grains locally, it works:

$ salt-call grains.get home_dir
local:
    /home/tessa

So it's finding and executing the custom grains under ~/.salt/_grains, but it isn't finding them when it comes time to sync. I guess that's what's breaking salt-ssh as well.

Yeah, I'm wondering if salt-ssh doesn't see a custom file root.

Right after the thin is generated, when we create an SSH object, there's one last method that looks at any other information it needs to pull in. It's right there that it pulls in my custom_grain, and it needs to know it's in salt://. I know from salt-call --help that you can change the file root, but with salt-ssh I don't know.

Anyway, with these extra files loaded in, they get placed into /var/cache/salt/master/..../. I think a previous suggestion was to blow this dir away so that a -Wt run would re-create it.

Sorry for the late follow-up, I've been pulled away on other work. But yeah, if you're talking about the cache dir on the remote minion, I've definitely blown that away manually, and it never seems to copy over my custom grain.

Hmmm, okay, thanks for the reply. I'll try changing my file roots to match yours and see how that plays out...

I think I am also experiencing this issue.

This is what I found out:
If I delete the tarball /var/cache/salt/master/thin/thin.tgz and then look at the grains with salt-ssh myminion grains.items, everything seems to work fine. I can fetch the grains multiple times and they still work. They also still work if I use salt-ssh --refresh myminion grains.items to force a cache refresh.

Once I issue salt-ssh myminion saltutil.sync_all, everything in {{ thin_dir }}/running_data/var/cache/salt/minion/extmods/ just disappears, and all custom grains/pillars/modules etc. stop working.

With a custom root for Salt, unfortunately my thin tarball never contains the custom grains and modules, so I definitely think there's something messed up in the sync / tarball build code.

Had to divert my attention, sorry guys. Somewhere right around where we look for the extra files that get packed into that tmp tar, we look at the file roots, and it needs to load opts['file_roots']. Still looking into where that needs to go...

Ahhhh, it's all been a red herring! It was me misunderstanding the docs. From how the docs read, I thought I had to set module_dirs: to where all my special dirs (like _grains and _modules) live for them to get picked up. But what I really needed is this:

file_roots:
  base:
    - /home/tessa/.salt
    - /home/tessa/.salt/_states

I didn't have the first one before, and so of course URLs like salt://_grains/mine.py weren't resolving. The thing that confused me is that I also need the second one, because without it Salt can't actually find /home/tessa/.salt/_states/top.sls, even though it's in the special _states dir. Is there perhaps something in the code that doesn't look for _states under the file root if it's not /srv/salt? Note that it also doesn't seem to find custom stuff under ~/.salt/_pillar either; I have to set pillar_roots: separately.

In any case: not a bug! It just might be helpful to have more thorough examples for using a full Salt tree, including custom everything, as a regular user instead of root. The current docs make it a pretty trial-and-error process to work through this all and figure it out. Moreover, it'd be nice to be able to just set file_roots to the place where all your _* dirs live and have that Just Work. My config right now looks like this to get everything sorted out:

file_client: local
file_roots:
  base:
    - /home/tessa/.salt
    - /home/tessa/.salt/_states
log_level: error
module_dirs:
  - /home/tessa/.salt
pillar_roots:
  base:
    - /home/tessa/.salt/_pillar
root_dir: /tmp/.salt-root
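
For anyone landing here with the same confusion, the layout that config assumes looks roughly like this, using only paths mentioned in this thread (mine.py is the illustrative grain file name from the salt:// URL above):

~/.salt/
    Saltfile
    conf/
        minion
        master -> minion    (symlink, shared by salt-call and salt-ssh)
    _grains/
        mine.py             (custom grain, reachable as salt://_grains/mine.py)
    _pillar/
    _states/
        top.sls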

Okay, I understand this now. This was also my own lack of understanding when it comes to the Saltfile in general. And I will agree: when I was investigating the documentation for the Saltfile itself, it did not cover this case whatsoever.

@mtorromeo See if this addresses your issue, too. Since I do not have a keen understanding of the Saltfile, I'll make sure the documentation gets updated exhaustively with an example such as this so that future users do not run into this.

I'll make a new issue about it so it gets done and sorted.
Edit: when that issue is created, I'll also close this one.

thanks @xeacott, appreciate your help and patience with this!

If you felt like opening an enhancement request to simplify this use case, I'd also love to see that land in some future release. 😉

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

If this issue is closed prematurely, please leave a comment and we will gladly reopen the issue.

Not stale

Thank you for updating this issue. It is no longer marked as stale.


I believe this PR should get this addressed: https://github.com/saltstack/salt/pull/55806

I'll close the ticket with the merged PR referenced above. If there are still issues, this can be reopened.
