We have configured the Splunk returner on the master with master_job_cache. This is the config file in master.d; there is no config on the minion, as described here:
splunk_http_forwarder:
  token: 'SOME-TOKEN'
  indexer: 'https://splunkindexer'
  sourcetype: 'salt'
  index: 'salt_index'

master_job_cache: splunk
When running locally on the salt master we get this in the master log:
sudo salt-call test.ping --return splunk
2018-12-11 17:29:47,299 [salt.utils.job :45 ][ERROR ][10985] Returner 'splunk' does not support function prep_jid
2018-12-11 17:29:47,300 [salt.master :1795][ERROR ][10985] Error in function _return:
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/salt/master.py", line 1788, in run_func
ret = getattr(self, func)(load)
File "/usr/lib/python2.7/site-packages/salt/master.py", line 1587, in _return
self.opts, load, event=self.event, mminion=self.mminion)
File "/usr/lib/python2.7/site-packages/salt/utils/job.py", line 46, in store_job
raise KeyError(emsg)
KeyError: u"Returner 'splunk' does not support function prep_jid"
When running this on the salt master:
sudo salt 'salt.dev*' test.ping --return splunk
This is the output on the salt cli:
Salt request timed out. The master is not responding. You may need to run your command with `--async` in order to bypass the congested event bus. With `--async`, the CLI tool will print the job id (jid) and exit immediately without listening for responses. You can then use `salt-run jobs.lookup_jid` to look up the results of the job in the job cache later.
And this is the master log output:
2018-12-11 17:32:24,442 [salt.master :2141][ERROR ][10986] Failed to allocate a jid. The requested returner 'splunk' could not be loaded.
2018-12-11 17:32:24,444 [salt.transport.zeromq:691 ][ERROR ][10986] Some exception handling a payload from minion
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/salt/transport/zeromq.py", line 687, in handle_message
ret, req_opts = yield self.payload_handler(payload)
File "/usr/lib64/python2.7/site-packages/tornado/gen.py", line 870, in run
value = future.result()
File "/usr/lib64/python2.7/site-packages/tornado/concurrent.py", line 214, in result
raise_exc_info(self._exc_info)
File "/usr/lib64/python2.7/site-packages/tornado/gen.py", line 215, in wrapper
result = func(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/salt/master.py", line 1022, in _handle_payload
'clear': self._handle_clear}[key](load)
File "/usr/lib/python2.7/site-packages/salt/master.py", line 1053, in _handle_clear
ret = getattr(self.clear_funcs, cmd)(load), {'fun': 'send_clear'}
File "/usr/lib/python2.7/site-packages/salt/master.py", line 2087, in publish
payload = self._prep_pub(minions, jid, clear_load, extra, missing)
File "/usr/lib/python2.7/site-packages/salt/master.py", line 2179, in _prep_pub
self.event.fire_event({'minions': minions}, clear_load['jid'])
File "/usr/lib/python2.7/site-packages/salt/utils/event.py", line 741, in fire_event
salt.utils.stringutils.to_bytes(tag),
File "/usr/lib/python2.7/site-packages/salt/utils/stringutils.py", line 63, in to_bytes
return to_str(s, encoding, errors)
File "/usr/lib/python2.7/site-packages/salt/utils/stringutils.py", line 118, in to_str
raise TypeError('expected str, bytearray, or unicode')
TypeError: expected str, bytearray, or unicode
2018-12-11 17:32:24,445 [tornado.general :452 ][ERROR ][10986] Uncaught exception, closing connection.
Traceback (most recent call last):
File "/usr/lib64/python2.7/site-packages/zmq/eventloop/zmqstream.py", line 444, in _handle_events
self._handle_send()
File "/usr/lib64/python2.7/site-packages/zmq/eventloop/zmqstream.py", line 487, in _handle_send
status = self.socket.send_multipart(msg, **kwargs)
File "/usr/lib64/python2.7/site-packages/zmq/sugar/socket.py", line 363, in send_multipart
i, rmsg,
TypeError: Frame 0 (u'Some exception handling minion...) does not support the buffer interface.
2018-12-11 17:32:24,445 [tornado.application:611 ][ERROR ][10986] Exception in callback None
Traceback (most recent call last):
File "/usr/lib64/python2.7/site-packages/tornado/ioloop.py", line 865, in start
handler_func(fd_obj, events)
File "/usr/lib64/python2.7/site-packages/tornado/stack_context.py", line 274, in null_wrapper
return fn(*args, **kwargs)
File "/usr/lib64/python2.7/site-packages/zmq/eventloop/zmqstream.py", line 444, in _handle_events
self._handle_send()
File "/usr/lib64/python2.7/site-packages/zmq/eventloop/zmqstream.py", line 487, in _handle_send
status = self.socket.send_multipart(msg, **kwargs)
File "/usr/lib64/python2.7/site-packages/zmq/sugar/socket.py", line 363, in send_multipart
i, rmsg,
TypeError: Frame 0 (u'Some exception handling minion...) does not support the buffer interface.
Output of curl:
[user@saltmaster ~]$ curl -k -v splunk:8088
* About to connect() to splunk port 8088 (#0)
* Trying 10.0.0.1...
* Connected to splunk (10.0.0.1) port 8088 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.29.0
> Host: splunk:8088
> Accept: */*
>
< HTTP/1.1 404 Not Found
< Date: Tue, 11 Dec 2018 16:36:03 GMT
< Content-Type: text/html; charset=UTF-8
< X-Content-Type-Options: nosniff
< Content-Length: 223
< Connection: Keep-Alive
< X-Frame-Options: SAMEORIGIN
< Server: Splunkd
<
<!doctype html><html><head><meta http-equiv="content-type" content="text/html; charset=UTF-8"><title>404 Not Found</title></head><body><h1>Not Found</h1><p>The requested URL was not found on this server.</p></body></html>
* Connection #0 to host splunk left intact
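Note that a 404 on the root URL is expected; the HTTP Event Collector only answers under /services/collector. A quick way to verify the token and index from the master (this follows the standard HEC request format; the host and token below are just the placeholders from the config above) would be something like:

curl -k https://splunk:8088/services/collector/event \
  -H 'Authorization: Splunk SOME-TOKEN' \
  -d '{"event": "test from curl", "sourcetype": "salt", "index": "salt_index"}'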
CentOS Linux release 7.6.1810 (Core)
Linux devs0241 3.10.0-862.14.4.el7.x86_64 #1 SMP Wed Sep 26 15:12:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
salt 2018.3.3 (Oxygen) Master/Minion
@mruepp Thank you for reporting this issue.
Is there a target release for this bug?
Hi. I'm running into the same bug with salt 2019.2.0 (Fluorine). Any idea when it will be addressed?
Another 2019.2.0 user here with the exact same issue. Any workarounds?
So, for the record, my two cents on this one: I don't think this is possible. Splunk would not make a great master_job_cache. Splunk isn't a database that can be queried for small amounts of data or hit with lots of requests, both of which a master_job_cache needs to do.
I can see merit in adding event_returner functionality, but not a full master_job_cache or ext_job_cache.
See https://docs.saltstack.com/en/latest/ref/returners/ for a list of the functions needed for each type of returner.
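Roughly, the split looks like the sketch below (function names are taken from those returner docs; the bodies are placeholders, not the actual splunk module):

# Every returner implements this; it handles a single job return.
def returner(ret):
    '''Send one job return to the backend.'''
    pass

# An event returner only needs to add this one function on top of returner().
def event_return(events):
    '''Receive a list of events from the master event bus.'''
    pass

# A master_job_cache / ext_job_cache additionally has to answer queries
# about jobs, which is where Splunk falls short:

def prep_jid(nocache=False, passed_jid=None):
    '''Generate/verify a job id before the job is published
    (the function named in the KeyError above).'''
    pass

def save_load(jid, load, minions=None):
    '''Store the job metadata (the "load") for a jid.'''
    pass

def get_load(jid):
    '''Fetch the job metadata for a jid (used by salt-run jobs.lookup_jid).'''
    pass

def get_jid(jid):
    '''Fetch all minion returns for a jid.'''
    pass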
Ah yes. I just tried to list recent jobs and couldn't. I removed the ext_job_cache: splunk declaration and I was able to list my jobs. So your assessment is spot on.
I can see merit in adding event_returner functionality, but not a full master_job_cache or ext_job_cache.
@whytewolf - Would this need to be configured to run on the master, or would it need to be in a pillar for each minion? My hope is the former. It would be much easier for me to send data to Splunk from a single server (the saltmaster) rather than having to whitelist traffic from individual minions.
An event_returner would be set up on the master.
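If/when the splunk returner gains an event_return function, the master-side config would look roughly like this (a sketch reusing the settings from the original report; event_return is the existing master option that routes event-bus data to a returner):

event_return: splunk

splunk_http_forwarder:
  token: 'SOME-TOKEN'
  indexer: 'https://splunkindexer'
  sourcetype: 'salt'
  index: 'salt_index'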