This is a really strange error I only seem to be experiencing on one particular instance, with a particular set of tags, in our testing environment, but I thought I'd report it here regardless.
I have Salt reporting failed states to Slack, and I noticed I was getting this during one of our Consul states.
ID: unzip-consul
Function: archive.extracted
Name: /opt/consul/bin/consul_0.6.3
Result: False
Comment: Unable to manage file: [Errno 8] _ssl.c:493: EOF occurred in violation of protocol
Started: 14:38:07.716777
Duration: 350.336 ms
Changes:
Relevant info from consul state
# Template state
{{ install(app='consul', version='0.6.3', sha1sum='a291f5ba462414addcbbbefb1dc6c710b0b0b8ca') }}
[...]
# Consul state
unzip-{{ app }}:
  archive.extracted:
    - source: https://releases.hashicorp.com/{{ app }}/{{ version }}/{{ app }}_{{ version }}_linux_amd64.zip
    - source_hash: sha1={{ sha1sum }}
    - name: /opt/consul/bin/{{ app }}_{{ version }}
    - archive_format: zip
    - user: consul
    - if_missing: /opt/consul/bin/{{ app }}_{{ version }}/{{ app }}
    - require:
      - file: create-{{ app }}-dir
      - file: create-{{ app }}-data-dir
I first saw this on salt-call 2016.11.1 (Carbon), and since I had only recently upgraded I downgraded to salt-minion 2015.8.8.2 (Beryllium), but the issue persisted. By the way, I downgraded via Terraform, so this was a completely new instance built from an AMI I've been using elsewhere to bring up dynamic test instances.
This is the ONLY instance to exhibit this error; other Beryllium instances don't hit it. Some Stack Overflow responses claim upgrading Python is a valid fix, while others say to force TLSv1. I'm not sure where to begin with this one. If it were easily reproducible I'd have a better idea of what the problem is, but the only difference between instances right now is the AWS tags, and that obviously shouldn't matter here.
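For anyone else trying to narrow this down, a bare-bones handshake against the same host takes Salt out of the picture entirely. This is just a quick stdlib sketch I'd use for checking, not anything from the Salt codebase:

# Bare TLS handshake against the release server, no Salt involved.
# If this raises the same "EOF occurred in violation of protocol" SSLError,
# the failure is in the handshake itself rather than in anything Salt does.
import socket
import ssl

sock = socket.create_connection(('releases.hashicorp.com', 443), timeout=10)
try:
    tls = ssl.wrap_socket(sock, ssl_version=ssl.PROTOCOL_SSLv23)
    print('negotiated cipher: %s' % (tls.cipher(),))
    tls.close()
except ssl.SSLError as exc:
    print('handshake failed: %s' % exc)

Running that under the same interpreter Salt uses should show whether forcing TLSv1 or upgrading Python is the relevant knob on a given box.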
As a temporary workaround, I've posted the zips in question to S3 and updated the Salt state to get everything back in working order.
Salt version output
sudo salt-call --versions
Salt Version:
Salt: 2015.8.8.2
Dependency Versions:
Jinja2: 2.7.2
M2Crypto: 0.21.1
Mako: Not Installed
PyYAML: 3.10
PyZMQ: 14.3.1
Python: 2.6.9 (unknown, Sep 1 2016, 23:34:36)
RAET: Not Installed
Tornado: 4.2.1
ZMQ: 3.2.5
cffi: Not Installed
cherrypy: Not Installed
dateutil: Not Installed
gitdb: Not Installed
gitpython: Not Installed
ioflo: Not Installed
libgit2: Not Installed
libnacl: Not Installed
msgpack-pure: Not Installed
msgpack-python: 0.4.6
mysql-python: Not Installed
pycparser: Not Installed
pycrypto: 2.6.1
pygit2: Not Installed
python-gnupg: Not Installed
smmap: Not Installed
timelib: Not Installed
System Versions:
dist:
machine: x86_64
release: 3.14.48-33.39.amzn1.x86_64
$ cat /etc/os-release
NAME="Amazon Linux AMI"
VERSION="2015.03"
ID="amzn"
ID_LIKE="rhel fedora"
VERSION_ID="2015.03"
PRETTY_NAME="Amazon Linux AMI 2015.03"
ANSI_COLOR="0;33"
CPE_NAME="cpe:/o:amazon:linux:2015.03:ga"
HOME_URL="http://aws.amazon.com/amazon-linux-ami/"
$ openssl version
OpenSSL 1.0.1k-fips 8 Jan 2015
@rodriguezsergio we just recently released Amazon AMI specific Salt packages with version 2016.11.0*. Previously we just pointed users to the redhat6 packages on Amazon Linux. The reason I explain this is that I'm wondering whether, when you upgraded, you used the redhat6 packages or our native Amazon Linux packages. You should be able to find this information by running rpm -qa | grep -i salt or by checking what your repo files look like in /etc/yum.repos.d.
This is an important distinction because I know some users in the past have had issues with the redhat6 packages and their Python version. I'm wondering if upgrading to the AMI packages would resolve the issue. Thanks
Also, to mention: if you are on an upgrade path, the only way to upgrade from the redhat6 packages to the native Amazon Linux packages is to remove and install as documented here: http://repo.saltstack.com#amzn
I have the same problem with Salt:
ID: consul|install-consul
Function: archive.extracted
Name: /opt/terraform
Result: False
Comment: Failed to cache https://releases.hashicorp.com/terraform/0.8.5/terraform_0.8.5_linux_amd64.zip: [Errno 8] _ssl.c:492: EOF occurred in violation of protocol
Started: 09:43:00.406742
Duration: 766.142 ms
Changes:
This is happening locally on a Vagrant machine as well as during a Packer build in AWS.
salt-call 2016.11.2 (Carbon)
Downgrading to salt-call 2015.5.10 (Lithium) resolved the problem for now :/
I just hit this problem on two instances...
# rpm -qa | grep -i salt
salt-amzn-repo-2016.3-1.ami.noarch
salt-minion-2016.3.5-1.el6.noarch
salt-2016.3.5-1.el6.noarch
This one has been running for around a month and downloads from Hashicorp used to work...
# rpm -qa | grep -i salt
salt-amzn-repo-2016.3-1.ami.noarch
salt-minion-2016.3.4-1.el6.noarch
salt-2016.3.4-1.el6.noarch
I guess they changed some SSL stuff on their server.
I was able to comment out a try/except and got a stack trace:
[DEBUG ] Requesting URL https://releases.hashicorp.com/consul/0.7.0/consul_0.7.0_linux_amd64.zip using GET method
[DEBUG ] Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/salt/states/file.py", line 1747, in managed
    **kwargs
  File "/usr/lib/python2.6/site-packages/salt/modules/file.py", line 3646, in get_managed
    sfn = __salt__['cp.cache_file'](source, saltenv)
  File "/usr/lib/python2.6/site-packages/salt/modules/cp.py", line 459, in cache_file
    result = _client().cache_file(path, saltenv)
  File "/usr/lib/python2.6/site-packages/salt/fileclient.py", line 178, in cache_file
    return self.get_url(path, '', True, saltenv, cachedir=cachedir)
  File "/usr/lib/python2.6/site-packages/salt/fileclient.py", line 717, in get_url
    **get_kwargs
  File "/usr/lib/python2.6/site-packages/salt/utils/http.py", line 487, in query
    **req_kwargs
  File "/usr/lib64/python2.6/site-packages/tornado/httpclient.py", line 102, in fetch
    self._async_client.fetch, request, **kwargs))
  File "/usr/lib64/python2.6/site-packages/tornado/ioloop.py", line 444, in run_sync
    return future_cell[0].result()
  File "/usr/lib64/python2.6/site-packages/tornado/concurrent.py", line 214, in result
    raise_exc_info(self._exc_info)
  File "<string>", line 3, in raise_exc_info
SSLError: [Errno 8] _ssl.c:493: EOF occurred in violation of protocol
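For reference, the traceback bottoms out in Tornado's HTTPClient rather than anywhere Salt-specific, so a stripped-down fetch with the same client should fail the same way outside of Salt. This is only a sketch, not code taken from Salt itself:

# Minimal reproduction outside of Salt: fetch the same URL with the
# Tornado client that the traceback shows salt.utils.http going through.
from tornado.httpclient import HTTPClient

url = 'https://releases.hashicorp.com/consul/0.7.0/consul_0.7.0_linux_amd64.zip'
client = HTTPClient()
try:
    response = client.fetch(url, method='HEAD')
    print('fetch succeeded: %s' % response.code)
except Exception as exc:  # the SSLError above surfaces here on the failing hosts
    print('fetch failed: %r' % exc)
finally:
    client.close()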
@Ch3LL please take a look... I'm not very familiar with Tornado so I'm not sure how to dig deeper.
@cmclaughlin okay, I was able to replicate this using the redhat6 packages on Amazon Linux, which it looks like you are using, but I can't replicate it on a CentOS 7 VM. So this is specific to Amazon Linux and the redhat6 packages.
When I installed the native Amazon packages this started working. This is exactly why we started providing Amazon-specific packages: issues kept cropping up due to python26 vs python27 package differences. The new Amazon packages are built on python27, which is the default version of Python on Amazon Linux, so they get rid of a ton of issues users kept running into.
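If it helps anyone compare the two interpreters, the stdlib ssl module exposes a couple of capability flags that differ between the python26 and python27 builds. This is just a hedged check of what the interpreter Salt runs under can do, not a confirmed root cause:

# Print TLS-related capabilities of the interpreter Salt is running on.
# In the Python 2 line, client SNI support and the TLS 1.2 constant only
# arrived in 2.7.9, so both are missing from the python26-based packages.
import ssl
import sys

print('python      : %s' % sys.version.split()[0])
print('openssl     : %s' % getattr(ssl, 'OPENSSL_VERSION', 'unknown (pre-2.7)'))
print('has SNI     : %s' % getattr(ssl, 'HAS_SNI', False))
print('has TLSv1.2 : %s' % hasattr(ssl, 'PROTOCOL_TLSv1_2'))

On the python27-based Amazon packages both flags should come back True, assuming the interpreter is 2.7.9 or newer.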
When I installed the native Amazon packages I noticed it installed the following:
Installing:
salt-minion noarch 2016.11.2-1.amzn1 salt-amzn-latest 35 k
Installing for dependencies:
python27-futures noarch 3.0.3-1.3.amzn1 amzn-main 30 k
python27-msgpack x86_64 0.4.6-2.amzn1 salt-amzn-latest 84 k
python27-tornado x86_64 4.2.1-2.amzn1 salt-amzn-latest 944 k
python27-zmq x86_64 14.5.0-3.amzn1 salt-amzn-latest 571 k
salt noarch 2016.11.2-1.amzn1 salt-amzn-latest 8.1 M
I do think we still need to find a solution to this, since 2016.3.5 is still failing and that is a currently supported version of Salt.
@cmclaughlin does upgrading to the native Amazon packages work for you? You will need to remove and install.
Excellent investigative work! Now that you mention it, I hacked salt-bootstrap to get the EL6 packages on AWS Linux. Glad to know I can undo that... I'll give it a shot.
I confirmed the new AWS packages fix this problem downloading Consul.
Side note - I'm not quite ready to upgrade to Salt 2016.11, so I'll live with this for a while. In particular, salt-bootstrap doesn't set up the new repo... but there's a PR pending:
Thanks for confirming that @cmclaughlin !
I've pinged the maintainer of that repo internally and they will review that PR this week.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
If this issue is closed prematurely, please leave a comment and we will gladly reopen the issue.