Ansible: [Bug / Regression] mysql_db import fail to decompress dumps

Created on 12 Jan 2017 · 53 comments · Source: ansible/ansible

_From @bmalynovytch on June 2, 2016 15:43_

ISSUE TYPE

Bug Report (regression)

COMPONENT NAME

mysql_db (import)

ANSIBLE VERSION
ansible 2.1.0.0
  config file = /Users/benjamin/xxxxxxx/ansible.cfg
  configured module search path = Default w/o overrides
CONFIGURATION
[defaults]
host_key_checking = False
forks=500
pipelining=True
retry_files_enabled = False

gathering = smart
fact_caching = jsonfile
fact_caching_connection = ./.tmp/ansible
fact_caching_timeout = 3600

[ssh_connection]
control_path = /tmp/ansible-ssh-%%h-%%p-%%r
ssh_args = -o ControlMaster=auto -o ControlPersist=30m
OS / ENVIRONMENT

Deployment OS: Mac OS X 10.11.5, python v2.7.11 with pyenv/virtualenv
Destination OS: ubuntu jessie/sid

SUMMARY

Using mysql_db to import Gzipped or Bzipped SQL dumps used to work like a charm with ansible 2.0.2.0
Now, compressed imports fail with a broken pipe error, whether the dump is .gz or .bz2.
Strangely, this does not happen with a small compressed file (1.8k gzip-compressed, 6k uncompressed).
Maybe related to https://blog.nelhage.com/2010/02/a-very-subtle-bug/

STEPS TO REPRODUCE

Try to import a compressed (large enough) SQL dump with mysql_db.
Failure happens with a 3.5 MB gzip-compressed / 20 MB uncompressed dump.

- name: Restore database
  mysql_db:
  args:
    name: my_db
    state: import
    target: /path_to_backups/backup-pre-release.sql.bz2
    login_host: "{{ db.host }}"
    login_port: "{{ db.port }}"
    login_user: "{{ db.user }}"
    login_password: "{{ db.passwd }}"
EXPECTED RESULTS

Import should just work.

ACTUAL RESULTS
fatal: [xxxxxx]: FAILED! => {"changed": false, "failed": true, 
"msg": 
"bzip2: I/O or other error, bailing out.  Possible reason follows.
 bzip2: Broken pipe
       Input file = /opt/xxxxxx/backup-pre-release.sql.bz2, output file = (stdout)
"}

_Copied from original issue: ansible/ansible-modules-core#3835_

Labels: affects_2.1, bug, database, has_pr, module, mysql, community, test

Most helpful comment

Apparently this issue appears only if the database is already imported (on the second and every subsequent run). Could be more idempotent..

All 53 comments

_From @ansibot on July 30, 2016 16:32_

@Jmainguy ping, this issue is waiting for your response.
click here for bot help

_From @Jmainguy on July 30, 2016 20:10_

I tried to recreate this on ansible-2.2.0-0.git201605131739.e083fa3.devel.el7.centos.noarch and was unable to reproduce.

I imported a 240 MB .tar.gz (took a few hours, but it worked).

This was on centos, can you try again with devel on ubuntu and let me know if this is still happening?

_From @ansibot on September 8, 2016 20:59_

@Jmainguy, ping. This issue is still waiting on your response.

_From @Jmainguy on September 9, 2016 18:19_

ansibot "needs_info"

_From @bmalynovytch on September 9, 2016 19:03_

The bug concerns any size of compressed dumps: the module doesn't uncompress them anymore.
It might have been fixed in recent versions, but I don't have time to try it: the platforms on which I use the mysql_db module now handle decompression before calling mysql_db, and I'm not working on them for now.
I don't have time to test that part again for the moment, sorry.

!needs_info

_From @ilyapoz on September 27, 2016 18:24_

Any workarounds so far? Still reproducible on ubuntu trusty in a virtualbox
ansible 2.1.1.0

_From @ansibot on September 27, 2016 18:43_

@Jmainguy, ping. This issue is still waiting on your response.

_From @ilyapoz on September 27, 2016 18:44_

Sorry, a possible workaround for small DBs is not compressing the dump.

_From @ansibot on December 9, 2016 19:50_

This repository has been locked. All new issues and pull requests should be filed in https://github.com/ansible/ansible

Please read through the repomerge page in the dev guide. The guide contains links to tools which automatically move your issue or pull request to the ansible/ansible repo.

_From @ulrith on December 20, 2016 11:35_

I see the same behavior with Ansible 2.2.0.0 on Ubuntu 16.04.
Funny, but the 'unarchive' module task also fails if I try to decompress the archive before feeding a dump to the mysql_db module. >:-<
Too bad...

_From @sachavuk on January 4, 2017 17:38_

Hi,

I have the same issue on Debian 7.8 with ansible 2.2.0.0, with the same error message :cry:

Good evening,
I hope it was correct to move the issue to this repo as suggested in Move Issues and PRs to new Repo.

As @ulrith and @sachavuk, I have this issue too. In my case it occurs on RHEL 7.3 with ansible 2.2.0.0.

The task in my play looks like:

- name: Restore database
  mysql_db:
    name: db_name
    state: import
    target: /tmp/db_dump.sql.bz2

During the playbook run I receive the following error message:

TASK [role-name : Restore database] ******************************************
fatal: [xx.xx.xx.xx]: FAILED! => {"changed": false, "failed": true, "msg": "\nbzip2: I/O or other error, bailing out.  Possible reason follows.\nbzip2: Broken pipe\n\tInput file = /tmp/db_dump.sql.bz2, output file = (stdout)\n"}

I don't know how to debug an Ansible module. If there is any useful information I could provide, please tell me how to gather it.

Regards,
Tronde

Hi there,
Not sure what's up exactly with these labels. But remember that version 2.2 is affected, too.

@Tronde I am still unable to reproduce this bug using latest devel. Can you try to reproduce using latest devel? I followed the instructions above with a compressed database larger than 3.5 MB when compressed.

[root@phy01 ~]# ansible -i localhost localhost -m mysql_db -a "state=dump target=/tmp/db.sql.bz2 name=diaspora"                                                                               
 [WARNING]: Host file not found: localhost

 [WARNING]: provided hosts list is empty, only localhost is available

localhost | SUCCESS => {
    "changed": true, 
    "db": "diaspora", 
    "msg": ""
}


[root@phy01 ~]# ansible -i /tmp/hosts all -m mysql_db -a "name=icannotreproducethisbug state=import target=/tmp/db.sql.bz2"
centos7.soh.re | SUCCESS => {
    "changed": true, 
    "db": "icannotreproducethisbug", 
    "msg": ""
}



[root@centos7 ~]# ls -ltrh /tmp/db.sql.bz2 
-rw-r--r--. 1 root root 79M Jan 24 15:23 /tmp/db.sql.bz2

[root@phy01 ~]# rpm -qa ansible
ansible-2.3.0-100.git201701131819.d25a708.devel.el7.centos.noarch

@ansibot 'needs_info'

@Jmainguy I was able to reproduce the problem with the latest devel:
```
$ ansible-playbook --version
ansible-playbook 2.3.0 (devel 6a6fb28af5) last updated 2017/01/24 18:33:11 (GMT +200)
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```

Running $ ansible-playbook my-it-brain.yml with the task:
```
- name: Restore database
  mysql_db:
    name: my_db
    state: import
    target: /tmp/my_db.sql.bz2
```

results in:
```
TASK [role-name : Restore database] **************
fatal: [10.0.2.4]: FAILED! => {"changed": false, "failed": true, "msg": "\nbzip2: I/O or other error, bailing out.  Possible reason follows.\nbzip2: Broken pipe\n\tInput file = /tmp/my_db.sql.bz2, output file = (stdout)\n"}
```

```
$ ll roles/my-it-brain/files/
total 129332
-rw-rw-r--. 1 tronde tronde 9754808 Jan 8 20:45 my_db.sql.bz2
-rw-rw-r--. 1 tronde tronde 122675935 Jan 10 13:18 another_file.tar.bz2
```

Please tell me, if you need further information.

I am testing against centos7 with mariadb-server. Are you testing against another DB? There is clearly something different between our two environment setups, because I am unable to reproduce this. I just compiled and installed latest devel again to be sure.

[root@phy01 test]# ansible-playbook -i hosts  mysql_db.py 

PLAY [Please break] ************************************************************

TASK [import a bz2 file and break hopefully] ***********************************
changed: [centos7.soh.re]

PLAY RECAP *********************************************************************
centos7.soh.re             : ok=1    changed=1    unreachable=0    failed=0   

[root@phy01 test]# cat mysql_db.py 
---

- name: Please break
  hosts: all
  gather_facts: false
  tasks:
    - name: import a bz2 file and break hopefully
      mysql_db:
        name: my_db
        state: import
        target: /tmp/db.sql.bz2
[root@phy01 test]# rpm -qa ansible
ansible-2.3.0-100.git201701241457.8d4246c.devel.el7.centos.noarch

[root@centos7 ~]# ls -ltrh /tmp/db.sql.bz2
-rw-r--r--. 1 root root 79M Jan 24 15:23 /tmp/db.sql.bz2
[root@centos7 ~]# rpm -qa | grep -i maria
mariadb-5.5.52-1.el7.x86_64
mariadb-server-5.5.52-1.el7.x86_64
mariadb-libs-5.5.52-1.el7.x86_64

Is it possible bz2 is failing to decompress because the disk is running out of space or something? Why is bz2 failing in your env (and in those of the other testers reproducing this)? bzip2: I/O or other error, bailing out.

Hi,

here are some information about my env.

Controller:
```
$ cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.3 (Maipo)
```

Target Node:
~~~
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 14.04.5 LTS
Release: 14.04
Codename: trusty

$ dpkg -l mysql-server
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name Version Architecture Description
+++-===========================-==================-==================-============================================================
ii mysql-server 5.5.54-0ubuntu0.14 all MySQL database server (metapackage depending on the latest v
~~~

The node should have enough disk space and RAM to extract the bz2 file:
```
df -h
Filesystem Size Used Avail Use% Mounted on
udev 486M 4.0K 486M 1% /dev
tmpfs 100M 476K 99M 1% /run
/dev/sda1 12G 1.8G 9.4G 16% /
none 4.0K 0 4.0K 0% /sys/fs/cgroup
none 5.0M 0 5.0M 0% /run/lock
none 497M 0 497M 0% /run/shm
none 100M 0 100M 0% /run/user
```

I increased the memory to 2048 MB to be sure not to run into an out-of-memory issue. But I'm still getting the same error message.

I have no idea why I cannot reproduce this. Can you try to manually bunzip2 that file and see if it shows the I/O error?

Welcome to Ubuntu 14.04.5 LTS (GNU/Linux 3.13.0-32-generic x86_64)

 * Documentation:  https://help.ubuntu.com/

  System information as of Tue Jan 24 20:07:20 CET 2017

  System load:  0.25              Processes:           86
  Usage of /:   4.2% of 38.02GB   Users logged in:     0
  Memory usage: 17%               IP address for eth0: 192.168.122.237
  Swap usage:   0%

  Graph this data and manage this system at:
    https://landscape.canonical.com/

New release '16.04.1 LTS' available.
Run 'do-release-upgrade' to upgrade to it.

*** System restart required ***
Last login: Tue Jan 24 20:07:20 2017 from 192.168.122.1
root@ubuntu1404:~# df -Th
Filesystem                      Type      Size  Used Avail Use% Mounted on
udev                            devtmpfs  487M  8.0K  487M   1% /dev
tmpfs                           tmpfs     100M  432K   99M   1% /run
/dev/mapper/ubuntu1404--vg-root ext4       39G  1.8G   35G   5% /
none                            tmpfs     4.0K     0  4.0K   0% /sys/fs/cgroup
none                            tmpfs     5.0M     0  5.0M   0% /run/lock
none                            tmpfs     498M     0  498M   0% /run/shm
none                            tmpfs     100M     0  100M   0% /run/user
/dev/sda1                       ext2      236M   40M  184M  18% /boot
root@ubuntu1404:~# ls -ltrh /tmp/
total 79M
-rw-r--r-- 1 root root 79M Jan 24 20:07 db.sql.bz2
root@ubuntu1404:~# bunzip /tmp/db.sql.bz2 
No command 'bunzip' found, did you mean:
 Command 'runzip' from package 'rzip' (universe)
 Command 'funzip' from package 'unzip' (main)
 Command 'ebunzip' from package 'eb-utils' (universe)
 Command 'unzip' from package 'unzip' (main)
 Command 'bunzip2' from package 'bzip2' (main)
 Command 'gunzip' from package 'gzip' (main)
 Command 'lunzip' from package 'lunzip' (universe)
bunzip: command not found
root@ubuntu1404:~# bunzip2 /tmp/db.sql.bz2 
root@ubuntu1404:~# ls -ltrh /tmp/
total 123M
-rw-r--r-- 1 root root 123M Jan 24 20:07 db.sql
root@ubuntu1404:~# dpkg -l mysql-server
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name                                      Version                   Architecture              Description
+++-=========================================-=========================-=========================-========================================================================================
un  mysql-server                              <none>                    <none>                    (no description available)
root@ubuntu1404:~# mysql
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 45
Server version: 5.5.54-0ubuntu0.14.04.1 (Ubuntu)

Copyright (c) 2000, 2016, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| my_db              |
| mysql              |
| performance_schema |
+--------------------+
4 rows in set (0.00 sec)

mysql> Bye
root@ubuntu1404:~# free -m
             total       used       free     shared    buffers     cached
Mem:           994        839        154          0         30        527
-/+ buffers/cache:        281        712
Swap:         1023          3       1020

Using bunzip2 to extract the file locally on the target node works just fine without any error.

Unfortunately I have no idea how to help you reproducing this error. :-(

I am unable to reproduce this bug. That being said, supposing it does exist, it has to do with stdout ending before the compression tool thinks it should. Guessing running out of ram / swap.

I added this code https://github.com/ansible/ansible/commit/1608163b26611171406b3d53135cf4aee63f1765

Which was reverted with this code https://github.com/ansible/ansible/commit/aa79810cc83f9f76a2cb9c117782b93056c5487f

So it seems going from decompressing to a file, importing, then compressing back up was nixed in favor of decompressing to stdout and importing from that stream.

I imagine going back to disk will fix this, at the cost of speed and of disk space while the playbook is running (it would compress the file back after the import).

Thoughts?
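For what it's worth, the two approaches under discussion can be sketched like this. This is a minimal illustration, not the module's actual code; `import_cmd` stands in for the real `mysql` invocation (the demo uses `cat`):

```python
import gzip
import shutil
import subprocess
import tempfile

def import_via_pipe(dump_path, import_cmd):
    # Current behaviour: stream `gzip -dc` straight into the importer's
    # stdin. Fast and needs no scratch space, but the decompressor sees a
    # broken pipe if the importer exits early.
    p1 = subprocess.Popen(["gzip", "-dc", dump_path], stdout=subprocess.PIPE)
    p2 = subprocess.Popen(import_cmd, stdin=p1.stdout,
                          stdout=subprocess.DEVNULL)
    p1.stdout.close()
    rc = p2.wait()
    p1.wait()
    return rc

def import_via_tempfile(dump_path, import_cmd):
    # Older behaviour: decompress to disk first, then feed the plain file.
    # Costs time and disk space, but decouples the two processes.
    with tempfile.NamedTemporaryFile(suffix=".sql") as plain:
        with gzip.open(dump_path, "rb") as src:
            shutil.copyfileobj(src, plain)
        plain.flush()
        plain.seek(0)
        return subprocess.Popen(import_cmd, stdin=plain,
                                stdout=subprocess.DEVNULL).wait()

# Tiny demo dump; `cat` stands in for the real mysql client.
with tempfile.NamedTemporaryFile(suffix=".sql.gz", delete=False) as f:
    dump = f.name
with gzip.open(dump, "wb") as f:
    f.write(b"SELECT 1;\n")
rc_pipe = import_via_pipe(dump, ["cat"])
rc_file = import_via_tempfile(dump, ["cat"])
```

The trade-off is exactly as described above: the pipe is fast, the temp file is robust against the reader going away early.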

Guessing running out of ram / swap.

Well, I gave it a try again today. Using two different MySQL dumps I encountered the same error. In both cases I had at least 120 MB of RAM left. I thought that should be enough.

My understanding of this matter is not informed enough to give you any helpful thoughts on this. But if you are going back to disk, I would be happy to give it another run.

Hello,
After an upgrade to ansible v2.2.1.0-1 the import works:

 - name: Import DB
   mysql_db: name=testdb state=import target=/var/tmp/testdb.sql.bz2

Hi, unfortunately I still get the "gzip: stdout: Broken pipe" error when I try to import a gzip-compressed dump. Importing from an uncompressed MySQL dump works fine:

...
TASK [Import DB] ***************************************************************
fatal: [db.example.com]: FAILED! => {"changed": false, "failed": true, "msg": "\ngzip: stdout: Broken pipe\n"}
...
...
fatal: [db.example.com]: FAILED! => {
    "changed": false, 
    "failed": true, 
    "invocation": {
        "module_args": {
            "collation": "", 
            "config_file": "/root/.my.cnf", 
            "connect_timeout": 30, 
            "encoding": "", 
            "login_host": "localhost", 
            "login_password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", 
            "login_port": 3306, 
            "login_unix_socket": null, 
            "login_user": "root", 
            "name": "my_db", 
            "quick": true, 
            "single_transaction": false, 
            "ssl_ca": null, 
            "ssl_cert": null, 
            "ssl_key": null, 
            "state": "import", 
            "target": "/media/my_db.sql.gz"
        }, 
        "module_name": "mysql_db"
    }, 
    "msg": "\ngzip: stdout: Broken pipe\n"
}
...
$ ansible --version
ansible 2.2.1.0
  config file = /etc/ansible/ansible.cfg
  configured module search path = Default w/o overrides
# file /media/my_db.sql.gz
/media/my_db.sql.gz: gzip compressed data, last modified: Fri Feb 24 12:03:25 2017, from Unix
# gzip --version
gzip 1.6
...
# lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description:    Debian GNU/Linux 8.7 (jessie)
Release:    8.7
Codename:   jessie

Can confirm, getting "gzip: stdout: Broken pipe" as well.
```
fatal: [zabbix-proxy1]: FAILED! => {
    "changed": false,
    "failed": true,
    "invocation": {
        "module_args": {
            "collation": "",
            "config_file": "/root/.my.cnf",
            "connect_timeout": 30,
            "encoding": "",
            "login_host": "localhost",
            "login_password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
            "login_port": 3306,
            "login_unix_socket": null,
            "login_user": "zabbix",
            "name": "zabbix",
            "quick": true,
            "single_transaction": false,
            "ssl_ca": null,
            "ssl_cert": null,
            "ssl_key": null,
            "state": "import",
            "target": "/usr/share/doc/zabbix-proxy-mysql/schema.sql.gz"
        },
        "module_name": "mysql_db"
    },
    "msg": "\ngzip: stdout: Broken pipe\n"
}
```


```
$ ansible --version
ansible 2.2.1.0
  config file = /etc/ansible/ansible.cfg
  configured module search path = Default w/o overrides
```

Target node
```
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 16.04.2 LTS
Release: 16.04
Codename: xenial

$ file /usr/share/doc/zabbix-proxy-mysql/schema.sql.gz
/usr/share/doc/zabbix-proxy-mysql/schema.sql.gz: gzip compressed data, was "schema.sql", last modified: Wed Dec 21 08:11:14 2016, from Unix
```

Any update on this issue?

Hi,
The problem is that @Jmainguy was not able to reproduce the issue. I read that ansible 2.3.0 was released in the last few days. As soon as this version is available in EPEL I'm going to test the module again to see if the issue still exists.

In my case I had the same error as OP, but when I tried to load the file manually on the remote host (bunzip2 -c file.tar.bz2 | mysql db), I got the error "MySQL has gone away" - the problem was max_allowed_packet being too small.

HTH.
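If you hit the same symptom, it may be worth checking the server-side limit before blaming the module. The values below are illustrative only:

```shell
# Check the current limit on the target server:
mysql -e "SHOW VARIABLES LIKE 'max_allowed_packet'"
# Raise it for the running server (64 MB here), then retry the import:
mysql -e "SET GLOBAL max_allowed_packet = 67108864"
```

Note that SET GLOBAL only lasts until the server restarts; a permanent change belongs in the `[mysqld]` section of the server's config file.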

Apparently this issue appears only if the database is already imported (on the second and every subsequent run). Could be more idempotent..

Want to add a "me too" for this issue. I'm importing a gzipped SQL file and getting Broken Pipe. My DB is 54 MB compressed, ~400 MB uncompressed. Using ansible 2.3 on WSL. Target box is Ubuntu 16.04.2 / MariaDB 10. Workaround for now is to uncompress the db file using a shell task (because unarchive doesn't support this op) and then import it.
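That workaround can be sketched as two tasks; the paths below are illustrative (`gunzip -k` keeps the original archive, available in gzip 1.6+):

```yaml
- name: Decompress the dump first (workaround for the broken-pipe error)
  shell: gunzip -kf /tmp/my_db.sql.gz
  args:
    creates: /tmp/my_db.sql

- name: Restore database from the uncompressed dump
  mysql_db:
    name: my_db
    state: import
    target: /tmp/my_db.sql
```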

Hey guys, here's another reason for the broken pipe error. Maybe it explains some of the "can't reproduce" reports above?

In my case, it boiled down to using passwords with special characters. Ansible is trying to be smart and quote passwords before sending it off to mysql, unfortunately mysql assumes the quotes are part of the password.

Basically, the mysql_db module is doing:

        # cmd looks something like this (passwd shown instead of the
        # reserved word `pass`):
        cmd = ['mysql', '--user=%s' % pipes.quote(user),
               '--password=%s' % pipes.quote(passwd), ...]
        # comp_prog_path is the decompression tool chosen from the target
        # extension (gzip, bzip2, etc.)
        p1 = subprocess.Popen([comp_prog_path, '-dc', target],
                              stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        p2 = subprocess.Popen(cmd, stdin=p1.stdout,
                              stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        (stdout2, stderr2) = p2.communicate()
        p1.stdout.close()
        p1.wait()
        if p1.returncode != 0:
            stderr1 = p1.stderr.read()
            return p1.returncode, '', stderr1
        else:
            return p2.returncode, stdout2, stderr2

So, when the password is 'test1234!', ansible tries to pass ['mysql','--user=admin',"--password='test1234!'",'-D','testdb'] to subprocess (note the quotes around the password).

When exec passes this off to mysql, Mysql fails with ERROR 1045 (28000): Access denied for user 'admin'@'x.x.x.x' (using password: YES), and both p1 and p2 have non-zero error returns. Unfortunately, ansible only returns the broken pipe error (from the gzip, bzip, etc commands).

I monkey patched my version of ansible's mysql_db.py to remove pipes.quote() from the password and all worked fine.

Given the difficulty of troubleshooting the many different causes of broken pipes, I would suggest:

1) only returning the p1 error if the pipe wasn't broken (not sure of the best way to handle this with subprocess, but bash does it right: gzip -dc foo.gz | mysql --password='wrong' does not show a broken pipe error for gzip)
2) not using pipes.quote() for passwords
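The quoting mistake is easy to demonstrate outside Ansible. A minimal sketch, using `shlex.quote` (the Python 3 successor of `pipes.quote`) and `printf` standing in for `mysql`, to show what the child process actually receives:

```python
import shlex
import subprocess

password = "test1234!"

# shlex.quote wraps anything containing shell-special characters ('!' here)
# in single quotes. That is correct for a string interpolated into a shell
# command line -- and wrong everywhere else.
quoted = shlex.quote(password)

# With a list argv and shell=False, no shell ever strips those quotes:
# each element reaches the child process verbatim.
received = subprocess.check_output(["printf", "%s", quoted])
print(received)  # b"'test1234!'" -- the quotes became part of the password
```

Since `Popen` is called with an argv list and no shell, the quoting is not just unnecessary but harmful.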

Submitted PR #26504 for this.

Any chance we can get the fix into the 2.3 series as well?

cc @Xyon @michaelcoburn @oneiroi @tolland

cc @Alexander198961

cc @Andersson007

cc @kurtdavis

Time appropriate greetings everyone,
I would like to provide some new information to this issue.

Information about the test case

I've used a bzip2 compressed MySQL dump file and tried to import it on two different target systems.

Ansible Control Node

Red Hat Enterprise Linux Server release 7.7 (Maipo)
ansible 2.9.6
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/tronde/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.5 (default, Jun 11 2019, 14:33:56) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]

Target nodes

  1. Debian Buster (current patch level)
  2. CentOS 7.7 (current patch level)

Scenario 1: Successful deployment on Debian Buster

Playbook

---

- name: Test case to debug mysql_db module
  hosts: 10.0.2.6
  tasks:
    - name: Copy database dump file
      copy:
        src: /tmp/test-case.sql.bz2
        dest: /tmp

    - name: Restore database
      mysql_db:
        name: test-case-db
        state: import
        target: /tmp/test-case.sql.bz2

Test file

-rwxr-xr-x. 1 1000 1000 12445232 Mar 17 20:40 /tmp/test-case.sql.bz2

Playbook run

PLAY [Test case to debug mysql_db module] **************************************

TASK [Gathering Facts] *********************************************************
ok: [10.0.2.6]

TASK [Check for nginx, php-fpm and mysql-server] *******************************
changed: [10.0.2.6]

TASK [Copy database dump file] *************************************************
changed: [10.0.2.6]

TASK [Restore database] ********************************************************
changed: [10.0.2.6]

PLAY RECAP *********************************************************************
10.0.2.6                   : ok=4    changed=3    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

Everything is fine.

Scenario 2: Unsuccessful deployment on CentOS 7.7

Playbook and test file are the same as in scenario 1.

Playbook run

PLAY [Test case to debug mysql_db module] **************************************

TASK [Gathering Facts] *********************************************************
ok: [10.0.2.15]

TASK [Copy database dump file] *************************************************
changed: [10.0.2.15]

TASK [Restore database] ********************************************************
fatal: [10.0.2.15]: FAILED! => {"changed": false, "msg": "\nbzip2: I/O or other error, bailing out.  Possible reason follows.\nbzip2: Broken pipe\n\tInput file = /tmp/test-case.sql.bz2, output file = (stdout)\n"}

PLAY RECAP *********************************************************************
10.0.2.15                  : ok=4    changed=3    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0

Running bunzip2 /tmp/test-case.sql.bz2 on the target node completed without error.

In case you need more information, please tell me what you need and how to gather it.

Regards,
Tronde

Hi @Tronde and others
I found a similar (subprocess-related) issue: https://github.com/ranger/ranger/issues/325
People said the error appears when they use python2 and disappears with python3.
I also found https://stackoverflow.com/questions/295459/how-do-i-use-subprocess-popen-to-connect-multiple-processes-by-pipes where it is mentioned that a broken pipe can happen because Linux shells and the subprocess module handle signals differently.

Those of you who can reproduce the problem: could you please create a symlink python -> python3 to force Ansible to use python3, and then check again?
If it works, it means this is not a problem of Ansible but of the subprocess module in python2, and we might note in the module's documentation to force python3 instead.

(symlink on the target machine)
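The linked threads describe exactly the pattern seen here: CPython ignores SIGPIPE at startup, and on Python 2 `subprocess` children inherit that disposition, so a decompressor whose reader exits early gets an EPIPE write error ("Broken pipe") instead of dying silently. Python 3's `subprocess` resets SIGPIPE in the child by default (`restore_signals=True`), which would explain why the bug disappears there. A small sketch of the behavior, with the fix that also works on Python 2:

```python
import signal
import subprocess

def restore_sigpipe():
    # Give the child the default SIGPIPE disposition, as a shell pipeline
    # would. (Python 3's subprocess already does this via restore_signals.)
    signal.signal(signal.SIGPIPE, signal.SIG_DFL)

# `yes` writes forever; `head` reads one line and exits, closing the pipe.
# With SIGPIPE at default, the writer is killed quietly by the signal
# rather than reporting an I/O error the way bzip2/gzip do in this issue.
p1 = subprocess.Popen(["yes"], stdout=subprocess.PIPE,
                      preexec_fn=restore_sigpipe)
p2 = subprocess.Popen(["head", "-n", "1"], stdin=p1.stdout,
                      stdout=subprocess.PIPE)
p1.stdout.close()
out, _ = p2.communicate()
p1.wait()
print(p1.returncode)  # negative return code: terminated by SIGPIPE
```

Under Python 2, dropping `preexec_fn=restore_sigpipe` makes the writer report a broken-pipe I/O error instead, matching the symptom in this issue.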

Those of you who can reproduce the problem: could you please create a symlink python -> python3 to force Ansible to use python3, and then check again?

No need to use a symlink, just defining ansible_python_interpreter should suffice
See https://docs.ansible.com/ansible/latest/reference_appendices/interpreter_discovery.html
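For example, the interpreter can be forced per play; hostnames and paths below are illustrative:

```yaml
- hosts: dbservers
  vars:
    ansible_python_interpreter: /usr/bin/python3
  tasks:
    - name: Restore database
      mysql_db:
        name: my_db
        state: import
        target: /tmp/my_db.sql.bz2
```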

@bmalynovytch cool, thanks for the tip!

Hi, due to #67083 I'm not able to check with python3.

@Tronde maybe clone the VM and install python3 manually?

If that's impossible, can anybody else try to reproduce the bug, forcing python3 as @bmalynovytch described above?

@Andersson007 I won't install python3 manually. But I tried something else.

As we know from my test scenario, the deployment on Debian Buster was successful. Debian 10 ships with both python2 and python3. If @Andersson007 is right and the issue occurs only when using python2, I should be able to reproduce the issue after removing python3 from my Buster machine.

I did so, but the deployment was successful again. So I would argue that extracting a compressed SQL dump file with python2 is possible in general.

But I noticed that Buster comes with Python 2.7.16 while CentOS ships 2.7.5.

I can't reproduce with the following:

dump file 14MB bz2 (several GBs uncompressed)

Works:

CentOS Linux release 6.10
Kernel 2.6.32-754.el6.x86_64
bzip2-1.0.5-7.el6_0.x86_64
Python 2.7.13rc1 (works with default python 2.6.6-66 too)
PyMySQL-0.9.3
MySQL-python 1.2.5

Works:

CentOS Linux release 7.7.1908
Kernel 3.10.0-229.el7.x86_64
bzip2 1.0.6-13.el7
python 2.7.5-86.el7 (updated because it was impossible to install pip without it)
PyMySQL-0.9.3

Works:

(default installation)
CentOS Linux release 7.1.1503
python 2.7.5-16 (default in CentOS 7.1)

But I know the solution. Coming soon.

https://github.com/ansible-collections/community.general/pull/151
Could anybody please try that instead of your current version and confirm the bug disappears?

Hi, just to note it here, too. I can confirm that the bug disappeared using the code from ansible-collections/community.general#151.

@Tronde, thanks! We're waiting for feedback from others until next week. If nobody is reasonably against the changes, we'll merge them.

close_me

closed via https://github.com/ansible-collections/community.general/pull/151
@Tronde thanks much for reporting and helping!

Closing per above.
