Materialize: Security question: using HTML by default for Toasts/Tooltips/Autocomplete exposes an XSS

Created on 8 Feb 2019 · 30 comments · Source: Dogfalo/materialize

Expected Behavior

Don't execute HTML/JavaScript by default for Toasts, Tooltips, and Autocomplete.

Current Behavior

By default, when you add dynamic content with Toasts, Tooltips, or Autocomplete, the input data is injected as HTML. I think most people who use MaterializeCSS implement these components the way your examples show.

It's bad practice. By default the input data should be sanitized, or the library should use jQuery's .text('untrustedData') instead of .html('untrustedData') or innerHTML = 'untrustedData'.
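
For illustration, a minimal sketch of the difference (the #demo selector is just a placeholder):

var untrustedData = '<img src=x onerror="alert(document.cookie)">'

// Unsafe: the string is parsed as HTML, so the onerror handler runs.
$('#demo').html(untrustedData)

// Safe: the same string is inserted as plain text and displayed literally.
$('#demo').text(untrustedData)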

Possible Solution

If you want to allow HTML, fine, but sanitize the HTML and don't allow JavaScript. If the end user wants to allow both HTML and JavaScript, add a new configuration parameter such as:
options: { allowUnsafeData: true }
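
To be clear, allowUnsafeData does not exist in Materialize; this is only a sketch of how the proposed option could behave:

var untrustedUserId = '<IFRAME SRC="javascript:alert(document.cookie);"></IFRAME>'

// Proposed default: the content is escaped or sanitized before rendering.
M.toast({html: 'userId: ' + untrustedUserId})

// Proposed opt-in: the caller explicitly accepts raw, unsanitized HTML.
M.toast({html: '<b>Saved!</b>', allowUnsafeData: true})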

Steps to Reproduce (for bugs)

Toast

var userId = '<IFRAME SRC="javascript:alert(document.cookie);"></IFRAME>'
M.toast({html: `userId: ${userId} fail`})

Tooltips

<a class="btn tooltipped" data-position="bottom" data-tooltip="<IFRAME SRC='javascript:alert(document.cookie);'></IFRAME></script>">Hover me!</a>

Autocomplete

$('input.autocomplete').autocomplete({
    data: {
        "Apple": "\"><IFRAME SRC=\"javascript:alert(document.cookie);\"></IFRAME>",
        "Microsoft": null,
        "Google": 'https://placehold.it/250x250',
    }
});

Context

I agree that the developer needs to control his data before injecting it into a third-party library, but sometimes we forget to do it.

How I found this case:

  1. (Client) Send JavaScript in a field
  2. (Server) The server fails and returns an error message containing the JavaScript
  3. (Client) Reuse the message from the server in a Toast
  4. I have a reflected XSS

It's my fault: I didn't validate the data and I returned the script without sanitizing it. But if your library didn't allow HTML by default, I would never have hit this behavior.
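
A rough sketch of this flow (the /api/save endpoint and the comment field are made up for illustration):

var userInput = document.querySelector('#comment').value   // attacker-controlled field

// Steps 1-2: send the data; the server rejects it and echoes it back in its error message.
fetch('/api/save', {method: 'POST', body: JSON.stringify({comment: userInput})})
    .then(function (res) { return res.json() })
    .then(function (body) {
        // Steps 3-4: the raw server message is rendered as HTML by the toast -> reflected XSS.
        M.toast({html: 'Error: ' + body.message})
    })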

Your Environment

  • Version used: 0.100.2 and 1.0.0

Most helpful comment

This is still reported as a moderate vulnerability on npm at https://www.npmjs.com/advisories/818, which means GitHub and npm will keep flagging the security issue.

@Dogfalo any idea when we can expect the fix for this ?

All 30 comments

If it needs to support custom content, it could take an HTMLElement as an argument. That's better than taking a string as an argument and then rendering it as HTML.
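
For example, the caller could build the node themselves, so untrusted data goes in via textContent and is never parsed as HTML (the content option at the end is hypothetical, not an existing Materialize API):

var untrustedUserName = '<img src=x onerror="alert(1)">'

var node = document.createElement('span')
node.textContent = untrustedUserName   // displayed literally, the script never runs

// A component accepting an element could then be called like:
// M.toast({content: node})   // hypothetical option, not part of Materialize today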

@FinDarkside, I agree with you, but in 80% of use cases you will use text. If you present your examples with HTML content by default, every developer will use it that way. Sanitizing input parameters is another subject.

If we had the choice between HTML or text, that would be fine, but with a toast/tooltip we only have the HTML option. So it includes a vulnerability by default.

I see no XSS here.

If you have a found a real XSS with a PoC, please directly contact @Dogfalo in private (responsible disclosure).

@DanielRuf to be honest, what is an XSS for you?

Reflected XSS: The application or API includes unvalidated and unescaped user input as part of HTML output. A successful attack can allow the attacker to execute arbitrary HTML and JavaScript in the victim’s browser.
https://www.owasp.org/index.php/Top_10-2017_A7-Cross-Site_Scripting_(XSS)

In my first post you have a PoC for each component. I can execute JavaScript, so it's an XSS.

MaterializeCSS uses HTML by default, and for me that's a lack of security. If you want to use HTML content, you must sanitize the data.

  1. Toasts

The problem for me with toasts is the documentation: all the examples use

M.toast({html: 'message'})

Why use html in every example, and only afterwards show a separate example for custom HTML? I'm a developer: I copy-paste this simple example with the html parameter and I have added a vulnerability to my project. For a static value it isn't a problem; for a dynamic value...
You can check bad usages:
https://github.com/search?q=%22M.toast%28%7Bhtml%3A%22&type=Code

If you want to allow HTML, sanitize the input data; if you want to use trusted content, you could use a syntax like:

M.toast({html: 'message', sanitize: false})

https://www.npmjs.com/package/sanitize-html
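
A minimal sketch with that package (the whitelist shown is only an example):

var sanitizeHtml = require('sanitize-html')

var dirty = 'userId: <IFRAME SRC="javascript:alert(document.cookie);"></IFRAME>'
var clean = sanitizeHtml(dirty, {
    allowedTags: ['b', 'i', 'em', 'strong', 'a'],
    allowedAttributes: {a: ['href']}
})

M.toast({html: clean})   // the iframe is stripped, only whitelisted markup remains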

Bootstrap does this:
https://getbootstrap.com/docs/3.4/javascript/#js-sanitizer

  2. Tooltips

It's explained in the documentation

HTML String : Can take regular text or HTML strings.
https://materializecss.com/tooltips.html

How can I disable HTML content? As far as I can see, it's HTML only.
https://github.com/Dogfalo/materialize/blob/v1-dev/js/tooltip.js#L71
And it uses innerHTML ...
https://github.com/OWASP/CheatSheetSeries/blob/master/cheatsheets/DOM_based_XSS_Prevention_Cheat_Sheet.md#example-dangerous-html-methods
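
A small sketch of the difference the cheat sheet describes (tooltipContentEl stands in for the tooltip's content element):

var anchor = document.querySelector('.tooltipped')
var tooltipText = anchor.getAttribute('data-tooltip')   // attacker-influenced value
var tooltipContentEl = document.createElement('div')

// Dangerous: the value is parsed as HTML, so injected markup executes.
tooltipContentEl.innerHTML = tooltipText

// Safe: the same value is rendered literally as text.
tooltipContentEl.textContent = tooltipText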

  3. Autocomplete

Same story: if you allow unsafe HTML, you can inject JavaScript...
The problem isn't here
https://github.com/Dogfalo/materialize/blob/v1-dev/js/autocomplete.js#L292
but here
https://github.com/Dogfalo/materialize/blob/v1-dev/js/autocomplete.js#L384

jQuery + append()/html() = XSS

$('input.autocomplete').autocomplete({
    data: {
        "Apple": `x" onerror="javascript:alert(document.cookie)`,
        "Microsoft": null,
        "Google": 'https://placehold.it/250x250',
    }
});
  • And search for Apple :)

If you check the XSS issues for Bootstrap, they had the same problem, and Bootstrap doesn't allow unsafe HTML:
https://github.com/twbs/bootstrap/issues?utf8=%E2%9C%93&q=xss
https://github.com/twbs/bootstrap/pull/28236
https://github.com/twbs/bootstrap/issues/28290
https://github.com/twbs/bootstrap/issues/26625

Sorry, but this is not how XSS works. Show me a CodePen with a default setup of our components that parses user input from a default field or the URL.

So far devs have to ensure what the data is allowed to contain and sanitize it.

User input can not create a malicious data object like you have shown.

Using the console does not show that there is a real XSS vuln, as this is not user-controlled input (form fields, address bar, ...).

Reflected XSS is, for example, this:

Enter HTML and JS into a search form.
Submit the form.
The search result page outputs it unsanitized.

This is exactly how XSS works. If you, for example, do autocompletion on a user search, you can do XSS by naming your user appropriately. But as you said, it might not be the best idea to talk about this here; I've made a PoC and reported it to npm. They'll contact the package owner if they deem it to be XSS.

You don't have to form any objects; the malicious input in the example by @lucianot54 is a string.

No, there is no real direct input. entry.data comes from the data object / response, so you have to manipulate that part.

Still, no real XSS when a dev allows user input to be unsanitized in the output.

What is a "real" XSS? You are confusing a vulnerability with exploitability.
Materialize doesn't control its input parameters, so there is an injection. It's the first line of the OWASP Top 10 A1-Injection:

User-supplied data is not validated, filtered, or sanitized by the application.
https://www.owasp.org/index.php/Top_10-2017_A1-Injection

Also, do you want a use case that exploits this vulnerability? (A sketch follows below.)

  1. I store my autocompletion data in a database, with a name and an image.
  2. I'm a user who can update this list.
  3. I store the name (XSS) or my image (XSS). URL or JavaScript, it's just a string.
  4. Everyone who uses this autocomplete will be affected: it's a stored XSS.
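
A sketch of that scenario (the /api/usernames endpoint is made up; names and image URLs come from a database that other users can write to):

fetch('/api/usernames')
    .then(function (res) { return res.json() })
    .then(function (users) {
        var data = {}
        users.forEach(function (user) {
            // user.name or user.imageUrl may contain stored markup such as
            // 'x" onerror="alert(document.cookie)' -- it is rendered for every visitor.
            data[user.name] = user.imageUrl
        })
        $('input.autocomplete').autocomplete({data: data})
    })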

Still, no real XSS when a dev allows user input to be unsanitized in the output.

Of course it is, because it's completely clear that the searchable input should be rendered as text instead of HTML. And MaterializeCSS is what handles the output, so by your own definition this is a vulnerability. There's nothing wrong with the input; there's a lot wrong with how autocomplete handles it. I'm not going to argue about this anymore, as it doesn't really matter; I've already made the report with a PoC. As for sanitizing the input, it's actually not possible to do without breaking the autocompletion.

To be clear, it is intended that we support HTML there. We do not explicitly filter or sanitize it.

If you use any user input there, you should make sure to sanitize user input in general. As this is not the normal use case, the sanitizing part is out of the project's scope imo.

Sure, you can trigger this manually with malicious snippets like you demonstrated. But when such code can be executed through UI interactions, you have a much bigger issue (no X-XSS-Protection header, no WAF and sanitization rules, and so on).

To be clear, it is intended that we support HTML there.

It's actually very clear that it's not supposed to support HTML, as it indeed does not (XSS is possible, though): HTML tags will break the autocompletion. It also makes no sense, as the search is performed on your whole input, which makes no sense if it's meant to be HTML. Neither a WAF nor the X-XSS header has anything to do with this, and neither can prevent it. And as I already said, the input can't be sanitized in a way that keeps the autocompletion working properly.

The right side of the input is supposed to be a URL. Are you really claiming that it's intended to be rendered as HTML, inside an img source attribute? The only use case for rendering a string as HTML inside an img tag's source attribute is an XSS attack. There's no reason for developers to expect that this library handles input that's supposed to be a URL as HTML. Would you say that, if Materialize ran the input through eval, it would also be the user's fault for not sanitizing the input?

As this is not the normal use case

I doubt this is the "official" opinion. And if it were, the readme would need a big warning about it.

It's actually very clear that it's not supposed to support HTML, as it indeed does not (XSS is possible though), HTML tags will break the auto

userId = '