Mastodon: Admin UI for account timelines needs explicit opt-in for viewing non-public statuses

Created on 2 May 2018 · 34 comments · Source: tootsuite/mastodon

One PR removed direct/private toots from admin UI. I agree that it was too easy to accidentally view them. However, admins need a way to explicitly opt into looking at them, because such a blind spot could be abused.

admin support

Most helpful comment

I am sorry, but I still can't see a strong case for this feature.

Somebody could be filling your database up with posts/media as DMs not addressed to anyone (or addressed to own alt accounts). Nobody would ever report those, nobody would see those except the author, but you'd be wondering why your DB is growing so fast and you'd go looking and have to go through Postgres with risk of messing stuff up accidentally. Previously we had a way to check this from within the Mastodon admin interface and I don't think it's good to take that function away completely.

All 34 comments

I agree.

In that case I think it should be logged into the audit log at least.

@hrefhref good point!

I think the user should have the ability to see which private toots have been viewed by an admin, at least through a notification of some sort. If you're going to give admins the ability to see them, I would think the user should at minimum be informed. Some may or may not agree, but that's my two cents anyway. ;)

Cheers

The user should be expressly notified if such a thing happens.
Also, as far as I know the audit log is only accessible to admins, but I feel like certain moderation decisions should be available in a public log, so any user on the instance can watch the watchers, as that reduces the chances of power abuse.

If you let the user know when their DMs are accessed, you're giving abusers a tool to know when they're being watched, potentially who snitched on them, etc.

Certainly it should create an entry in the audit log, but I don't believe alerting the user is a good idea at all.

I agree with @KScl on this, on both points.

Agree with @KScl: the users should rarely know this is happening. If they are innocent, alerting them will cause unneeded anxiety, and if they are guilty of something, they will work harder to hide it. As for power abuse, instance admins typically have server access too, and nothing stops us from reading the messages straight from the database without even leaving an audit log.

If you can't trust your admins, find a new instance. If you can't trust any admins, make your own instance.

I am sorry, but I still can't see a strong case for this feature. Admins/moderators can already see all reported toots, regardless of their privacy. If someone reports an abuser, they are able to report the offending toots, regardless of privacy, and whether they are on the same instance or not. What blind spot is there to cover?

As for admins having access to the database anyway, while it is usually the case, not all Mastodon administrators and moderators are system administrators of their instance.

As you can guess, I'm pretty strongly opposed to this “feature”. If you decide to go forward with it anyway, this should be on a toot-per-toot basis, or at the very least the toot privacy should be clearly displayed (e.g., #6972), and such action should be logged in the audit log.

If someone reports an abuser, they are able to report the offending toots, regardless of privacy, and whether they are on the same instance or not. What blind spot is there to cover?

I strongly agree with this. There is absolutely no reason for an admin to go look for unreported DMs.

@Gargron

admins need a way to explicitly opt into looking at [DMs that haven't been reported], because such a blind spot could be abused.

I am very on the fence about this because I agree with ThibG and Aldarone, but if there is a way that "only visible when reported" DMs can be abused I would like to know how, and if there are situations where admins need to be able to look at DMs that haven't been reported I would like to know about those too.

(This stuff might be obvious to admins and mods, but I'm neither so I guess I need some context for this feature!)

RFC 7258 - Pervasive Monitoring Is an Attack

My major concern is legal. I still suspect that a lack of monitoring capability is illegal in a few countries.

If someone reports an abuser, they are able to report the offending toots, regardless of privacy, and whether they are on the same instance or not. What blind spot is there to cover?

Victims of abuse who have left the server, fear drawing attention to themselves, or have any other possible reason not to submit reports. Also, examining the behavior of a known offender to see whether it follows a pattern or is a one-off.

That makes sense. Sometimes I am not sure if someone is breaking the rules, so I don't report them until I know for sure they've crossed a line, and if I put myself in the admin's shoes I would want to look at their history to see if they've been consistently out of line.

Historically, consistent abuse has been a problem, especially with email (spam). I wonder how common manual auditing is, and whether it has been accepted.

Again, I'm going to agree with @KScl, while I do see the concerns raised, I feel like this is a relevant use case.

I'm still happy to discuss it further.

IMO, the point is discerning administration from voyeurism. While the CLI allows system administrators to access the instance's database, the GUI provides too easy a way to access unreported private messages.

Also the admin team and the moderator team can be different…

I'm with ThibG and Alda on this one, I don't think we should have direct access unless reported.

That being said, if this still moves forward, I would say that the access needs to be logged in the audit log. (Though for smaller instances with just one admin this won't have much effect, since they know they accessed the messages anyway.)

I am sorry, but I still can't see a strong case for this feature.

Somebody could be filling your database up with posts/media as DMs not addressed to anyone (or addressed to own alt accounts). Nobody would ever report those, nobody would see those except the author, but you'd be wondering why your DB is growing so fast and you'd go looking and have to go through Postgres with risk of messing stuff up accidentally. Previously we had a way to check this from within the Mastodon admin interface and I don't think it's good to take that function away completely.
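The "go through Postgres" scenario described above can be illustrated with a toy query: find direct-visibility statuses that mention nobody, i.e. posts only their author can ever see. This is a hedged sketch against a simplified, invented schema (table and column names are illustrative, only loosely modeled on Mastodon's `statuses`/`mentions` tables; the real schema differs):

```python
import sqlite3

# Toy schema loosely modeled on Mastodon's statuses/mentions tables (column
# names simplified; the real schema differs). Visibility is assumed here to
# be an enum where 3 means "direct".
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE statuses (id INTEGER PRIMARY KEY, account_id INTEGER, visibility INTEGER);
CREATE TABLE mentions (status_id INTEGER, account_id INTEGER);
""")
conn.executemany("INSERT INTO statuses VALUES (?, ?, ?)", [
    (1, 10, 0),  # public post
    (2, 10, 3),  # DM that actually mentions someone
    (3, 10, 3),  # DM addressed to nobody -- the suspicious case
    (4, 10, 3),  # another orphan DM
])
conn.execute("INSERT INTO mentions VALUES (2, 11)")

# Direct statuses with no mentions: nobody but the author can ever see these.
orphans = [row[0] for row in conn.execute("""
    SELECT s.id FROM statuses s
    LEFT JOIN mentions m ON m.status_id = s.id
    WHERE s.visibility = 3 AND m.status_id IS NULL
    ORDER BY s.id
""")]
print(orphans)  # -> [3, 4]
```

The point of the scenario is that without an in-app view, an admin would be running ad-hoc queries like this against the live database, with all the attendant risk of mistakes.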

I think that's an unlikely scenario, but if it is a concern, I'd say that a better bet would be to implement a per-user media quota system, to deal with that specific concern.

Note that an instance admin wouldn't have to impose quotas with such a system if they didn't want to, but having the tools in place means that if there is a problem, an admin would be able to see who's using disproportionate amounts of storage, and work with (or ban if appropriate) those users to reduce their storage demand.

(If it's legitimate storage demand, that also suggests that other tools may be useful - for instance, providing tools to recompress a user's media after the fact, so they don't have to lose media or reupload it themselves.)
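The quota idea above could be sketched roughly as follows; `usage_report` and the 100 MiB soft quota are hypothetical names and numbers, purely to illustrate surfacing heavy accounts so an admin can follow up without reading anyone's messages:

```python
from collections import defaultdict

# Hypothetical per-user media usage report (names and the 100 MiB soft
# quota are invented for illustration, not a Mastodon feature).
SOFT_QUOTA_BYTES = 100 * 1024 * 1024

def usage_report(attachments, quota=SOFT_QUOTA_BYTES):
    """attachments: iterable of (account, file_size_bytes) pairs."""
    totals = defaultdict(int)
    for account, size in attachments:
        totals[account] += size
    # Heaviest accounts first, with an over-quota flag on each row.
    return sorted(((acct, total, total > quota) for acct, total in totals.items()),
                  key=lambda row: -row[1])

report = usage_report([
    ("alice", 150 * 1024 * 1024),  # heavy, but possibly a legitimate artist
    ("bob", 5 * 1024 * 1024),
])
```

Note the flag only prompts investigation; as discussed below, a raw number can't distinguish legitimate heavy use from abuse.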

IMO, the point is discerning administration from voyeurism. While the CLI allows system administrators to access the instance's database, the GUI provides too easy a way to access unreported private messages.

This is why I agreed to hide DMs/private toots from admins by default, hiding them behind an explicit opt-in (and as others pointed out, an audit log entry is a good addition to that too).

It's a simple solution and a net improvement over 2.3.3. @Aldarone submitted a PR that implements the first half, hiding, which I merged a few weeks ago, and now we just need the second half, the opt-in view. This is all within a single development cycle, since no new releases have been made so far.

System administrators can always look at what is stored on their server. If your threat model is defence against the administrator, a) switch servers b) self-host c) only use DMs to share your username for an end-to-end encrypted messaging app like Wire or Signal. I can absolutely agree that admins shouldn't accidentally stumble into private messages, which is why I merged @Aldarone's PR, but this is no place for a moral panic about their fundamental ability to do so.
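The hide-by-default plus explicit opt-in behaviour being proposed can be sketched roughly like this (hypothetical names, not Mastodon's actual code); the key property is that an audit entry is written whenever, and only when, the opt-in is exercised:

```python
# Hypothetical sketch of the proposed behaviour (names invented, not
# Mastodon's implementation): non-public statuses are hidden by default,
# and viewing them requires an explicit opt-in that writes an audit entry.
audit_log = []

def account_statuses(statuses, admin, show_private=False):
    """Return the statuses an admin sees on an account's admin timeline."""
    if not show_private:
        return [s for s in statuses if s["visibility"] == "public"]
    # Explicit opt-in: record who looked before showing anything non-public.
    audit_log.append({"admin": admin, "action": "viewed_private_statuses"})
    return list(statuses)

statuses = [{"id": 1, "visibility": "public"},
            {"id": 2, "visibility": "direct"}]

visible_default = account_statuses(statuses, "admin@example")   # DM hidden
visible_opted_in = account_statuses(statuses, "admin@example",
                                    show_private=True)          # DM shown, logged
```

Accidental stumbling is prevented by the default, while deliberate access leaves a trail for the rest of the moderation team to review.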

Gargron wrote:

Somebody could be filling your database up with posts/media as DMs not addressed to anyone (or addressed to own alt accounts).

bhtooefr wrote:

I think that's an unlikely scenario, but if it is a concern, I'd say that a better bet would be to implement a per-user media quota system, to deal with that specific concern.

I heard of a similar scenario a little while ago. That is a real concern.

However, I believe administrators should _not_ see private toots. Administrators can still respond to such a situation by 1) asking the user what they posted, 2) asking them to delete such posts, or 3) suspending the account if they do not respond to those requests (which is not that different from the user admitting malicious intent) or if the matter is urgent. Those are viable options.

The feature would help only if it showed how much of the quota the user is consuming. That kind of feature can still be implemented in a different way.

Sorry, I just believe that a user should have the right to know whether or not their unencrypted private communications are being monitored, that's all. This is similar to how the parties on a private telephone call may hear certain DTMF tones when the line is being tapped.

I really don't care about the outcome, but I believe in freedom of speech. If I'm able to speak freely, I should also know the entirety of the audience I may be communicating with.

That's just me. But hey, I don't contribute to the software =) I think the end user should share a role with the admin in this. The problem is that nobody's thinking about their personal lives in this situation. It would break my heart if a situation arose between an admin and myself because I sent my physical address to a friend in a private message. I should know, before my house burns down, that something private was shared with an admin.

Your instance may be safe, however... every community instance that makes up the network is different, and there may be some with people and/or admins who may wish to participate in burning down bridges.

Also, all admins aren't necessarily going to have access to the back-end, so that's a moot point. Audit logs can also be cleaned easily once eavesdropping occurs, if a 'rogue' database admin knows how to truncate log tables.

Webpush notifications are great!
Anyway,

Have a great day

I actually ran into this with technowix the other day, because they couldn't look at the DMs of an abusive user in order to decide what to do. I think it should definitely be added back so that this sort of thing doesn't happen in the future.

Also, all admins aren't necessarily going to have access to the back-end, so that's a moot point. Audit logs can also be cleaned easily once eavesdropping occurs, if a 'rogue' database admin knows how to truncate log tables.

You're wishing for the software to solve an organizational problem. Again it comes down to having an admin you can trust. The software is not built around the assumption that the admin is the evil person. The only software built around that are end-to-end encrypted messaging apps. With GMail, Fastmail, Twitter, Facebook, Instagram & whatever else you are also trusting that the employees who work for those companies stick to the rules imposed by the company not to look at customer data without a good reason or process. Mastodon admins can also have a good reason and process for doing it, which is why I'm suggesting an explicit opt-in.

The feature would help only if it showed how much of the quota the user is consuming. That kind of feature can still be implemented in a different way.

I don't know if you're advocating for some kind of quota feature as suggested above, but that is very far from a simple solution. First of all, we can already see at a glance how much disk space a user's media takes up. But that does not tell you what context the media is posted in, or what kind of media it is: whether it's legit or abuse.

Secondly, imposing a quota per user is practically impossible, because you never know what the value should be; it would be different for every instance, but also for every user. An artist will naturally consume more disk space, but that's totally legitimate. You would need every instance owner to figure out, a priori, "what is a good amount of disk space to dedicate to each user?", and that is a hard question to ask someone. I am the developer of this thing and I don't even know.

Now let's say you want to take all available disk space, divide it evenly between all users, and alert if a user goes over that. Your first problem is that with any kind of object storage (AWS S3, Wasabi...) you do not have a "maximum disk space", and you're back to asking the admin. Your second problem is that you're going to have a lot of inactive users skewing the data. And in the end, when you do get alerted, instantly banning or pausing that account makes no sense. You would still have to go and check what all that stored data is.

There is something to be said for "not all mod staff are admins".

Perhaps, in addition to adding this feature back in, we also need more granular permissions: i.e., admin, moderator, user, etc. Moderators would not be able to use this feature, while admins (who would also typically have server access) would be able to.

And most certainly logged in the audit log, when used, even by an admin.

I'm also concerned about legality in the European Union. If I'm not mistaken, regulations disallow reading private communications (and again if not mistaken, it should have been consolidated by the GDPR).

Also if such a feature lands, a global server flag to disable it completely would be really cool.

If I'm not mistaken, regulations disallow reading private communications (and again if not mistaken, it should have been consolidated by the GDPR).

Let someone else comment on whether such regulations really exist (this is the first time I've heard of them), but I'll just note that that idea is incompatible with the idea that admins are liable for the content (pirated or illegal media) they host on their servers, even if that content is in a private message. It can only be one or the other.

We'll be putting the implementation of this feature on hold until we find out more about the GDPR, even though Gargron has pointed out that one cannot both be responsible for users' content and be disallowed from viewing it.

In the meantime, I want to say the following. While the reactions to this feature have definitely made it a hot topic, I feel like the compromise is that if we do go down this road, we should absolutely make sure the audit log is made part of it, to which I agree.

On that note, from what I've read about the GDPR it sounds like the audit log would also allow us to do such a thing, but we still have to look at the details of what to adjust. So that's one of the reasons we're putting it on ice.

Overall, reading through all the comments and replies again, it does appear that the silent majority agrees that there is use for this feature (and I mean silent as in they don't chime in with their own post, but rather just use the reactions to say they agree, which is a perfectly valid way of interacting with an issue like this).

So, I'm not a jurist, and nobody here is afaik, but I tried to do a bit more searching in the laws (France, as it's my home country, and Europe, as it's my wider country; I haven't had the courage to look at the GDPR, so I'll stay on the "old" regulations) and found these two, which seem to be of interest for this case:

French law has a _secret de la correspondance_ (secrecy of correspondence). Originally intended for snail mail, the law has been extended to include all other forms of communication. A private communication is described as a message between one or more explicit recipients. The law basically forbids any third party from intercepting or reading a private communication without the sender's consent. The postman can't open your letters, and it's the same in the digital world; reading the private communications of your SO (SMS, ...) is illegal too. If you read French, the article by the CNIL ("National Commission on Informatics and Liberty") on this is very interesting. It also states that social networks are covered.

Similar laws exist in the EU, first in the European Convention on Human Rights (and the Universal Declaration of Human Rights, but I haven't searched outside of the EU/FR), and then in Directive 97/66, refreshed by Directive 2002/58/CE, which goes in this direction too (Article 5.1):

In particular, they shall prohibit listening, tapping, storage or other kinds of interception or surveillance of communications and the related traffic data by persons other than users, without the consent of the users concerned

_("they" here means the EU member states, as this is a directive)_

On what Eugen said, that one cannot both be responsible for users' content and not be allowed to view it, it's a bit more subtle than that: you're responsible _once_ you've become aware of a potential problem (by means of _targeted_ reports). Even in that case, I doubt it allows you to see _all_ of a user's private communications if they haven't been reported.

On the fact that sysadmins can fire up the database CLI and search from there: there's an interesting phrase in EU Directive 2002/58/CE, at the end of Article 5.1: "This paragraph shall not prevent technical storage which is necessary for the conveyance of a communication without prejudice to the principle of confidentiality." So yes, you can store it, and yes, you have the ability to query the db yourself. But you shall not access it.

I haven't read all these laws completely; it's a complicated string of paragraphs to cross-reference against others, and totally open to different interpretations. I'm not sure of mine either, but the _current_ feature sounds borderline illegal (or very, very complicated legally) to me. There's always a fine line between what's good and what's legal, and we need to find it before implementing such features.

I encourage all of you to do your own research and try to understand the implications of this better. I may be wrong, so please, prove it to me!

_(edited for typos.)_

Both of the laws you stated would also seem to prohibit reporting DMs (and thus sending them to the admin) as well. The sender specifically consenting to a report is extremely unlikely in the first case, and the second case reads to me as _all_ users, not just any one specific one.

Just a thought. Perhaps the terms of service need to be changed anyway.

People who have received death threats and hate mail through the post are already allowed to show those to the police though!

Both of the laws you stated would also seem to prohibit reporting DMs (and thus sending them to the admin) as well. The sender specifically consenting to a report is extremely unlikely in the first case, and the second case reads to me as all users, not just any one specific one.

Actually, yes, it is. While reporting DMs is itself a bit borderline illegal (and this time I asked someone more competent than me in this domain, who confirmed what I said previously), it's different from showing all of them, which may also include communications with _other_ users, not related at all to the problem.

The legal way is to go to the police (I'm not saying it's a good thing; I'm saying it's the legal way). A judge will ask the operator/host to retrieve the message, and then it becomes legal under Article 15 of the directive (see below).

Note also that this is a _directive_; it all depends on how each member country integrates it into its law. Afaik there's nothing in French law allowing reporting instead of going to the police.

So don't put more illegal things on top of something already apparently illegal…

Perhaps the terms of service need to be changed anyway

Probably, though apparently the CNIL (the French privacy watchdog) affirms that if you need consent, it needs to be explicit (a check box, for example) and renewed every year, and I'm not even sure you _can_ get a global waiver/consent for that (you can for advertising, but ad profiling is automated... also, that's in the French implementation of the EU directive). So I'm not sure the terms can do anything.

People who have received death threats and hate mail through the post are already allowed to show those to the police though!

Don't worry, the EU thought of that. Article 15.1:

Member States may adopt legislative measures to restrict the scope of the rights and obligations provided for in Article 5, (…) of this Directive when such restriction constitutes a necessary, appropriate and proportionate measure within a democratic society to safeguard national security (i.e. State security), defence, public security, and the prevention, investigation, detection and prosecution of criminal offences or of unauthorised use of the electronic communication system, (…)

That actually sounds like the best bet may be an option to disable DMs entirely on an instance; it would mean that the only DMs that could be dealt with are ones that violate laws, not just the instance's ToS. (Or does Europe have something that makes violating a ToS a crime?)
