Twitter’s looking to provide more options and transparency around its rule violations and moderation processes, with a range of new tools under consideration that could give users more ways to understand, and act on, each instance.
The first idea is a new Safety Center, which Twitter says would be ‘a one-stop shop for safety tools’.
As you can see here, the Safety Center concept, which would be accessible via the Twitter menu, would give users a full overview of any reports, blocks, mutes and strikes that they currently have in place on their account. The Safety Center would also give Twitter a means to provide updates on any outstanding reports (via the ‘Report Center’ tab).
The platform would also alert users when they’re close to being suspended due to policy violations, which may prompt them to rethink their behavior, and it would include a link to Twitter’s policy guidelines.
The impetus here is to make more users aware of Twitter’s rules, and to keep them updated on their account activity. On one hand, that could raise awareness, but it may also give people more leeway to push the boundaries, with a constant checking tool to see how close they are to suspension, and when they’d need to dial it back.
Twitter’s second concept is a new Policy Hub, which would provide a full overview of its rules and policies.
By making these documents more readily accessible, Twitter could help set clearer parameters around where it draws the line – though the effectiveness of this, of course, would depend on users actually checking it.
A more direct concept, which could be more effective, is ‘Safety School’, which would give users a chance to avoid suspension for platform violations by taking a short course or quiz on the rule(s) that they broke.
The concepts also extend to Birdwatch, Twitter’s crowd-sourced fact-checking program, with an update that Twitter says would ensure a ‘more diverse range of feedback’ on Birdwatch alerts, increasing the accuracy of such reports.
It’s hard to tell whether the Birdwatch proposal will work, but it’s an interesting concept, using Twitter’s user base to better detect low-quality or false content, in order to reduce its overall impact and reach.
In some ways, that’s more like Reddit, which relies on its user community to up and downvote content, which generally weeds out things like false reports. Interestingly, Twitter’s also considering up and downvote options for tweets, so it seems that the platform is indeed looking to Reddit as a potential inspiration for its efforts on this front.
That approach, again, does make sense, but it’s hard to tell whether Twitter’s user community is as invested in content quality on the platform as Redditors are within their subreddits, where they likely feel a greater sense of ownership and community.
Maybe, through additions like this, Twitter can foster that same sense of investment, which would make tools like Birdwatch a more valuable addition.