Tinder’s newest safety feature attempts to reduce the number of hateful messages sent on its platform. The company plans to start rolling out ‘Are You Sure?’, a feature that uses artificial intelligence to automatically detect offensive language. When it spots such language, the feature asks users whether they’re positive they want to send the message, forcing them to pause before doing so.
The company has been testing the feature and says people who saw the prompt were less likely to be reported for inappropriate messages over the following month, which Tinder takes to mean they’re adjusting their behaviour over the long term.
Other companies have employed similar technology, most notably Instagram, which rolled out warnings for potentially offensive captions in 2019. Instagram also automatically hides comments its AI determines are offensive and recently expanded the system to block words that are purposely misspelt to slip past users’ comment filters. Although Tinder isn’t outright blocking messages, at least not yet, pushing people to reconsider their messages might be enough to make the app at least somewhat safer and more welcoming.