The 2016 US election wasn’t memorable only for its historic significance. For those outside digital marketing, the election was an eye-opener: one of the first times fake social media accounts were deployed at such a massive scale and, more importantly, the first time significant evidence emerged of just how effective social media bots and fake accounts can be.
The discovery of troll farms and bot networks operating behind the scenes during the election and beyond also revealed more about fake social media accounts. One of the most prominent conclusions was that fake social media accounts are very difficult to detect. Is it really impossible to spot bots and fake social media accounts today?
To fully understand the operational scale of fake-account farms, we have to take a closer look at the Russian troll farm that played a big role in the 2016 election. The farm was designed to operate with a degree of sophistication that makes its fake accounts harder to detect.
Bots used to have default avatars and rarely posted anything on their accounts. That is certainly not the case today: bots now post regularly, the way organic, real users do. In fact, bots now scrape avatars from legitimate users and use them to mask their identities. With an avatar and a posting history, a fake account immediately looks more legitimate.
At the same time, troll farms program their bots and fake accounts to interact with one another. They create circles that slowly attract real users and organic audience groups. These interactions between fake accounts are there not only to fool real users but also to keep the bots from being flagged by automated detection systems.
Social media sites struggled in this department. Since fake accounts look more real than ever, their automated systems could no longer reliably differentiate between real users and fake accounts. Real users began being blocked by mistake more frequently, which pushed the sites to relax their filters.
With the automated scanning and filters relaxed, it didn’t take much for fake accounts to build an extensive network. As mentioned before, the goal is not to affect real users directly, but to spread conversations and content that appear genuine to them. The setup is perfect for disinformation campaigns.
Human and Machine Inputs
We cannot talk about troll farms and fake accounts without talking about how they are set up and managed. Most farms use automated tools and bots to operate their social media accounts, but the majority still require a lot of human input. Some of the largest account farms in the world employ social media managers, just like large corporations.
Social media specialists and managers spend more than eight hours a day crafting content and making sure the fake accounts appear real. Automated solutions usually make this easier: a single piece of organic content is spun into multiple posts published from different accounts, which looks far more organic than 100 accounts sharing identical content.
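To make the idea of content spinning concrete, here is a minimal sketch using simple synonym substitution. Everything below is invented for illustration: the `SYNONYMS` table, the `spin` function, and the sample post are hypothetical, and real farms use far larger dictionaries or language models.

```python
import random

# Toy synonym table -- purely illustrative; real spinners use much
# larger dictionaries or generative models.
SYNONYMS = {
    "great": ["fantastic", "excellent", "amazing"],
    "news": ["report", "story", "update"],
    "today": ["this morning", "right now", "just now"],
}

def spin(text: str, rng: random.Random) -> str:
    """Produce one variant of `text` by swapping known words for synonyms."""
    out = []
    for word in text.split():
        core = word.rstrip(".,!?")
        tail = word[len(core):]          # keep trailing punctuation
        replacements = SYNONYMS.get(core.lower())
        if replacements:
            repl = rng.choice(replacements)
            if core[:1].isupper():       # preserve simple capitalization
                repl = repl.capitalize()
            out.append(repl + tail)
        else:
            out.append(word)
    return " ".join(out)

rng = random.Random(7)
# Ten accounts each publish a slightly different version of one message.
variants = {spin("Great news today!", rng) for _ in range(10)}
```

Each fake account then publishes its own variant, so duplicate-content checks see many different posts instead of one copied message.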
That organic look and feel, however, also makes separating fake accounts from real ones nearly impossible. You cannot simply look at the content of an account to determine whether it is fake. Worse, real users who post like bots (for example, on a fixed daily schedule) can be mistakenly identified as fake accounts.
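The routine-posting problem can be illustrated with a tiny heuristic. The timelines and the one-hour threshold below are invented for this sketch; real platforms weigh many more signals.

```python
import statistics

def hour_regularity(post_hours):
    """Population standard deviation of the hour-of-day an account posts.
    A value near zero means a clockwork posting schedule."""
    return statistics.pstdev(post_hours)

# Hypothetical hour-of-day timelines for three accounts.
scheduled_bot = [9, 9, 9, 9, 9, 9, 9]
routine_human = [8, 8, 9, 8, 8, 9, 8]        # posts with morning coffee
casual_human = [7, 12, 23, 15, 9, 20, 18]

THRESHOLD = 1.0  # hours; an assumed cutoff, not any platform's real rule
for name, hours in [("bot", scheduled_bot),
                    ("routine human", routine_human),
                    ("casual human", casual_human)]:
    verdict = "flagged" if hour_regularity(hours) < THRESHOLD else "passes"
    print(f"{name}: {verdict}")
```

Under this rule the routine human is flagged right alongside the bot, which is exactly the false-positive problem described above.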
Real presence is the keyword here. Social media managers behind thousands of fake accounts leverage technologies like machine learning and AI to make their jobs easier, but they still add manual input to those accounts. Personal touches, direct replies, and real conversations are the differentiating factors.
So, how do we detect fake accounts? Traditional signals, such as a default avatar or a low post count, are no longer usable; they are far from effective at separating fake users from the rest. The same is true for metrics like engagement rate and reach, because fake accounts work with the rest of their network to inflate those numbers.
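A minimal rule-based check makes the point. The profile fields and thresholds below are hypothetical; the sketch only shows that a modern farmed account sails past every traditional signal.

```python
def looks_fake(profile: dict) -> bool:
    """Early-era heuristics: default avatar, few posts, no followers."""
    return (profile["has_default_avatar"]
            or profile["post_count"] < 10
            or profile["followers"] == 0)

legacy_bot = {"has_default_avatar": True, "post_count": 2, "followers": 0}
modern_bot = {
    "has_default_avatar": False,  # avatar scraped from a real user
    "post_count": 480,            # spun content, posted daily
    "followers": 1200,            # mostly other accounts in the same farm
}

print(looks_fake(legacy_bot))   # True -- the old heuristics catch this one
print(looks_fake(modern_bot))   # False -- the modern account passes every check
```

Because farm accounts now have avatars, posting histories, and follower circles supplied by the rest of the network, every one of these checks returns a clean bill of health.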
The biggest reason fake social media accounts are nearly impossible to eradicate is the social media sites themselves. They are playing a numbers game. Many of today’s social media users are also indifferent to fake accounts, treating them as harmless until they are used to manipulate issues or drive discussions.
Social media sites will not be removing fake accounts anytime soon. Aside from the near-impossible task of detecting them – and the constantly changing approaches attackers use – the sites also need their monthly active user numbers to look good. Purging fake accounts would bring those metrics down significantly.