Bot farms.
How do they work and what are the threats?
Everyone has heard the word ‘bot’ or ‘bot farm’ many times. But what is really behind this phenomenon? And why is it important for us, as consumers of digital content and as marketers, to understand how dangerous bots can be?
The word ‘bot’ itself is short for ‘robot’, and in this context it refers to artificial, anonymous or fake accounts.
In the early days of social networks, people began creating virtual personalities for anonymous communication on the internet. Many active users kept an additional account under a fictitious identity, either to avoid being blocked or to express opinions that might not be acceptable to their main audience. This was the beginning of the era of internet bots.
Over time, programs emerged that allow a single operator to manage dozens or even hundreds of such accounts simultaneously - the so-called ‘bot harvesters’. These tools have become so sophisticated that even social media administrators sometimes cannot distinguish fake accounts from real ones.
What scale are we talking about?
In 2021, Facebook blocked and deleted 1.3 billion fake accounts, while the number of active users on the network at that time was about 3 billion. In other words, the removed fake accounts amounted to almost half the size of the active user base. Impressive?
Instagram identified 95 million bots out of 1.2 billion active users.
Twitter, in turn, identified 20 million bots among 1.3 billion accounts. The sheer number of bots prompted Elon Musk to introduce paid account verification and to restrict tweet viewing for unregistered users.
What is a bot farm?
A bot farm is an organized structure that creates and controls networks of bots, or fake accounts, across social media, email and messengers.
Bot farms are used to spread false information, manipulate public opinion, and carry out cyberattacks and fraud.
The threats posed by bot farms are multilayered.
- First, they contribute to the spread of disinformation, which can disorient society and cause real harm. Inaccurate information undermines trust in the media and in government institutions.
- Second, bot farms can collect users' personal data in bulk, which threatens privacy and security. This data can be used for crimes such as identity theft or fraud.
- Third, bot farms are used to manipulate public opinion and influence election results and political processes, which can distort democratic principles and undermine trust in political institutions.
Bot farms therefore pose a serious threat to society, affecting the information space, privacy and democratic processes, which is why the fight against this phenomenon is so important.
Where do these bot farms come from?
They usually pose as coaching centers, SMM agencies or network marketing consultants.
If you're a marketer, you've probably come across them when trying to increase your Instagram or TikTok followers.
All you have to do is say you want more followers, and for a small fee they will offer you thousands of new ones, whether or not you actually need that many.
These 'followers' appear almost instantly - they are usually foreign accounts with minimal activity and a few random photos on their profiles.
But there is no benefit to such a spike: these accounts will not interact with you because they are just bots. Moreover, a sharp increase in the number of followers can lead to your account being blocked due to suspicions of violating the platform's policy.
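To illustrate why such a spike is easy to notice, here is a minimal sketch of the kind of anomaly check a platform, or a cautious account owner, might run on daily follower counts. The data and the 5x threshold are invented for the example; real platforms use far more sophisticated signals.

```python
# Minimal sketch: flag suspicious spikes in daily follower counts.
# The numbers and the 5x threshold are illustrative assumptions, not platform rules.

def flag_follower_spikes(daily_counts, ratio_threshold=5.0):
    """Return indices of days whose follower gain is ratio_threshold times
    larger than the average gain over the preceding days."""
    flagged = []
    gains = [b - a for a, b in zip(daily_counts, daily_counts[1:])]
    for i in range(1, len(gains)):
        prior = gains[:i]
        avg_prior = sum(prior) / len(prior)
        if avg_prior > 0 and gains[i] > ratio_threshold * avg_prior:
            flagged.append(i + 1)  # day index within daily_counts
    return flagged

# Example: steady growth of ~20 followers a day, then a purchased batch of 3,000.
counts = [1000, 1020, 1041, 1060, 1082, 4100, 4120]
print(flag_follower_spikes(counts))  # -> [5]
```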
How does it work?
First, accounts are created and registered in advance on platforms such as Facebook, which increases their ‘credibility’. Such accounts typically cost from $5 to $15 per thousand. Proxy addresses are then purchased for each of them, because the simultaneous use of many accounts from a single IP address is quickly noticed by social media administrators, and those profiles get blocked. Giving each bot its own unique address allows them to remain active.
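From the platform's side, the reason a shared IP address gives a farm away can be shown with a toy check: group login events by IP and flag addresses used by an unusually large number of distinct accounts. This is only a sketch; the event format and the threshold of 10 accounts per IP are assumptions made for the example, not a description of any real platform's rules.

```python
from collections import defaultdict

# Toy illustration of why shared IP addresses expose bot farms:
# group login events by IP and flag addresses used by many distinct accounts.
# The event format and the threshold are assumptions made for this sketch.

def flag_shared_ips(login_events, max_accounts_per_ip=10):
    """login_events: iterable of (account_id, ip_address) pairs.
    Returns IPs used by more than max_accounts_per_ip distinct accounts."""
    accounts_by_ip = defaultdict(set)
    for account_id, ip in login_events:
        accounts_by_ip[ip].add(account_id)
    return {ip: len(accounts) for ip, accounts in accounts_by_ip.items()
            if len(accounts) > max_accounts_per_ip}

# Example: 50 bot accounts logging in from one address stand out immediately.
events = [(f"bot_{n}", "203.0.113.7") for n in range(50)] + [("alice", "198.51.100.2")]
print(flag_shared_ips(events))  # -> {'203.0.113.7': 50}
```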
The next step is to purchase special software that manages all the bots. This is expensive software, but the success of the entire bot farm depends on its quality. Then, the bots are ‘warmed up’: over a certain period of time, accounts are filled with content that imitates the activities of real people. Bots communicate with each other, add friends, and congratulate each other on holidays - all to look like real users.
Previously, it was relatively easy to detect a bot: it was enough to run its profile photo through a reverse image search. With the advent of artificial intelligence, however, it has become possible to generate thousands of unique images of people who don't actually exist.
To create engaging content, copywriters and photographers are hired, or artificial intelligence is used to generate articles, recipes, life hacks and so on. The software automatically distributes these texts among the accounts, swapping in synonyms so that the content is not completely identical across all profiles.
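The synonym trick only goes so far, and a detection-side sketch shows why: lightly reworded posts still look nearly identical to even a simple similarity measure. The word-level Jaccard comparison and the 0.6 threshold below are illustrative assumptions, not how any particular platform actually works.

```python
import re

# Sketch of why lightly 'synonymized' posts are still easy to link:
# a simple word-level Jaccard similarity stays high when only a few
# words change. The 0.6 threshold is an assumption for this example.

def jaccard_similarity(text_a, text_b):
    words_a = set(re.findall(r"\w+", text_a.lower()))
    words_b = set(re.findall(r"\w+", text_b.lower()))
    if not words_a or not words_b:
        return 0.0
    return len(words_a & words_b) / len(words_a | words_b)

post_1 = "Try this quick recipe for a tasty dinner tonight"
post_2 = "Try this fast recipe for a delicious dinner tonight"

score = jaccard_similarity(post_1, post_2)
print(round(score, 2), "near-duplicate" if score > 0.6 else "distinct")
```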
The purpose of this stage is to draw users' attention to certain communities or pages. Take a look at your own feed: you may well be subscribed to groups such as ‘Tasty Together’, ‘Jokes from Grandpa Ivan’ or ‘Recipes from Marina Petrovna’.
And then the bots begin to do what they were created to do - spread propaganda, disinformation or other manipulations.
How to recognize a bot?
- Low account activity: bots rarely post anything or interact with other users. Their profiles usually contain minimal information, with no personal details or photos.
- Standard or repetitive content: bots often post the same type of comments or messages, which look generic and lack personality.
- Unnaturally fast responses: bots can reply to messages almost instantly, a sign of automation.
- Keyword-based responses: bots react to certain topics, but without deep understanding or context.
- A large number of followers without activity: accounts with many followers who do not interact with the content may be bots that are not yet ‘warmed up’ but are already being used to inflate follower counts. A simple scoring sketch combining these signals follows below.
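Taken together, these signals lend themselves to a basic scoring heuristic. The sketch below is purely illustrative: the field names, thresholds and weights are assumptions chosen for the example, not criteria used by any real platform.

```python
# A minimal scoring sketch that combines the signals listed above.
# Field names, thresholds and weights are illustrative assumptions,
# not rules used by any real platform.

def bot_score(profile):
    score = 0
    if profile.get("posts_per_month", 0) < 1:
        score += 1  # low account activity
    if profile.get("has_profile_photo") is False and not profile.get("bio"):
        score += 1  # minimal personal information
    if profile.get("duplicate_comment_ratio", 0.0) > 0.5:
        score += 1  # repetitive, generic content
    if profile.get("median_reply_seconds", 600) < 5:
        score += 1  # unnaturally fast responses
    if profile.get("followers", 0) > 10_000 and profile.get("avg_likes_per_post", 0) < 5:
        score += 1  # many followers, almost no engagement
    return score  # 0 = probably human, 5 = very likely a bot

suspect = {
    "posts_per_month": 0,
    "has_profile_photo": False,
    "bio": "",
    "duplicate_comment_ratio": 0.8,
    "median_reply_seconds": 2,
    "followers": 25_000,
    "avg_likes_per_post": 1,
}
print(bot_score(suspect))  # -> 5
```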
With each new advance in Internet technology, criminals emerge who seek to use these tools to manipulate people, encourage them to take certain actions, and create fear or distrust. These are all forms of social engineering, using psychological techniques to try to control the thoughts and behaviour of the audience.