Eryk Kielak (Gazeta.pl): What exactly are bots and why are they dangerous?
Dan Woods: Bots are pieces of code that automate a given task. For example, the code below is a bot that checks the balance of a bank card. For security reasons, I hid the vendor’s name.
[Image not reproduced: internet bot code (source: F5 Networks)]
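The original screenshot is not reproduced here, but a minimal sketch of what such a balance-checking bot might look like is below. The endpoint URL and JSON field names are invented stand-ins for the hidden vendor's API:

```python
import json
import urllib.request

# Hypothetical endpoint -- the real vendor's name is hidden, as in the interview.
BALANCE_URL = "https://example-card-vendor.test/api/balance"

def build_request(card_number: str, pin: str) -> urllib.request.Request:
    """Build the HTTP request the bot would send for one card/PIN pair."""
    payload = json.dumps({"cardNumber": card_number, "pin": pin}).encode()
    return urllib.request.Request(
        BALANCE_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def parse_balance(response_body: bytes) -> float:
    """Extract the balance from a JSON response like {"balance": 12.34}."""
    return float(json.loads(response_body)["balance"])
```

The dangerous part is not any single request, but that an attacker can call `build_request` in a loop over millions of candidate card/PIN pairs.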
How can such a bot be used? Is it possible to obtain bank account details in this way?
Everything a cybercriminal needs is the 16-digit number of the payment card and its PIN code. The number of their possible combinations, although high, is finite. Thus, such a bot can be used to check millions or even billions of data pairs: card number + PIN code. Cracking this information allows the attacker to empty the card of the funds available on it. The real owner of the payment card will not even realize that their funds have been stolen until they try to use them.
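As a back-of-the-envelope illustration of that finite search space (the fixed issuer prefix and the Luhn check digit are my assumptions, not details from the interview):

```python
# Rough size of the search space a card-checking bot faces.
# Assumptions: the attacker targets one issuer, so the first 6 digits
# (the BIN) are fixed, and the last digit is a Luhn check digit that is
# determined by the other digits.
total_digits = 16
fixed_bin_digits = 6
luhn_check_digits = 1
free_digits = total_digits - fixed_bin_digits - luhn_check_digits  # 9

card_numbers = 10 ** free_digits   # one billion candidate card numbers
pins = 10 ** 4                     # ten thousand possible 4-digit PINs
pairs = card_numbers * pins        # 10^13 card/PIN pairs -- huge, but finite
```

A space this size is hopeless to search by hand, but well within reach of an automated bot running for weeks from many machines.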
Are the card details the only data bots can capture? What about login data?
Malicious bots are also used to collect scattered data, such as logins that have leaked from a database, and then to use the data obtained in this way against web applications.
The attacker automates login attempts for large numbers of credentials (often thousands or millions) discovered in earlier leaks, using standard network automation tools. Because consumers habitually reuse usernames and passwords across different sites, these attacks typically succeed for 0.1 to 3.0 percent of the username-and-password pairs tried. So when a cybercriminal works through hundreds of millions or even billions of login credentials (username and password), tens of millions of accounts are hijacked.
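The arithmetic behind those numbers is simple; assuming a round one billion leaked pairs and the quoted 0.1 to 3.0 percent success rates:

```python
# Yield of a credential-stuffing run at the success rates quoted above.
# The one-billion figure is a round illustrative number, not interview data.
leaked_pairs = 1_000_000_000
low_rate, high_rate = 0.001, 0.030   # 0.1 % and 3.0 %

hijacked_low = round(leaked_pairs * low_rate)    # about one million accounts
hijacked_high = round(leaked_pairs * high_rate)  # about thirty million accounts
```

Even at the bottom of the range, a single large leak translates into a million compromised accounts.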
Does this mean that every bot is a threat?
Not all bots are bad. Googlebot scans and indexes billions of web pages to enable efficient searches. Kayak and other online travel agencies collect flight prices and hotel rates from many websites to provide their customers with the best deals. Other bots aren't good or bad, but they can be a pain. For example, a bot can buy all the tickets for a concert within 30 seconds of the start of sales and then resell them at inflated prices on the secondary market.
Sometimes it also happens that a company offers a gift for new customers, e.g. a free cup of coffee. Then someone can use the bot to create thousands of accounts to enjoy thousands of free cups of coffee. Of course, these are anecdotal and relatively harmless examples. It is worse when a criminal organization needs multiple Internet accounts to engage in money laundering …
Where else can bots attack?
Even insurance companies are at risk. When a client wants a life insurance quote, they usually need to go through an advanced questionnaire with questions about age, place of residence, occupation, and so on. After completing this data, the client receives a quote. Such a questionnaire feeds data into the analytical model that the insurer uses to estimate the price of the insurance. Unfair competitors or other third parties can use bots to reverse-engineer the insurance pricing. They provide specially crafted input data and run the process thousands (or even millions) of times. In this way, they can work out how the price is set or disrupt the algorithm's work.
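A toy sketch of this kind of pricing probe; the `quote` function below is an invented stand-in for the insurer's questionnaire endpoint, not a real model:

```python
# Hypothetical stand-in for the insurer's quote endpoint: a simple linear
# pricing model with a per-year age coefficient and a smoker surcharge.
def quote(age: int, smoker: bool) -> float:
    base = 100.0
    return base + 2.5 * max(age - 30, 0) + (80.0 if smoker else 0.0)

# A bot sweeps one input at a time, holding the others fixed, and records
# how the price responds -- gradually reconstructing the model.
ages = range(30, 35)
deltas = [quote(a + 1, False) - quote(a, False) for a in ages]
# Constant deltas reveal the hidden per-year coefficient (2.5 here).
```

Run against a real questionnaire thousands of times, the same sweep exposes the insurer's proprietary pricing logic input by input.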
Recently, due to the takeover of Twitter by Elon Musk, there is a lot of talk about bots in the context of social media. What role do bots play in them?
A few years ago, an industry company engaged F5 to investigate its user structure and real traffic. When we opened up the system data and started analyzing logins, we found that over 90 percent of all logins (sometimes even 99 percent) were performed by bots. Based on the high rate of successful logins and conversations with the client, we found that these accounts were tied to scams better known as the "American officer" or "Nigerian prince" frauds.
However, the influence of social media on public opinion is much more harmful. Imagine you control millions of Twitter, Facebook or Instagram accounts. With this number of “users” you can dictate trends and manage the discussion in the web space. You could reinforce the message you want to promote and cover up content you think is unfavorable. In this way, you can influence public opinion and voting decisions.
Back to Twitter. Having a large audience on Twitter provides an excellent opportunity to manipulate and shape views. Additionally, there is no solid barrier stopping automated logins. Thus, not only politicians and publicists influence public opinion, but also millions of bots.
Elon Musk quit the deal after finding out about the actual number of bots on the platform. Why did this influence his decision?
If a regular small business, an airline, or a hotel has 10 million online customer accounts, that number plays only a minor role in determining the company's value. What really matters is whether those online customers are spending money or not. For social media companies, where revenue is mainly generated by advertising, the number of online accounts is weighted differently. The more daily active users (DAU) a company has, the more value it brings to advertisers. The more users, the higher the advertising price.
Twitter reported that less than 5 percent of the accounts on its platform are bots. Musk denied this. Who is closer to the truth?
In my opinion, both sides are wrong. I think there are definitely more bots, but neither side will say it publicly.
How is it possible that companies like Twitter are unable to track these accounts and disable them?
Cybercriminals use many methods to hide their activities. First, bots now operate from hundreds of thousands, even millions, of IP addresses. Security teams can typically identify several hundred or several thousand of the most active IPs, but they lack the broader perspective to discover the actual source of an attack. Second, the effects of bot attacks are not always visible immediately. They often appear gradually over time, which makes them more difficult to recognize and easy to confuse with something else, such as an organic increase in traffic on the site.
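A quick calculation shows why per-IP detection fails against such distributed attacks (the request counts and the threshold below are hypothetical):

```python
# Why per-IP rate limits miss distributed bots: spread ten million
# malicious requests over a million source IPs and each IP stays far
# below any plausible per-IP alert threshold.
total_requests = 10_000_000
source_ips = 1_000_000
per_ip = total_requests // source_ips   # only 10 requests per IP
alert_threshold = 1_000                 # hypothetical per-IP threshold
flagged = per_ip > alert_threshold      # nothing gets flagged
```

This is why defenders need aggregate, behaviour-based signals rather than per-address counters.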
And do companies want to minimize the presence of bots at all?
Unfortunately, some companies, especially social media companies, are not interested in knowing the truth about bot traffic, as the truth would most likely have a negative impact on the daily active users (DAU) indicator, which translates into the price and valuation of their shares.
Most companies, however, want to know the truth about bots. Too many bots not only increase fraud losses, but also frustrate customers, who post brand-damaging comments online. Bots ruin a brand's reputation by hitting its customers. Bot activity also affects business analytics, distorting the real picture of, for example, customer experience indicators. Distorted metrics can in turn negatively impact corporate spending and decision making.
What do companies have to do to solve the bots problem on their sites?
Enterprises' activity in this area falls into two stages. First, companies need to collect information (signals) from users and customers. Every network user generates signals, such as key press timing, mouse movement, plug-in use, font modification, screen use, and hundreds of thousands of others, that tell anti-bot systems the user is a human. To be sure that you are dealing with a human, you need dozens of high-quality signals that are very difficult to counterfeit.
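A toy illustration of how such signals might be combined into a human-likelihood score; the signal names and thresholds here are invented, not F5's actual logic:

```python
# Toy scoring of behavioural signals. Real systems combine dozens of
# high-quality signals; these three names and thresholds are invented.
def human_likelihood(signals: dict) -> float:
    score = 0.0
    # Humans show variable inter-key timing; bots are often perfectly regular.
    if signals.get("keypress_interval_stddev_ms", 0.0) > 15.0:
        score += 1.0
    # Human mouse paths are curved, not straight point-to-point jumps.
    if signals.get("mouse_path_curvature", 0.0) > 0.1:
        score += 1.0
    # A real browser typically reports a varied set of installed fonts.
    if signals.get("reported_fonts", 0) > 5:
        score += 1.0
    return score / 3.0
```

Any single signal is easy to fake; it is the combination of many independent ones that is hard to counterfeit.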
The first stage of defense against bots, operating in near real time (within about 10 milliseconds), is the ongoing verification of users based on key signals. However, businesses need two lines of defense. The second line must be retrospective: this is where artificial intelligence algorithms examine the user's interactions with the website (including the aforementioned signals) over the last few days.
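The two lines of defense could be sketched like this; the signal name, threshold, and majority-vote rule are invented placeholders for the real models:

```python
def first_line_check(signals: dict) -> bool:
    """First line: a fast verdict (within the ~10 ms budget) from one
    key signal. The signal name and threshold are invented."""
    return signals.get("keypress_interval_stddev_ms", 0.0) > 15.0

def second_line_review(history: list) -> bool:
    """Second line: retrospective review of the user's interactions over
    the last days -- a stand-in for the AI models the interview mentions.
    Here, a simple majority vote over the per-interaction verdicts."""
    if not history:
        return False
    human_votes = sum(1 for s in history if first_line_check(s))
    return human_votes / len(history) > 0.5
```

The fast path blocks the obvious cases immediately; the slow path catches bots that only reveal themselves through patterns over days.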
What do the authors of bots gain, and what do companies with bots on their sites lose?
As I mentioned before, bots are not bad in themselves, as they just automate certain functions and actions. What matters is the intention with which they are used. The effects of bots deployed by cybercriminals include: an increase in the number of frauds in digital spaces; loss of reputation for the attacked company due to negative reviews and claims from victims; increased cost of defending against bots; erroneous indicators and reports on customer traffic; and a decline in confidence in concluding transactions online.
If companies cannot cope with the elimination of bots, should they hire special companies or create positions for people responsible for fighting them?
I believe that a neutral third party would only be needed for organizations that are unmotivated, or that even facilitate fraud without being a victim of it. Examples of such organizations are social media and email operators. For example, if a cybercriminal hijacks my email account, they can identify where I probably have online accounts (from the emails I have received), set up an email filter or redirect, and then go through the forgotten-password process to hijack my online accounts.
It is worth remembering that cybercriminals often do not use the obtained data immediately, but wait for the right moment. Therefore, no sign of account takeover should be underestimated.
What is the greatest possible threat to society from bots?
Individual entities suffer financial and image losses from bot attacks. In my opinion, on a macro scale these losses, although severe, are not as great a threat as the manipulation of public opinion by bots. Influencing views, election decisions, and even purchasing decisions carries a high risk. If bots were used to propagate the truth, this would not be a problem, but unfortunately bots can be, and probably are, used to spread lies and half-truths.
Has the bot phenomenon been increasing in recent years? How big is the scale?
Yes, although it is difficult to quantify. There is no doubt, however, that the problem of bots has definitely intensified recently. Over the course of six or seven years I have watched malicious and nuisance bots working against companies in virtually every aspect of their business. And whenever I think bots cannot be put to yet another use, a new type of attack appears.
Cybersecurity is a reactive activity, so our task is to observe, detect threats, and neutralize their negative effects by cutting off access. Cybercriminals don't evolve until they are forced to do so. As long as their malicious tools work, they follow the same patterns. The job of online security experts is to eliminate every single pattern that threatens internet users, one by one.