Bot Mitigation
Mitigate automated threats based on IP address reputation.
Last updated
Transparent Edge Bot Mitigation is, at its core, a curated IP address reputation database, updated in real time, that protects customer websites from malicious automated traffic.
A bot is a software application that runs automated tasks against online services. There are good bots and bad bots.
Good bots are usually web crawlers for search engines (Googlebot, Bingbot, ...) and respect robots.txt.
Bad bots crawl the web with malicious intent: probing for vulnerabilities to exploit, automating denial-of-service attacks, sabotaging websites, and so on.
Detecting a simple bot can be a trivial task, but advanced bots use an ordinary browser engine to scrape, navigate links at random intervals, and even use the mouse; they become almost indistinguishable from humans.
| Bot Type | Description |
|---|---|
| Simple | Connects from a single IP address and uses automated scripts that do not try to impersonate a browser. |
| Moderate | Uses a headless browser that can even execute JavaScript. |
| Advanced | Simulates mouse movements and clicks, mimicking human behaviour. Uses browser automation technologies. Used by botnets. |
| Evasive | Same as advanced, but leverages VPNs, proxies, and other spoofing techniques to hide. |
If you have acquired the Bot Mitigation service, you'll be able to access its settings, where you can customize the threat detection level per site.
The screenshot below shows a small part of the available options:
After you've registered a site in the Bot Mitigation settings panel, you can activate an action to take whenever a bot is detected according to those settings.
There are four available actions:
block
captcha
jschallenge
bypass
You can also define a more tailored reaction by using the botm assessment call.
For example, if you want to protect your site www.example.com and you've configured detection of IP addresses categorized as abusive with a minimum risk score of 60, the following VCL code will block the IP addresses that match those settings:
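A minimal sketch of such a rule is shown below. The `botm_assessment` subroutine and the `botm-verdict` header are hypothetical placeholders for the identifiers the service actually exposes; check the Bot Mitigation settings panel for the exact names (the abusive category and minimum risk score are configured there, not in the VCL):

```vcl
sub vcl_recv {
    if (req.http.host == "www.example.com") {
        # Run the reputation check (hypothetical subroutine name).
        call botm_assessment;

        # The panel settings (category "abusive", risk score >= 60)
        # determine whether the verdict matches (hypothetical header name).
        if (req.http.botm-verdict == "bad") {
            # Reject the request with a 403.
            return (synth(403, "Forbidden"));
        }
    }
}
```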
Of course, the condition can be anything you like; perhaps you only want to protect certain paths of your website:
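For instance, a sketch that only checks requests to a couple of sensitive paths (the paths are examples, and `botm_assessment` / `botm-verdict` remain hypothetical placeholder names):

```vcl
sub vcl_recv {
    # Only run bot detection on the login and checkout paths.
    if (req.url ~ "^/(login|checkout)") {
        call botm_assessment;
        if (req.http.botm-verdict == "bad") {
            return (synth(403, "Forbidden"));
        }
    }
}
```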
Instead of blocking the request outright, you can protect it with a JavaScript challenge that will filter out most bots and automated requests to your site.
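The same condition can trigger the jschallenge action instead of a block. As above, this is a sketch with hypothetical identifiers (`botm_assessment`, `botm-verdict`, `botm_jschallenge`); consult the dashboard for the real ones:

```vcl
sub vcl_recv {
    call botm_assessment;
    if (req.http.botm-verdict == "bad") {
        # Serve a JavaScript challenge instead of a hard block;
        # real browsers pass, most automated clients do not.
        call botm_jschallenge;
    }
}
```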
You can also force a captcha for the detected IP addresses; users who successfully complete the captcha will be able to enter your website, and the risk score of their IP address will decrease over time.
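Triggering the captcha action follows the same pattern (again, `botm_assessment`, `botm-verdict`, and `botm_captcha` are hypothetical placeholder names):

```vcl
sub vcl_recv {
    call botm_assessment;
    if (req.http.botm-verdict == "bad") {
        # Challenge the visitor with a captcha; solving it lowers
        # the stored risk score of the IP address over time.
        call botm_captcha;
    }
}
```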
Lastly, if you only want the statistics that the detection engine provides, without blocking anything, you can use the bypass option.
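In bypass mode the assessment still runs (so detections are recorded and reported) but every request is let through. A sketch, with the same hypothetical placeholder names as the previous examples:

```vcl
sub vcl_recv {
    # Collect detection statistics only; never block or challenge.
    call botm_assessment;
    if (req.http.botm-verdict == "bad") {
        call botm_bypass;
    }
}
```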
Our database is very effective and evolves in real time, but there is a faster, complementary way to perform bot detection: our soft fingerprinting script.
It is a small piece of JavaScript that you can include in your website. The script runs a test using the JavaScript engine of the browser that connects to your website. The test is very fast: it checks characteristics of the JS engine, for example the precision of floating-point numbers, and sends a small report to our API for deeper analysis.
You can find the script within the Bot Mitigation settings in our dashboard.