Web Bot Detection

Last updated: 2026-03-25 18:23:29

Overview

Web Bot Detection is a frictionless human/bot verification capability designed for web scenarios. It helps improve the identification and mitigation of bot traffic in web browser environments. It is suitable for Web/H5 pages, including webpages loaded through built-in browsers inside mobile apps or mini programs.
This feature works by injecting a lightweight JavaScript SDK (tens of KB) into webpages to accurately collect browser environment and interaction signals, and then combining these signals with server-side policies for comprehensive risk analysis and enforcement.

It mainly addresses the following risk scenarios:

  • Automated access using forged or non-genuine browser environments.
  • Malicious web scraping or credential stuffing initiated by automated frameworks controlling browsers.
  • Automated bulk operations lacking genuine user interactions, such as no mouse movements or keystrokes.
  • Malicious debugging, tampering, and bypassing of frontend security detection logic.

Note: After this feature is enabled, the cloud security platform will automatically insert the JS SDK into HTML pages returned by your site. Because this changes how page content is delivered, it may conflict with certain front-end scripts, page compatibility settings, or browser environments. We recommend validating it first in a test or canary environment before enabling it in production.
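Conceptually, the injection is a response-body rewrite: a script tag referencing the SDK is inserted into the HTML before it reaches the browser. The sketch below is illustrative only, not the platform's actual implementation; the SDK path is taken from the object table later in this document, with its version placeholder left as-is.

```javascript
// Illustrative sketch of response-phase SDK injection (not the platform's
// actual implementation). The path below is the documented SDK location,
// with [version] kept as a placeholder.
const SDK_PATH = '/_fec_sbu/hxk_fec_[version].js';

function injectSdk(htmlBody) {
  const tag = `<script src="${SDK_PATH}"></script>`;
  // Insert just before </head> when present; otherwise prepend.
  if (htmlBody.includes('</head>')) {
    return htmlBody.replace('</head>', `${tag}</head>`);
  }
  return tag + htmlBody;
}
```

Because the rewrite touches every HTML response, any script, CSP rule, or integrity check that assumes an unmodified page can break, which is why canary validation is recommended above.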

How It Works

Web Bot Detection adopts a closed-loop mechanism of “Client-Side Page Injection + Environmental/Behavioral Data Collection + State Association + Cloud-Based Decision-Making”.

  • Frictionless injection: When a real user visits a webpage, the Cloud Security Platform automatically injects the security JS SDK into the response body.
  • Signal collection: While running in the browser, the JS SDK collects only the information needed to determine whether the visit is more consistent with real-user browser behavior. It does not collect the actual content entered by the user.
    • Basic browser information: Such as browser language, screen resolution, time zone, etc.
    • Environmental risk information: Detects whether automated tools such as Webdriver are being used in the browser.
    • User interaction events: Keyboard, mouse, and touch events.
  • State association: By setting specific Cookies or appending URL tokens in asynchronous requests, the signals collected on the frontend are bound to subsequent business requests.
  • Comprehensive assessment: The cloud service analyzes risks by combining multi-dimensional signals and, based on your strategy configurations, performs actions such as Log or Deny.
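The closed loop above can be condensed into a single decision step: collected signals go in, a verdict based on the configured action comes out. The signal fields and rules below are illustrative assumptions, not the platform's actual policy logic.

```javascript
// Hypothetical sketch of the cloud-side assessment step. The signal fields
// (webdriver, cookiesEnabled, interactionCount) and the rules are
// illustrative; the real platform combines many more dimensions under
// configurable policies.
function assessRequest(signals, action = 'Log') {
  const reasons = [];
  if (signals.webdriver) reasons.push('automation tool detected');
  if (!signals.cookiesEnabled) reasons.push('cookies unsupported');
  if (signals.interactionCount === 0) reasons.push('no user interaction');
  return reasons.length > 0
    ? { verdict: action, reasons }   // handled per the configured action
    : { verdict: 'Pass', reasons };
}
```

For example, a request whose environment reports an automation driver would be classified according to the configured action (Log or Deny), while a request with normal signals passes through untouched.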

Steps

  1. Log in to the console and go to the subscribed security product page.
  2. Go to Security Settings > Policies.
  3. Select the domain for which you want to configure the security policy to enter the Security Policy editing page.
  4. Open the Bot Management tab and enable the master switch if it is turned off.
  5. Under Web Bot Detection, enable the relevant detection features and set exception scenarios.
  6. Click Publish Changes at the bottom to publish the configuration. Changes take effect within 1–3 minutes.

New Objects Introduced by the Platform

After enabling this feature, the platform will add the following objects to your pages and requests. Please make sure they do not conflict with your business logic.

  • The cloud security platform embeds the following JS SDK during the response phase:

    JS SDK                          Cache Duration
    /_fec_sbu/hxk_fec_[version].js  30 days
  • The cloud security platform adds the following Cookies during the request phase:

    Cookie Name  Duration  Applicable Protocol  Secure  HttpOnly
    FECW         10 years  HTTPS                ✓       ×
    FECA         Session   HTTPS                ✓       ×
    FECN         10 years  HTTP                 ×       ×
    FECG         Session   HTTP                 ×       ×
  • In addition, the cloud security platform will add the following URL tokens to asynchronous API requests under this site:

    Token Name  Example
    FECU        http://www.example.com/test.html?id=1&FECU=[value]
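The FECU token is appended to asynchronous request URLs as an extra query parameter, as in the example row above. A minimal sketch of that rewrite follows; only the parameter name FECU comes from the table, while the token value and function name are illustrative (real token values are opaque and generated by the SDK).

```javascript
// Append an FECU token to a request URL, mirroring what the SDK does for
// asynchronous requests. The token value here is a placeholder.
function appendFecuToken(url, tokenValue) {
  const sep = url.includes('?') ? '&' : '?';
  return `${url}${sep}FECU=${encodeURIComponent(tokenValue)}`;
}

// appendFecuToken('http://www.example.com/test.html?id=1', 'abc')
// → 'http://www.example.com/test.html?id=1&FECU=abc'
```

This is why the checklist below asks whether your asynchronous APIs tolerate additional URL parameters: a backend that rejects unknown query parameters would reject tokenized requests.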

Note: After Web Bot Detection is enabled, page content, cookies, and certain asynchronous request URLs may change. Therefore, before enabling this feature in production, we recommend carefully verifying the following:

  • Compatibility of page scripts
  • Whether front-end resources load properly
  • Whether asynchronous APIs allow additional URL parameters
  • Whether there is any strict validation logic for cookies or URL parameters
  • Whether it conflicts with CSP, caching policies, or other security configurations

Configuration Item Description

Web Bot Detection includes the following five capability dimensions. Among them, Browser Feature Verification is a core capability and cannot be disabled separately. The other capabilities can be enabled or disabled based on your business needs.

  • Browser Feature Verification
    Key function: Verifies whether the client has the basic capabilities to execute JavaScript and support cookies.
    When the action is set to Deny, this capability adapts its behavior: for general GET requests that fail verification, the platform usually issues a human-machine challenge first instead of denying directly. Requests that pass the challenge are allowed through; requests that fail are handled according to the Deny action.
    Recommendation: Basic capability, enabled by default; cannot be disabled individually.
  • Automated Tool Detection
    Key function: Inspects the client environment to determine whether the request is initiated by known automated tools (such as Webdriver or PhantomJS). Abnormal requests are handled according to the configured action.
    Recommendation: Enable. Effectively defends against intermediate and advanced crawlers.
  • Cracking Behavior Detection
    Key function: Detects attempts to tamper with or bypass the core logic of the JS SDK. Abnormal requests are handled according to the configured action.
    Recommendation: Enable. Increases the reverse-engineering cost for attackers.
  • Page Anti-Debugging
    Key function: Disrupts the use of developer tools (F12) to increase the difficulty of front-end code analysis.
    Recommendation: Enable on demand. Suitable for high-value or highly adversarial scenarios.
  • Interactive Behavior Verification
    Key function: Detects user interaction on the page. Requests that do not reach the configured minimum interaction count are handled according to the configured action. Interaction signals cover only the counts of keyboard, mouse, and touchscreen events, and do not involve any sensitive user information.
    Parameters:
      • Verification URI: The URI for which interactive behavior verification is required. Example: /test?id=1.
      • Minimum Number of Interactions: The minimum combined count of keyboard key presses, mouse clicks, and touchscreen movements on the current page before a request is made.
      • Matching Method: Fully matches the Verification URI by default; if regular expression matching is selected, the Verification URI is treated as a regular expression.
    Recommendation: Enable as needed. Suitable for sensitive interfaces that explicitly require user interaction (such as login or ticket submission).
    When configuring Interactive Behavior Verification, we recommend that each rule verify only one URI. If regular expression matching is used, validate the pattern thoroughly to avoid matching unintended URIs.
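An Interactive Behavior Verification rule can be read as: match the request URI (full match by default, or regex if selected), then compare the page's combined interaction count against the configured minimum. The rule shape below is an assumption for illustration; only the parameter semantics come from the description above.

```javascript
// Evaluate one Interactive Behavior Verification rule (illustrative shape:
// { uri, minInteractions, useRegex }).
function checkInteraction(rule, requestUri, interactionCount) {
  const matched = rule.useRegex
    ? new RegExp(rule.uri).test(requestUri)
    : rule.uri === requestUri;   // full match is the default
  if (!matched) return 'not applicable';
  return interactionCount >= rule.minInteractions
    ? 'pass'
    : 'apply configured action';
}
```

Note how a broad regex such as `^/test` would also match `/test2` or `/testimonials`; this is exactly why the guidance above recommends one URI per rule and thorough validation of regex patterns.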

Exception Scenarios

In some specific business scenarios, you may need to configure exceptions to ensure that your business operates normally:

Scenario 1: HTML pages that should not have JavaScript injected

If certain HTML pages have compatibility issues with JavaScript injection, you can add a URI exception for those pages.

Scenario 2: Ajax request exceptions

The JavaScript file delivered by Web Bot Detection performs continuous verification by dynamically appending URI tokens to Ajax requests. Since Ajax requests are typically not cached, this may cause repeated responses and increase network bandwidth usage. Therefore, such requests are excluded by default.

  • This is a trade-off between security and cost. If you wish to enhance the access security of such requests, you may remove the Ajax request exception.
  • However, if your website enforces strict validation on URL parameters, it is not recommended to remove the Ajax request exception.

Scenario 3: Exceptions for non-HTML application requests

If your website receives requests from the following types of clients, you should configure exceptions in Custom Request Exceptions based on request characteristics before enabling Web Bot Detection. This helps prevent these requests from being incorrectly processed and affecting normal business operations.

  • Native App
  • Native mini program
  • API calls of third-party applications

Custom Request Exception fields:

  • Name: Enter a name for the exception rule.
  • Rule Description (Optional): Enter a description for the rule.
  • Match Conditions: Fill in the matching conditions based on your application requirements. Multiple match conditions are supported, and they are evaluated with AND logic. Within a single match condition, you can enter multiple matching values separated by delimiters; these values are evaluated with OR logic.
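The evaluation order described above (AND across conditions, OR across the values within one condition) can be sketched as follows. The condition shape and field names are illustrative assumptions; the console defines the actual matchable fields.

```javascript
// Evaluate Custom Request Exception match conditions (illustrative).
// conditions: [{ field: 'User-Agent', values: ['MyApp', 'MiniProgram'] }]
// request: a plain object mapping field names to their request values.
function matchesException(conditions, request) {
  // All conditions must hold (AND) ...
  return conditions.every(cond => {
    const actual = request[cond.field] ?? '';
    // ... while any one value within a condition may match (OR).
    return cond.values.some(v => actual.includes(v));
  });
}
```

For instance, an exception with a User-Agent condition listing a native app identifier and a path condition for an API prefix matches only requests satisfying both, letting genuine app and API traffic bypass Web Bot Detection without exempting ordinary browser traffic.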

Best Practices

To ensure a smooth transition for your business and maximize security effectiveness, we recommend following the three-step strategy outlined below:

  • Focus on front-end environment compatibility (before launch)
    If your site uses CSP, front-end integrity checks, resource cache controls, or special page rendering logic, confirm compatibility with the JavaScript injection mechanism before enabling the feature.

  • Start with canary rollout, then gradually expand (initial deployment stage)
    For sensitive pages such as payment, transaction, and login pages, we recommend enabling the feature for a small subset of traffic first. Observe whether it affects the normal user experience, and then gradually expand the scope.

  • Apply precise protection to high-risk assets (policy refinement)
    Prioritize Web Bot Detection coverage for core business scenarios, such as registration, login, password recovery, SMS sending, order placement, flash sales, and other scenarios that are commonly abused at scale by malicious automated traffic.