Intelligent simulation

Intelligent simulation is a risk experimentation feature provided by Antom Shield Pro, designed to help you perform comprehensive risk assessments and performance predictions before launching new rules.

By backtesting against historical data and predicting potential outcomes, you can thoroughly understand how new rules are expected to perform after deployment, including changes in transaction decisions, improvements in fraud interception capability, and the overall business impact. This enables you to make informed, data-driven release decisions.

With this feature, you can assess and forecast the impact of new or combined rules without affecting live transactions, ensuring the safety and effectiveness of rule releases.

Below are the complete steps to evaluate the risks and outcomes of new rules using Intelligent simulation:

Step 1: Create an experiment

In the Shield menu, select Risk lab, click the Create experiment button to enter the experiment creation process, and configure the following parameters:

  • Experiment name: Enter a name for the experiment.
  • Backtest date range: You can customize the historical transaction period used for the simulation.

Note: It is recommended to select more than 90 days of historical data to ensure coverage of various business scenarios and to improve the accuracy and representativeness of the simulation results.

  • Mock rules:
    • Include all online rules: Enable the Include all currently enabled rules in this experiment option. The system will include all currently published rules in the experiment to compare the impact of the new rules on the overall risk strategy.
    • Add rules to be tested: Click the Add rule button to manually add one or more rules for testing. Each experiment supports flexible rule combinations, with up to 20 rules allowed.
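The parameters above can be pictured as a simple configuration payload. The sketch below is purely illustrative: the field names and the `build_experiment` helper are assumptions for this example, not an actual Antom Shield API. It does, however, encode the two documented constraints: the 90-day history recommendation and the 20-rule limit per experiment.

```python
from datetime import date

MAX_RULES_PER_EXPERIMENT = 20  # documented per-experiment limit

def build_experiment(name, start, end, include_online_rules, test_rules):
    """Assemble a hypothetical experiment config; rejects invalid rule counts."""
    if (end - start).days < 90:
        # Shorter windows are allowed, but results may be less representative.
        print("Warning: backtest range is under the recommended 90 days.")
    if len(test_rules) > MAX_RULES_PER_EXPERIMENT:
        raise ValueError(f"At most {MAX_RULES_PER_EXPERIMENT} rules per experiment")
    return {
        "experiment_name": name,
        "backtest_range": (start.isoformat(), end.isoformat()),
        "include_all_online_rules": include_online_rules,
        "mock_rules": list(test_rules),
    }

config = build_experiment(
    "3ds-threshold-test",
    date(2024, 1, 1), date(2024, 4, 1),   # 91 days of history
    include_online_rules=True,
    test_rules=["high_amount_3ds", "new_device_decline"],
)
```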

Step 2: Run the experiment

Once started, the system automatically runs the simulation on the selected historical data; no manual intervention is required.

During testing, all rules are executed in a simulated environment only, with no impact on real transaction processing or decision logic.
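Conceptually, this is shadow-mode backtesting: each rule is evaluated against historical transactions, and the resulting decision is only recorded, never applied. The sketch below illustrates the idea under assumed conventions (the transaction fields, the toy rules, and the APPROVE/3DS/DECLINE severity ordering are all illustrative, not Antom's internal logic).

```python
def evaluate(rules, txn):
    """Return the most severe decision any rule produces (DECLINE > 3DS > APPROVE)."""
    severity = {"APPROVE": 0, "3DS": 1, "DECLINE": 2}
    decision = "APPROVE"
    for rule in rules:
        outcome = rule(txn)
        if outcome and severity[outcome] > severity[decision]:
            decision = outcome
    return decision

def run_backtest(rules, historical_txns):
    """Simulate decisions offline; the live decision on each txn is untouched."""
    return [{"txn": t, "simulated": evaluate(rules, t)} for t in historical_txns]

# Two toy rules: decline very large amounts, step up new devices to 3DS.
rules = [
    lambda t: "DECLINE" if t["amount"] > 10_000 else None,
    lambda t: "3DS" if t["new_device"] else None,
]
history = [
    {"amount": 50, "new_device": False},
    {"amount": 200, "new_device": True},
    {"amount": 25_000, "new_device": True},
]
results = run_backtest(rules, history)
# Simulated decisions: APPROVE, 3DS, DECLINE
```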

Step 3: View experiment results

After the experiment completes, you can view the full simulation analysis report on the experiment details page, including:

  • Rule Hits and Decision Distribution: Displays how transactions were processed during the simulation, including:
    • Number and total amount of approved transactions
    • Number and total amount of transactions triggering 3DS authentication
    • Number and total amount of declined transactions
  • Fraud Detection Performance: Based on historical fraud feedback data, the simulation compiles statistics on successfully detected fraud cases, including:
    • Number and total amount of fraudulent transactions
    • Changes in fraud chargeback volume
  • Comparison with Current Rules: The system automatically compares the simulation results of the experimental rules with the actual performance of your current live rules. This comparison visually presents changes in key indicators such as transaction approval rate, decline rate, and 3DS verification rate, helping you quickly evaluate the potential benefits or risks introduced by the new rules.
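The report metrics above can be derived from the recorded decisions in a straightforward way. The following is a minimal sketch of that calculation; the decision labels, amounts, and the `decision_report` helper are assumptions made for illustration, not the format Antom returns.

```python
from collections import Counter

def decision_report(decisions, amounts):
    """Per-decision count, total amount, and share of all transactions."""
    counts, totals = Counter(), Counter()
    for decision, amount in zip(decisions, amounts):
        counts[decision] += 1
        totals[decision] += amount
    n = len(decisions)
    rates = {d: counts[d] / n for d in counts}
    return counts, totals, rates

# Hypothetical simulated vs. live decisions on the same historical window.
simulated = ["APPROVE", "APPROVE", "3DS", "DECLINE"]
live      = ["APPROVE", "APPROVE", "APPROVE", "DECLINE"]
amounts   = [100, 250, 80, 500]

sim_counts, sim_totals, sim_rates = decision_report(simulated, amounts)
live_counts, _, live_rates = decision_report(live, amounts)

# Key-indicator delta, e.g. the change in approval rate:
delta_approval = sim_rates.get("APPROVE", 0) - live_rates.get("APPROVE", 0)
# 0.50 simulated vs. 0.75 live: the new rules cut approvals by 25 points
```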

Notes:

  • If the experimental results do not meet expectations, you can copy the experiment, modify the rule conditions or parameters, and re-run it to iterate and optimize continuously.
  • Intelligent Simulation is an essential tool for optimizing your risk control strategy. Combine results with list hit records, transaction analysis, and other data to evaluate how rule adjustments affect both business security and conversion performance.

Step 4: Deploy the rule

After confirming the rule configurations are correct, you can publish experimental rules at any time.

Once published, you can view the rule status and performance data via Smart engine > Rules.

Notes:

  • Each experiment runs in an independent testing environment, supporting flexible rule combinations (up to 20 rules per experiment).
  • To ensure simulation quality, it is recommended that your account contains at least 90 days of historical transaction data before using this feature.
  • If a rule shows no matches during simulation, check whether its matching conditions are overly strict or incorrectly configured.
  • All simulation tests are conducted offline and have no impact on real transactions. A rule only becomes active once you manually click Launch.