
AiMod API Integration Guide for Fraud Detection

Overview

The AiMod API integrates with your app's backend without affecting or changing existing workflows. It involves three steps:

  1. Send fraud labels: Submit each fraud determination—whether from automated fraud detection systems or human fraud analysts—to the API.
  2. Request automated decisions: Query the API whenever you require an AI-powered fraud decision.
  3. Act on the decision: When AiMod recommends that action be taken on a suspicious account or transaction, configure your backend to perform any checks necessary and then take the appropriate action.

Integration Diagram

Setup

We create a separate deployment for each customer to enhance data privacy and security. You'll have both a production and a development API:

[yourcompany].musubilabs.ai
[yourcompany].dev.musubilabs.ai

You can find detailed API documentation at the /docs path. See our generic docs as an example.

Authentication

The API is authenticated via API keys passed in the Musubi-Api-Key header:

GET https://aimod.musubilabs.ai/api/version
accept: application/json
Musubi-Api-Key: [your API key here]

We'll securely provide you with API keys when we provision your deployment.
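
For reference, here's a minimal sketch of an authenticated request in Python using the requests library. The hostname and key are placeholders; substitute your own deployment and credentials.

import requests

API_KEY = "your-api-key"                          # provided securely when we provision your deployment
BASE_URL = "https://yourcompany.musubilabs.ai"    # or https://yourcompany.dev.musubilabs.ai

# Simple authenticated call to confirm connectivity and check the deployed API version
resp = requests.get(
    f"{BASE_URL}/api/version",
    headers={"accept": "application/json", "Musubi-Api-Key": API_KEY},
    timeout=10,
)
resp.raise_for_status()
print(resp.json())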

Sending Fraud Labels to the /events endpoint

AiMod is particularly effective because it uses user-level context to make fraud decisions. To train AiMod, you set up your app's backend to send an API request every time a fraud determination is made—whether by an automated fraud detection system or a human fraud analyst.

Labels can come from multiple sources:

  • Automated fraud detection systems: Rules engines, ML models, or third-party fraud detection services that flag or clear accounts
  • Human fraud analysts: Manual reviews and investigations
  • Bulk actions: When fraud analysts identify a cluster of fraudulent accounts or a fraud ring

Use the /moderation/events endpoint to send us labels. This endpoint returns immediately, so you can treat it as a "fire and forget" call.

It requires two main pieces of information:

  • Fraud Action Information: information about the determination as a ModerationActionTakenV1Model (docs)
  • Subject Details: detailed information about the subject that was evaluated, typically a user account or transaction.

Fraud Action Information

Information about the determination includes the following fields (an example request follows the list):

  • action: this is the specific action that was taken. Common actions include:
      • Account-level: APPROVE (legitimate), SUSPEND, BAN, REQUIRE_VERIFICATION
      • Transaction-level: APPROVE_TRANSACTION, BLOCK_TRANSACTION, HOLD_TRANSACTION
    In the simplest case, there will be two actions that map to AiMod's automated decisions: one for no action being taken (e.g., APPROVE) and one for an action being taken (e.g., BAN or BLOCK_TRANSACTION). Some fraud detection systems don't have an explicit "approval"; instead, a case is simply closed with no further action. In these cases, we request that you send an event with APPROVE as the action.
  • moderatorId: this is the identifier of the source that made the determination. For automated systems, this could be the system name (e.g., rules_engine_v2, velocity_check, device_fingerprint_model). For human analysts, this could be an email address or internal identifier.
  • moderatorSource: this identifies the category of source. Examples:
      • automated_rules for rules-based fraud detection
      • ml_model for machine learning fraud models
      • third_party_fraud_service for external fraud detection services
      • fraud_analyst_team for human fraud analysts
  • aiModDecisionId: if a decision was made by AiMod, please submit the decision record ID in this field. This allows us to determine which of AiMod's decisions were acted upon and may be used for billing.
  • comments: Include any fraud indicators, detection reasons, or analyst notes regarding the determination.
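
Putting this together, here's a hedged Python sketch of a label event for a ban issued by an automated rules engine. Only the fields listed above come from this guide; the envelope, nesting, and example values are illustrative, so consult the schema at /docs for the exact request format.

import requests

BASE_URL = "https://yourcompany.musubilabs.ai"
HEADERS = {"accept": "application/json", "Musubi-Api-Key": "your-api-key"}

# Illustrative payload: documented fields (action, moderatorId, moderatorSource, comments)
# plus a subject-details snapshot. Exact nesting may differ; see /docs.
event = {
    "action": "BAN",
    "moderatorId": "rules_engine_v2",
    "moderatorSource": "automated_rules",
    "comments": "Velocity rule tripped: 14 cards added in 24 hours",
    "subject": {                     # subject details snapshot; see the next section
        "accountId": "acct_123",
        "accountInfo": {"createdAt": "2024-05-01", "verificationStatus": "unverified"},
    },
}

# Fire and forget: the endpoint returns immediately, so failures can simply be logged.
resp = requests.post(f"{BASE_URL}/moderation/events", json=event, headers=HEADERS, timeout=10)
resp.raise_for_status()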

Returning AiMod's Decisions

Once AiMod starts returning recommended decisions, we ask that you send any actions taken based on AiMod's decisions back to us via the events endpoint. The moderatorId and moderatorSource for these events should be prefixed with musubi. This serves two purposes: 1) it allows us to understand which recommendations have been acted on (thus "closing the loop"), and 2) it improves system performance by letting us include AiMod's past decisions in model training.
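
For instance, an event that closes the loop on an AiMod decision might look like the sketch below (posted to /moderation/events like any other label; the envelope and values are illustrative, but note the musubi prefixes and the aiModDecisionId):

# Illustrative closing-the-loop event for an action taken on an AiMod recommendation
closing_event = {
    "action": "BAN",
    "moderatorId": "musubi_aimod",            # moderatorId prefixed with musubi
    "moderatorSource": "musubi_automated",    # moderatorSource prefixed with musubi
    "aiModDecisionId": "dec_abc123",          # the decision record ID returned by AiMod
    "comments": "Automated action taken on AiMod recommendation",
    "subject": {"accountId": "acct_123"},
}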

Subject Details

This is where the magic happens. AiMod is extremely good at identifying patterns in the subject details that indicate fraudulent activity. We think of the subject details as a snapshot of the subject at the moment in time that you're making the API call.

Flexible schema

AiMod supports a flexible schema, so provide the subject details as a nested JSON object in whatever format is easiest for you. We'll do the rest.

We recommend providing at least the information that's available to your fraud analysts as they review cases. This is often easy to fetch from the internal API that drives your fraud investigation UI. Beyond that, the more relevant information you can provide, the better: it gives AiMod more signals it can use to identify behavior patterns that indicate fraudulent actors.

Examples of Subject Details

Data                   Description
account info           Creation date, verification status, account type
identity signals       Email domain, phone carrier, name matching scores
device fingerprints    Device IDs, browser fingerprints, user agents
network info           IP addresses, VPN/proxy detection, geolocation
behavioral patterns    Login frequency, session patterns, navigation paths
transaction history    Recent transactions, amounts, merchants, velocity
payment methods        Card details (BIN, issuer), payment method age, # of methods
linked accounts        Accounts sharing devices, IPs, payment methods, or identities
risk scores            Internal or third-party fraud scores and signals
historical flags       Past fraud alerts, chargebacks, disputes

All fields are optional; these are common fields that we typically find useful.
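
As a rough illustration, a subject-details snapshot might look like the Python dictionary below. Every field name here is an assumption; mirror whatever structure you already use internally.

subject_details = {
    "accountId": "acct_123",
    "accountInfo": {"createdAt": "2023-11-02", "verificationStatus": "unverified", "type": "personal"},
    "identitySignals": {"emailDomain": "mailinator.com", "phoneCarrier": "voip", "nameMatchScore": 0.31},
    "deviceFingerprints": {"deviceIds": ["dev_9f2"], "userAgent": "Mozilla/5.0 ..."},
    "networkInfo": {"ipAddresses": ["203.0.113.7"], "vpnDetected": True, "geolocation": "RO"},
    "transactionHistory": [          # most recent first; truncate long lists (see below)
        {"id": "txn_881", "amountUsd": 950.00, "merchant": "giftcards.example", "ts": "2025-01-14T03:12:00Z"},
    ],
    "paymentMethods": [{"bin": "411111", "issuer": "Example Bank", "ageDays": 2}],
    "riskScores": {"internalScore": 87, "thirdPartyScore": 0.92},
    "historicalFlags": {"chargebacks": 2, "priorAlerts": 1},
}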

Truncate lists

We recommend limiting lists like transactions, login events, or IP addresses to the most recent 50-100 instances. We're always most interested in recent behavior, as this gives AiMod the best insight into why fraud action may need to be taken.
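
Continuing the sketch above, one way to truncate list fields before sending (the field names and limit are illustrative):

MAX_ITEMS = 100   # keep roughly the 50-100 most recent entries

# Keep only the newest transactions; ISO-8601 timestamps sort correctly as strings.
subject_details["transactionHistory"] = sorted(
    subject_details["transactionHistory"], key=lambda t: t["ts"], reverse=True
)[:MAX_ITEMS]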

Requesting Decisions from the /events/trigger endpoint

When requesting a decision, you provide the Subject Details just as you would when sending a label, but instead of supplying the fraud determination yourself, AiMod returns a recommended decision.

Use the /moderation/events/trigger endpoint to request a decision.

When to Request a Decision

Start off by requesting an AiMod decision at the moment suspicious activity is detected. If the typical flow is that an automated system (or analyst) flags an account, which is then put in your fraud review queue, it's most effective to send the alert to AiMod just before submitting it to your review queue.

You may also request decisions for real-time transaction screening, new account risk assessment, or periodic account reviews.

Alert Information

The API expects some basic information about the fraud alert (an example request follows the list):

  • timestamp: This is the time that the suspicious activity was detected and a case was created.
  • triggerReasons: The reason(s) that review is being requested. Examples may include SUSPICIOUS_TRANSACTION, ACCOUNT_TAKEOVER, FAKE_ACCOUNT, VELOCITY_EXCEEDED, DEVICE_MISMATCH, CHARGEBACK_PATTERN, etc. These should map to the general reason why an account was flagged for review. Currently only a single trigger reason is accepted per request.
  • comments: This can include any plain-text reason or description why the account is being flagged for review. Examples include detection system output, rule names that triggered, or analyst notes.
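
Here's a hedged sketch of a decision request; the envelope and values are illustrative, so check the /moderation/events/trigger schema at /docs for the exact format.

import requests

BASE_URL = "https://yourcompany.musubilabs.ai"
HEADERS = {"accept": "application/json", "Musubi-Api-Key": "your-api-key"}

trigger = {
    "timestamp": "2025-01-14T03:15:00Z",        # when the suspicious activity was detected
    "triggerReasons": ["VELOCITY_EXCEEDED"],    # currently a single trigger reason per request
    "comments": "Rule velocity_check_v3 fired: 14 cards added in 24 hours",
    "subject": {"accountId": "acct_123"},       # same subject-details snapshot as for labels
}

# Decisions typically take ~10 seconds, so allow a generous timeout.
resp = requests.post(
    f"{BASE_URL}/moderation/events/trigger", json=trigger, headers=HEADERS, timeout=60
)
resp.raise_for_status()
decision = resp.json()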

Alert-based Configuration

Trigger reasons must be configured in the system and enabled for AiMod to make a decision on them. If a specified trigger reason is not in the system or is not enabled, the event will be accepted but no decision will be made. This feature is useful in cases where we want to expand AiMod's decisioning to additional alert types. You can start submitting decision requests for additional alert types, and these records can be used to train and evaluate the model before decisions are returned.

Development Considerations

A response takes around 10 seconds, so we recommend designing your system to handle responses as slow as 50 seconds to be safe. We also recommend batching or making decision requests in parallel to achieve higher throughput.

We request that you start submitting decision requests before the ML training phase so that we can train and test our models on the data that will be available at decision time. Since no decisions will be returned at this point in the project, you can either 1) allow the requests to time out and ignore the errors, or 2) set the wait flag to false, which bypasses the decision prediction and returns a response immediately.
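
A minimal sketch of a request helper that tolerates slow responses and the training phase, reusing BASE_URL and HEADERS from the earlier examples. Passing the wait flag in the request body is an assumption; confirm where it belongs in /docs.

import requests

def request_decision(payload, wait=True):
    # During the ML training phase, either pass wait=False or simply ignore
    # timeouts, since no decision will be returned yet.
    try:
        resp = requests.post(
            f"{BASE_URL}/moderation/events/trigger",
            json={**payload, "wait": wait},     # assumption: wait travels in the request body
            headers=HEADERS,
            timeout=60,                         # responses are ~10s; design for up to ~50s
        )
        resp.raise_for_status()
        return resp.json()
    except requests.RequestException:
        return None   # treat errors and timeouts like a SKIP: fall back to the review queue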

Returned Decisions

AiMod returns decisions in the format specified in the DecisionRecordModel (docs).

A model score is returned that ranges from 0 to 100. The higher the score, the more likely the account or transaction is fraudulent. These scores are thresholded to return recommended decisions.

There are currently three decision types: APPROVE, SKIP, and BAN. AiMod will return APPROVE for scores below an "approve threshold" and BAN for scores above a "ban threshold". The approve and ban thresholds are configurable in the admin dashboard. If AiMod is not confident in the decision (and the score falls between the approve threshold and ban threshold), it will respond with a SKIP. These are ambiguous or complex cases that are best left to fraud analysts to handle as usual.

Holdout sets

A random subset of accounts is "set aside" in a holdout set. For these accounts, the holdout field in the returned decision will be set to true.

Records in the holdout set should be passed to fraud analysts for review, similar to the SKIP decisions.

You can think of the holdout set as a view into what regular fraud review would look like without AiMod. AiMod will still make decisions for each account in the holdout set, but since these accounts are also passed to the fraud team, we get a corresponding human decision too. This allows us to compare agreement between AiMod's decisions and the analysts' decisions to monitor system performance during production. The holdout set also improves model performance since we get analyst decisions for cases that otherwise would have automated decisions from AiMod.

The holdout rate determines the percentage of accounts that are put in the holdout set and can be configured in the admin dashboard. We set a high holdout rate initially as we roll out decisions in production, then step down the rate as we verify things are behaving as expected. We recommend a holdout rate of about 5% in steady state.

Actioning on AiMod's Decisions

We recommend this logic (a minimal code sketch follows the list):

  • If AiMod responds with SKIP or an error:
      • Send the case to the fraud review queue as usual
  • If AiMod recommends an action:
      • Perform any checks to be sure the action is reasonable
      • Perform the action
  • If an account is in the holdout set (holdout=true):
      • Ignore AiMod's decision and send the case to the fraud review queue as usual
  • If the record is a test record (debug=true):
      • Ignore AiMod's decision
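
A minimal sketch of this routing, assuming a decision record exposing decision, holdout, and debug fields as described above (the exact key names are in the DecisionRecordModel docs), with hypothetical helpers standing in for your own systems:

def route_decision(case, decision):
    # send_to_review_queue, passes_pre_action_checks, and take_action are
    # hypothetical placeholders for your existing fraud tooling.
    if decision is not None and decision.get("debug"):
        return                                         # test record: ignore AiMod's output
    if decision is None or decision.get("decision") == "SKIP" or decision.get("holdout"):
        send_to_review_queue(case)                     # errors, SKIPs, and holdouts go to analysts
        return
    if passes_pre_action_checks(case):                 # e.g. not already actioned, not whitelisted
        take_action(case, decision["decision"])        # act on the recommendation (e.g. BAN)
    else:
        send_to_review_queue(case)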

Double Checking

You may want to perform these checks before acting on an AiMod decision. We're happy to advise on the best approach here.

  • Confirm that the user hasn't already been actioned
  • Check if the user is a trusted or whitelisted account
  • Check if automated actioning is inappropriate. Possible examples:
      • a high-value customer with significant transaction history
      • a premium or enterprise customer
      • an account with active pending transactions above a certain value

Testing Flow

We recommend following this testing sequence to ensure a smooth integration:

Phase 1: Development Environment Testing

Running Mode: OFF

  1. Test the /events endpoint
      • Send sample fraud labels to the /events endpoint
      • Expect a 200 successful response

  2. Test the /events/trigger endpoint
      • We won't be able to test predictions since we don't have a trained model yet, but we can test that the data schema is correct
      • Sending sample trigger requests with Running Mode set to OFF will bypass the prediction step
      • Optionally, you can set wait=false to get a response immediately and prevent the request from timing out

  3. Verification
      • Ask the Musubi team to confirm that data was ingested successfully and that API fields are filled out correctly to support ML training

Phase 2: Production During ML Training Period

Running Mode: OFF

  1. Enable production data flow
      • Turn on the flow of production fraud labels to /events
      • Turn on the flow of trigger requests to /events/trigger
      • This data will be collected and used for model training

  2. Handle trigger requests
      • You can optionally set wait=false for trigger events to get a response immediately
      • Otherwise, trigger requests will time out if wait=true

Phase 3: Production Testing with Trained Model

Running Mode: TEST

Once the model is trained and deployed, start returning decisions in test mode:

  1. Configure running mode
      • The AiMod running mode will be set to TEST
      • This will start triggering predictions and will return response payloads with debug=true
      • All debug=true responses should be ignored

  2. Configure trigger requests
      • Ensure trigger events use wait=true to wait for a prediction to be returned

  3. Configure holdout rate and decision thresholds
      • The holdout rate will initially be set to 100%
      • This will return response payloads with holdout=true
      • All holdout=true responses should be sent to the fraud team for normal review (similar to SKIP decisions)

Phase 4: Production Rollout

Running Mode: ON

Once you're ready for an incremental rollout:

  1. Enable automated actioning
      • Once the Running Mode is set to ON, debug will be set to false. If the recommended actioning logic is in place, this enables automated actioning on AiMod's decisions.

  2. Option 1: Gradual rollout via holdout rate
      • Via the AiMod admin dashboard, reduce the holdout rate (to 99% or 95%, for example) to slowly enable real decisioning for a subset of accounts
      • Monitor system performance and agreement between AiMod and fraud analysts
      • Gradually reduce the holdout rate if things continue to look good
      • Target a steady-state holdout rate of about 5-10%

  3. Option 2: Gradual rollout via decision thresholds
      • Via the AiMod admin dashboard, set the decision thresholds to be very conservative (e.g., 1% for approve and 99% for ban) and the holdout rate near the target (e.g., 5-10%)
      • This will result in AiMod skipping nearly all cases, except those it is very confident about
      • Monitor system performance and agreement between AiMod and fraud analysts
      • Gradually raise the approve threshold and lower the ban threshold if things continue to look good
      • Stop adjusting thresholds once you're satisfied with the tradeoff between agreement and action rates

Integration Checklist

Here's a checklist for a successful integration:

Fraud Labels

  • Send events for all fraud analyst determinations
  • Send events from automated fraud detection systems (rules, ML models, third-party services)
  • Send "APPROVE" events if cases are closed with no action
  • Send events related to decision reversals (unbans, dispute resolutions)
  • Send AiMod's decisions with musubi prefixes

Request Decision

  • Determine full set of trigger reasons for us to configure
  • Start sending requests during the integration phase

Automated Actioning

  • Create automations for approvals and actioning
  • Build in relevant checks or validations before actioning
  • Pass SKIP decisions to fraud analysts
  • Pass holdout decisions to fraud analysts
  • Ignore debug decisions