Musubi Integration Documentation

AI-powered Trust & Safety tools for platforms at scale.


AiMod

AiMod automates 70–90% of the decisions handled by your Trust & Safety team. It combines deep behavioral analysis with content understanding to make user-level decisions, looking at the full picture: what someone posts, how they behave, and the context around them. AiMod learns and adapts in real time by watching your moderators and fraud analysts make decisions.

Report types: Spam, scams, fraud, illegal activity, impersonation, bot activity, underage usage, harassment, hate speech, and other custom policy violations
Content types: Reported user, conversation, message, image, video, audio, post, transaction, or product listing
Actions: Approve, ban, escalate, suspend account, take down content, warn user, flag for review, and more
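The report, content, and action types above imply a shape for the data a platform sends to AiMod. The sketch below is purely illustrative: the `AimodReport` class, its field names, and the value sets are assumptions for this example, not Musubi's documented API.

```python
# Hypothetical sketch of an AiMod report payload. The AimodReport shape,
# field names, and value sets below are illustrative assumptions, not
# Musubi's documented API.
from dataclasses import dataclass, field

# Report types, content types, and actions taken from the lists above.
REPORT_TYPES = {
    "spam", "scam", "fraud", "illegal_activity", "impersonation",
    "bot_activity", "underage_usage", "harassment", "hate_speech",
}
CONTENT_TYPES = {
    "reported_user", "conversation", "message", "image", "video",
    "audio", "post", "transaction", "product_listing",
}
ACTIONS = {
    "approve", "ban", "escalate", "suspend_account",
    "take_down_content", "warn_user", "flag_for_review",
}


@dataclass
class AimodReport:
    """One user-level report: who was reported, why, and with what evidence."""
    reported_user_id: str
    report_type: str
    content: list[dict] = field(default_factory=list)

    def validate(self) -> None:
        # Reject report or content types outside the documented lists.
        if self.report_type not in REPORT_TYPES:
            raise ValueError(f"unknown report type: {self.report_type}")
        for item in self.content:
            if item.get("type") not in CONTENT_TYPES:
                raise ValueError(f"unknown content type: {item.get('type')}")


report = AimodReport(
    reported_user_id="user_123",
    report_type="spam",
    content=[{"type": "message", "text": "Click here for free crypto!!"}],
)
report.validate()  # raises ValueError on an invalid report; passes here
```

A real integration would serialize this payload and send it to AiMod, which returns one of the actions listed above (approve, ban, escalate, and so on).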

AiMod is highly scalable

We've made 1 million Trust & Safety decisions during a 24-hour period in production.

Guides


PolicyAI

PolicyAI allows you to define your policies in plain language and apply them across your platform.

Guides

Features

Custom policies: Craft policies tailored to your needs
Policy versioning: Track changes to your policies over time with full version history
Custom outputs: Define custom output fields for your policies beyond the default assessment
Multimodal: Moderate multiple pieces of content together holistically
Evaluate policies: Test your policies against curated datasets to ensure they're set up just right
Manage datasets: Upload and manage test datasets for evaluating your policies
Decision history: Full audit trail of all labeling decisions for review and compliance
Content Atlas: Cluster and visualize content to discover patterns and emerging trends
Diagnosis tools: Debug and understand why your policy made a specific decision
Policy converter: Easily convert human-written policies into LLM-optimized formats
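To make the feature list concrete, here is a minimal sketch of a plain-language policy with custom output fields and version history. The `Policy` class and its methods are illustrative assumptions for this example, not Musubi's documented API.

```python
# Hypothetical sketch of a PolicyAI policy object. The Policy class,
# its fields, and update() are illustrative assumptions, not Musubi's
# documented API.
from dataclasses import dataclass, field


@dataclass
class Policy:
    name: str
    text: str                  # the policy, written in plain language
    output_fields: list[str]   # custom outputs beyond the default assessment
    versions: list[str] = field(default_factory=list)  # full version history

    def update(self, new_text: str) -> None:
        """Record the previous text before replacing it (policy versioning)."""
        self.versions.append(self.text)
        self.text = new_text


policy = Policy(
    name="no_impersonation",
    text="Accounts may not claim to be another real person or brand.",
    output_fields=["assessment", "impersonated_entity", "confidence"],
)

# Revising the policy keeps the old wording in the version history.
policy.update(
    "Accounts may not claim to be another real person, brand, or organization."
)
print(len(policy.versions))  # -> 1
```

In this sketch, evaluating a policy against a curated dataset would amount to running each test item through the policy and comparing the returned custom output fields to expected labels.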