# Tuteliq — Complete API Reference for AI Agents

> AI-powered child safety API. Detect grooming, bullying, self-harm, fraud, radicalisation, and 10+ harms across text, voice, image, and video.

Base URL: https://api.tuteliq.ai
Docs: https://docs.tuteliq.ai
MCP Endpoint: https://api.tuteliq.ai/mcp

## Authentication

All requests require an API key, passed either in an `Authorization: Bearer <api_key>` header or an `x-api-key` header.

## Detection Endpoints

All detection endpoints accept POST with a JSON body:

| Endpoint | Path | Description |
|----------|------|-------------|
| Unsafe Content | POST /api/v1/safety/unsafe | Detect across all 9 KOSA harm categories |
| Bullying | POST /api/v1/safety/bullying | Bullying and harassment detection |
| Grooming | POST /api/v1/safety/grooming | Conversation-level grooming pattern analysis |
| Social Engineering | POST /api/v1/fraud/social-engineering | Pretexting, impersonation, urgency manipulation |
| App Fraud | POST /api/v1/fraud/app-fraud | Fake investments, phishing apps, subscription traps |
| Romance Scam | POST /api/v1/fraud/romance-scam | Love-bombing, financial requests, identity deception |
| Mule Recruitment | POST /api/v1/fraud/mule-recruitment | Easy-money offers, account sharing, laundering |
| Gambling Harm | POST /api/v1/safety/gambling-harm | Underage gambling, addiction patterns, predatory odds |
| Coercive Control | POST /api/v1/safety/coercive-control | Isolation, financial control, surveillance, threats |
| Vulnerability Exploitation | POST /api/v1/safety/vulnerability-exploitation | Targeting vulnerable individuals |
| Radicalisation | POST /api/v1/safety/radicalisation | Extremist rhetoric, recruitment patterns |
| Multi-Endpoint | POST /api/v1/analyse/multi | Run up to 10 classifiers in one call |

## Request Body (Detection)

```json
{
  "text": "content to analyze",
  "context": {
    "ageGroup": "13-15",
    "language": "en",
    "platform": "Discord",
    "sender_trust": "verified",
    "sender_name": "School Admin",
    "conversation_history": [
      { "sender": "user1", "content": "previous message" }
    ]
  },
  "include_evidence": true,
  "support_threshold": "high",
  "external_id": "your-tracking-id",
  "customer_id": "your-customer-id"
}
```

## Context Fields

| Field | Type | Effect |
|-------|------|--------|
| ageGroup | string | Age-calibrated scoring: "under 10", "10-12", "13-15", "16-17", "under 18" |
| language | string | ISO 639-1 code. Auto-detected if omitted. 27 languages supported. |
| platform | string | Platform name (Discord, Roblox, WhatsApp). Adjusts for platform norms. |
| conversation_history | array | Prior messages for multi-turn pattern detection. |
| sender_trust | string | "verified", "trusted", or "unknown". Verified suppresses AUTH_IMPERSONATION. |
| sender_name | string | Sender identifier for impersonation scoring. |

## support_threshold

Controls when crisis helplines are included in responses:

- "low" — include for Low severity and above
- "medium" — include for Medium and above
- "high" (default) — include for High and above
- "critical" — include only for Critical severity

Critical severity ALWAYS includes support resources, regardless of threshold.

## Detection Response Shape

```json
{
  "detected": true,
  "level": "high",
  "risk_score": 0.85,
  "confidence": 0.92,
  "categories": [
    { "tag": "GROOMING_TRUST_BUILDING", "label": "Grooming Trust Building", "confidence": 0.88 }
  ],
  "evidence": [
    { "text": "don't tell your parents", "tactic": "SECRECY_REQUEST", "weight": 0.9 }
  ],
  "rationale": "Human-readable explanation of why the content was flagged.",
  "recommended_action": "Escalate to safeguarding team",
  "language": "en",
  "language_status": "stable",
  "age_calibration": { "applied": true, "age_group": "13-15", "multiplier": 1.0 },
  "support": { "helplines": [], "guidance": "..." }
}
```

## Multi-Endpoint Analysis

POST /api/v1/analyse/multi

Valid endpoint values: bullying, grooming, unsafe, social-engineering, app-fraud, romance-scam, mule-recruitment, gambling-harm, coercive-control, vulnerability-exploitation, radicalisation

When vulnerability-exploitation is included, its cross-endpoint modifier adjusts severity scores across all other results.

## Media Endpoints

| Endpoint | Path | Accepts |
|----------|------|---------|
| Voice | POST /api/v1/safety/voice | Audio files (mp3, wav, ogg, m4a — max 25MB) |
| Image | POST /api/v1/safety/image | Image files (jpg, png, gif, webp) |
| Video | POST /api/v1/safety/video | Video files (mp4, webm, avi — max 100MB, 10 min) |

## Guidance & Reporting

| Endpoint | Path | Description |
|----------|------|-------------|
| Action Plan | POST /api/v1/guidance/action-plan | Age-appropriate guidance for child, parent, or professional |
| Incident Report | POST /api/v1/reports/incident | Structured report for law enforcement or safeguarding teams |
| Emotion Analysis | POST /api/v1/analysis/emotions | Emotional well-being analysis |

## Age Groups

| Value | Sensitivity |
|-------|-------------|
| "under 10" | Highest — almost any harmful exposure flagged at elevated severity |
| "10-12" | High — distinguishes normal peer friction from targeted harassment |
| "13-15" | Moderate — accounts for teen communication while alert to genuine risk |
| "16-17" | Adjusted — recognizes autonomy while protecting against exploitation |

## KOSA Harm Categories

1. Self-Harm & Suicidal Ideation
2. Bullying & Harassment
3. Sexual Exploitation
4. Substance Use
5. Eating Disorders
6. Depression & Anxiety
7. Compulsive Usage
8. Violence
9. Grooming

## Credit Costs

Most detection endpoints: 1 credit. Voice/Image: 3. Video: 10. Document: max(3, pages × endpoints). Multi: sum of the selected endpoints' costs. Age verification: 5. Identity verification: 10.
## Document Analysis

### POST /api/v1/safety/document

Upload a PDF for multi-endpoint safety analysis. Extracts text from each page, runs the chosen detection endpoints in parallel, and returns per-page results with an overall risk assessment. Zero retention — no document data is stored after the response.

Tier: Indie and above. Credits: max(3, pages_analyzed × endpoint_count).

Parameters (multipart/form-data):

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| file | file | Yes | PDF file (max 50 MB, max 100 pages) |
| endpoints | string | No | JSON array of endpoint names. Default: ["unsafe","coercive-control","radicalisation"] |
| file_id | string | No | Your identifier for the file (echoed back) |
| age_group | string | No | "under 10", "10-12", "13-15", "16-17", or "under 18" |
| language | string | No | ISO 639-1 code. Auto-detected if omitted |
| platform | string | No | Platform name for context-aware scoring |

Available endpoints: unsafe, bullying, grooming, social-engineering, coercive-control, radicalisation, romance-scam, mule-recruitment.

Response includes: document_hash (SHA-256 for chain-of-custody), total_pages, pages_analyzed, page_results (per-page detection), overall_risk_score (0.0–1.0), overall_severity, flagged_pages, credits_used.

Error codes: ANALYSIS_6010 (extraction failed), ANALYSIS_6011 (exceeds 100 pages), FILE_MISSING, FILE_INVALID_TYPE, FILE_TOO_LARGE.

## Age & Identity Verification

### POST /api/v1/verify/age

Verify a user's age via document analysis, biometric estimation, or both. Returns a verified age range and a confidence score.

Tier: Pro and above. Credits: 5 per verification.

Parameters (multipart/form-data):

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| document | file | Depends | Government-issued ID image (JPEG/PNG, max 10MB). Required for "document" and "combined" methods. |
| selfie | file | Depends | Front-facing selfie (JPEG/PNG). Required for "biometric" and "combined" methods. |
| method | string | Yes | "document", "biometric", or "combined" |

Response:

| Field | Type | Description |
|-------|------|-------------|
| verified | boolean | Whether age verification succeeded |
| estimated_age | integer | Best estimate of the user's age |
| age_range | string | "under-10", "10-12", "13-15", or "16-17" |
| is_minor | boolean | Whether the user is under 18 |
| confidence | float | Confidence score (0.0–1.0) |
| method | string | Method used |
| document_type | string | "passport", "driving_licence", "national_id", or "residence_permit" |
| document_country | string | ISO 3166-1 alpha-2 country code |
| biometric_age | integer | Age estimated from the selfie (if provided) |
| document_age | integer | Age from the document DOB (if provided) |
| credits_used | integer | Credits consumed |

### POST /api/v1/verify/identity

Full identity verification with document authentication, face matching, and liveness detection.

Tier: Business and above. Credits: 10 per verification.
Parameters (multipart/form-data):

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| document | file | Yes | Government-issued ID image (JPEG/PNG, max 10MB) |
| selfie | file | Yes | Front-facing selfie for face matching and liveness |

Response:

| Field | Type | Description |
|-------|------|-------------|
| verified | boolean | All checks passed |
| match_score | float | Face match between document and selfie (0.0–1.0) |
| liveness_passed | boolean | Liveness check passed (not a photo/screen/mask/deepfake) |
| document_authenticated | boolean | Document passed authenticity checks (MRZ, tamper, format) |
| estimated_age | integer | Age from document DOB |
| age_range | string | Age bracket or "adult" |
| is_minor | boolean | Under 18 |
| confidence | float | Overall confidence (0.0–1.0) |
| document_type | string | Document type |
| document_country | string | ISO country code |
| flags | array | Warnings: "document_expiring_soon", "low_image_quality", etc. |
| checks.mrz_valid | boolean | Machine-readable zone valid |
| checks.tamper_detected | boolean | Tamper evidence found |
| checks.face_match | boolean | Face matches document |
| checks.liveness | boolean | Liveness confirmed |
| checks.document_expired | boolean | Document is expired |
| credits_used | integer | Credits consumed |

### POST /api/v1/verify/session

Manage multi-step verification sessions with server-issued liveness challenges (gaze direction, head turn).

Tier: Pro and above.
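An identity verification response can be reduced to a pass/fail decision plus a list of failing checks. A minimal sketch, assuming the documented response fields; the helper name and the 0.8 face-match threshold are illustrative choices, not part of the API.

```javascript
// Illustrative interpreter for a /api/v1/verify/identity response.
// Field names follow the documented response; the helper itself and
// the match_score threshold (0.8) are assumptions.
function summariseIdentityResult(result) {
  const failures = [];
  if (!result.document_authenticated) failures.push('document');
  if (result.checks && result.checks.tamper_detected) failures.push('tamper');
  if (!result.liveness_passed) failures.push('liveness');
  if (result.match_score < 0.8) failures.push('face_match'); // threshold is an assumption
  return {
    verified: result.verified === true && failures.length === 0,
    failures,
    isMinor: result.is_minor === true,
  };
}

// Example response shaped like the documented fields:
const summary = summariseIdentityResult({
  verified: true,
  match_score: 0.94,
  liveness_passed: true,
  document_authenticated: true,
  is_minor: false,
  checks: { mrz_valid: true, tamper_detected: false, face_match: true, liveness: true, document_expired: false },
});
console.log(summary); // { verified: true, failures: [], isMinor: false }
```

Collecting the failing checks separately, rather than only reading `verified`, makes it easier to decide between a hard reject (tamper detected) and a retry prompt (low match score or failed liveness).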
### Integration Pattern

Verify age once, then pass the confirmed age group to all detection calls:

```javascript
const age = await tuteliq.verifyAge({ selfie, method: 'biometric' })

const safety = await tuteliq.detectUnsafe({
  content: message,
  context: { age_group: age.age_range } // Calibrated risk scoring
})
```

## MCP Server

Connect via Streamable HTTP: https://api.tuteliq.ai/mcp

NPM package for stdio: @tuteliq/mcp

Resources available:

- tuteliq://documentation — Quick reference
- tuteliq://context-fields — All parameters and context fields
- tuteliq://kosa-categories — KOSA harm categories
- tuteliq://age-groups — Age calibration reference
- tuteliq://credit-costs — Per-endpoint pricing
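For callers using the raw HTTP API rather than MCP, the pieces above combine into a single detection request. A minimal sketch, assuming the documented base URL, auth header, and request body; the `buildDetectionRequest` helper is illustrative, not part of any SDK.

```javascript
// Illustrative request builder for a raw HTTP detection call.
// URL, headers, and body fields follow the documentation above;
// the helper itself is an assumption.
function buildDetectionRequest(apiKey, text, context) {
  return {
    url: 'https://api.tuteliq.ai/api/v1/safety/unsafe',
    options: {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${apiKey}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({ text, context, include_evidence: true }),
    },
  };
}

const req = buildDetectionRequest(process.env.TUTELIQ_API_KEY || 'demo-key', 'hello', {
  ageGroup: '13-15',
  language: 'en',
  platform: 'Discord',
});

// To send: const res = await fetch(req.url, req.options).then(r => r.json());
console.log(req.url);
```

Separating request construction from the `fetch` call keeps the auth header and body shape easy to unit-test without hitting the live API.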