Platform Architecture

Four engines.
One platform.

SEA FAN AI runs on four parallel modules — each triggered differently, each optimized for a distinct workload. Together they drive enterprise-grade AI operations across Southeast Asia.

  • 410M tokens per client per day
  • 78% batch processing share
  • 35+ enterprise clients
  • $1M+ monthly revenue potential
01 · Module A · User-triggered

Real-time Conversation Engine

Delivers instant, context-aware responses by injecting full conversation history and enterprise knowledge into every reply. Handles compliance-grade customer service at scale with sub-second latency.

  • < 800ms average response latency
  • 200K context window tokens
  • 99.9% uptime SLA

Capabilities

  • Full-context injection per conversation turn
  • Compliance-aware response filtering
  • Real-time sentiment analysis and escalation routing
  • Multi-channel support (web, app, WhatsApp, LINE)
  • Live agent handoff with full context transfer
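The per-turn context injection above can be sketched as a prompt-assembly step: pack the system policy, knowledge-base snippets, and as much recent history as fits into the 200K-token budget. This is a minimal illustration, not the production API; `build_prompt`, `estimate_tokens`, and the 4-chars-per-token heuristic are all assumptions.

```python
# Sketch: assemble one request from conversation history, enterprise
# knowledge snippets, and policy rules, trimming the oldest turns first
# when the 200K-token context budget would be exceeded.
# All names here are illustrative assumptions, not the production API.

CONTEXT_BUDGET = 200_000  # tokens available per request


def estimate_tokens(text: str) -> int:
    # Crude heuristic: ~4 characters per token. A real system would
    # use the model's own tokenizer instead.
    return max(1, len(text) // 4)


def build_prompt(system_policy: str, kb_snippets: list[str],
                 history: list[tuple[str, str]], user_msg: str) -> list[dict]:
    """Return a chat-message list that fits inside the context budget."""
    fixed = [system_policy, user_msg] + kb_snippets
    budget = CONTEXT_BUDGET - sum(estimate_tokens(t) for t in fixed)

    # Walk history newest-first, keeping turns while they still fit.
    kept: list[tuple[str, str]] = []
    for role, text in reversed(history):
        cost = estimate_tokens(text)
        if budget - cost < 0:
            break
        budget -= cost
        kept.append((role, text))
    kept.reverse()

    messages = [{"role": "system",
                 "content": system_policy + "\n\n" + "\n".join(kb_snippets)}]
    messages += [{"role": r, "content": t} for r, t in kept]
    messages.append({"role": "user", "content": user_msg})
    return messages
```

Trimming oldest turns first preserves the most recent context, which matters most for a live support reply.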
02 · Module B · Scheduled, runs automatically

Nightly Batch Processing Engine

Processes the full day's conversation corpus overnight. Generates analytics reports, updates routing rules, refreshes knowledge base entries, and surfaces anomalies — all without human intervention.

  • 78% share of daily token load
  • ~320M tokens processed nightly
  • < 4h full corpus processing time

Capabilities

  • Full-day conversation analysis and KPI extraction
  • Automated report generation for enterprise dashboards
  • Dynamic routing rule and policy updates
  • Anomaly detection and escalation flagging
  • Knowledge base gap identification and patching
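One way to picture the overnight run: greedily pack whole conversations into context-window-sized chunks, then hand each chunk to an analysis pass. A minimal sketch under assumed names — `chunk_corpus` and the token heuristic are illustrative, not the production pipeline.

```python
# Sketch: split a day's conversation corpus into context-window-sized
# chunks for overnight batch analysis. Greedy packing keeps each
# conversation intact inside a single chunk. Names are illustrative
# assumptions; a real pipeline would call the model API per chunk.

CONTEXT_BUDGET = 200_000


def chunk_corpus(conversations: list[str],
                 budget: int = CONTEXT_BUDGET) -> list[list[str]]:
    """Greedily pack whole conversations into token-budgeted chunks."""
    chunks: list[list[str]] = []
    current: list[str] = []
    used = 0
    for conv in conversations:
        cost = max(1, len(conv) // 4)  # rough ~4 chars/token estimate
        if current and used + cost > budget:
            chunks.append(current)     # flush the full chunk
            current, used = [], 0
        current.append(conv)
        used += cost
    if current:
        chunks.append(current)
    return chunks
```

Keeping each conversation whole within a chunk is what lets the KPI-extraction pass see complete threads rather than fragments.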
03 · Module C · Event-triggered automation

Proactive Outreach Engine

Monitors business events and automatically generates personalized outreach — order confirmations, shipping updates, cart recovery messages, and churn-prevention campaigns — at enterprise scale.

  • 3–5× higher open rate vs. generic blasts
  • < 2min event-to-message latency
  • 11 languages per campaign

Capabilities

  • Order and logistics notification generation
  • Churn-risk user retention campaigns
  • Personalized upsell and cross-sell messaging
  • A/B variant generation at scale
  • Delivery channel orchestration (SMS, email, push, chat)
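The event-to-message flow above amounts to an event-dispatch pattern: each business event type maps to a handler that drafts the outreach. A minimal sketch with made-up event names and handlers; the real engine would generate personalized copy with the model and fan it out to SMS, email, push, and chat.

```python
# Sketch: route business events to message-generation handlers.
# Event names, payload fields, and handlers are illustrative assumptions.

from typing import Callable

HANDLERS: dict[str, Callable[[dict], str]] = {}


def on_event(name: str):
    """Register a handler for a business event type."""
    def register(fn: Callable[[dict], str]) -> Callable[[dict], str]:
        HANDLERS[name] = fn
        return fn
    return register


@on_event("order_shipped")
def shipping_update(event: dict) -> str:
    return f"Hi {event['customer']}, order {event['order_id']} has shipped."


@on_event("cart_abandoned")
def cart_recovery(event: dict) -> str:
    return f"Hi {event['customer']}, you left {event['items']} item(s) in your cart."


def dispatch(event: dict) -> str:
    # In production this is where a personalized draft is generated
    # and handed to the channel orchestrator (SMS, email, push, chat).
    return HANDLERS[event["type"]](event)
```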
04 · Module D · Brand-update triggered

Multilingual Knowledge Base

Maintains a synchronized, authoritative knowledge base across 11 Southeast Asian languages. When brand content changes, the engine propagates updates to all language variants automatically — no manual translation required.

  • 11 SEA languages supported
  • < 6h full sync after brand update
  • 98.4% translation consistency score

Capabilities

  • Thai, Vietnamese, Malay, Indonesian, Filipino, and 6 more
  • Automatic propagation on brand content change
  • Terminology consistency enforcement across variants
  • Regional compliance variant management
  • Version-controlled knowledge snapshots with rollback
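The fan-out-on-update behavior can be sketched as follows: a source edit appends a new versioned snapshot to every language variant, and rollback drops the latest snapshot. The `translate` stub, class names, and the language subset are illustrative assumptions (only the five languages named above are listed; the remaining six plug in the same way).

```python
# Sketch: propagate a brand-content update to all language variants,
# versioning each snapshot so rollback is possible. translate() is a
# stand-in for the model-backed translation step.

# Five of the 11 supported languages named on this page; the other six
# plug in identically.
SEA_LANGUAGES = ["th", "vi", "ms", "id", "fil"]


def translate(text: str, lang: str) -> str:
    # Placeholder: a real system calls the translation model here.
    return f"[{lang}] {text}"


class KnowledgeBase:
    def __init__(self) -> None:
        # entry_id -> lang -> versioned snapshots, latest last
        self.entries: dict[str, dict[str, list[str]]] = {}

    def update(self, entry_id: str, source_text: str) -> None:
        """Fan a source update out to every language variant."""
        variants = self.entries.setdefault(entry_id, {})
        for lang in SEA_LANGUAGES:
            variants.setdefault(lang, []).append(translate(source_text, lang))

    def rollback(self, entry_id: str, lang: str) -> str:
        """Drop the latest snapshot and return the previous one."""
        versions = self.entries[entry_id][lang]
        versions.pop()
        return versions[-1]
```

Appending rather than overwriting is what makes the version-controlled snapshots and rollback listed above possible.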
Technical Foundation

Why Claude's 200K context?

Our four-module architecture is only possible because of Claude's 200K-token context window. Nightly batch processing ingests the day's conversation corpus in full-context passes, each one packing complete conversation threads and policy documents into a single window — and no other model we have evaluated handles this reliably at production scale.

Beyond context length, Claude's instruction-following consistency and native Southeast Asian language capability are non-negotiable for our compliance-grade enterprise clients.
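The throughput this implies can be sanity-checked from the figures on this page alone — a back-of-envelope sketch, not a production benchmark:

```python
# Back-of-envelope check using this page's own numbers: ~320M tokens
# processed nightly, a 200K-token context window, and a < 4h batch window.

NIGHTLY_TOKENS = 320_000_000
CONTEXT_WINDOW = 200_000
BATCH_HOURS = 4

passes = NIGHTLY_TOKENS // CONTEXT_WINDOW   # full-context passes per night
passes_per_hour = passes / BATCH_HOURS      # sustained throughput required
```

That works out to roughly 1,600 full-context passes per night, or about 400 per hour sustained across the 4-hour window.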

200K Context Window

Ingest full conversation history and policy documents in a single pass

Long-text Stability

Consistent output quality across the full context length — critical for batch jobs

Instruction Following

Compliance-grade adherence to enterprise policy rules and tone guidelines

11 SEA Languages

Native-quality Thai, Vietnamese, Malay, Indonesian, Filipino, and more

Ready to deploy?

Talk to our team about which modules fit your enterprise workflow.