Inbox Triage
AI-assisted customer support triage

Triage your inbox the way a senior support lead would.

Paste in messy customer messages — emails, chats, tickets — and Inbox Triage classifies intent, calibrates urgency, drafts ready-to-send replies, and learns from your corrections inside the session.

7
intent classes
3
urgency tiers
3
AI providers
<3s
per batch
01 · Workflow

From paste to draft in three steps

01

Ingest

Paste a single message or 150 of them. Upload .txt or .csv. The pipeline cleans, splits on blank lines or pipes, and caps oversize entries before reasoning.
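The split-and-cap step might look like this minimal sketch (the 2,000-character cap is an assumption; the page doesn't state the real limit):

```python
MAX_CHARS = 2_000  # assumed cap; the real limit isn't documented here

def split_messages(raw: str) -> list[str]:
    """Split pasted text into individual messages.

    Splits on blank lines or pipe characters, trims whitespace,
    drops empties, and truncates oversize entries before any
    model call sees them.
    """
    # Normalize pipe-separated input onto blank-line boundaries first.
    normalized = raw.replace("|", "\n\n")
    parts = [p.strip() for p in normalized.split("\n\n")]
    return [p[:MAX_CHARS] for p in parts if p]
```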

02

Triage

preprocess → classify → reason → recommend → draft, all in one structured call per chunk. Strict JSON via tool-use — no markdown parsing, no retry loops on malformed output.
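A tool-use contract of that shape could be declared as a JSON schema like the following (tool and field names are illustrative, not the app's actual contract):

```python
# Illustrative tool schema; the enums mirror the seven intent classes
# and three urgency tiers described on this page.
TRIAGE_TOOL = {
    "name": "record_triage",
    "description": "Record the triage decision for one customer message.",
    "input_schema": {
        "type": "object",
        "properties": {
            "intent": {
                "type": "string",
                "enum": ["support", "sales", "complaint", "spam",
                         "billing", "feedback", "other"],
            },
            "urgency": {"type": "string", "enum": ["low", "medium", "high"]},
            "reasoning": {"type": "string"},
            "recommended_action": {"type": "string"},
            "draft_reply": {"type": "string"},
        },
        "required": ["intent", "urgency", "reasoning",
                     "recommended_action", "draft_reply"],
    },
}
```

Because the model is forced to call this tool, its output arrives as arguments matching the schema, so there is nothing to scrape out of markdown.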

03

Improve

Edit any field inline and apply. Your corrections become few-shot examples for the next batch — the model adapts to your judgment without any retraining.

02 · Capabilities

Calibrated, structured, auditable

Every classification grounded in the message's specific signal. Every reply in the company's voice. Every decision logged with the model, latency, and token counts that produced it.

Seven-class intent

support · sales · complaint · spam · billing · feedback · other. Each label grounded in the specific signal that drove it.

Rule-anchored urgency

High reserved for production outages, churn threats, and payment failures. Calibrated to business impact, not relative to the batch.

Ready-to-send drafts

Warm, professional, no placeholders. When more info is needed, the draft asks for it — it never invents facts.

Decision log per call

Every result records model, latency, tokens in/out, cache hits, and the few-shot examples that influenced it. Click any row to expand.
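One way to model such a log record, with illustrative field names (the real log schema isn't shown here):

```python
from dataclasses import dataclass, field

@dataclass
class DecisionLog:
    """One audit record per model call; field names are assumptions."""
    model: str                # model id that produced the result
    latency_ms: float         # wall-clock time for the call
    tokens_in: int
    tokens_out: int
    cache_hit: bool           # whether the cached system prompt was reused
    few_shot_examples: list[str] = field(default_factory=list)
```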

Session memory

Last 5 corrections injected as few-shot examples on the next call. No retraining, no embeddings, just direct prompt steering.
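A minimal sketch of that mechanism, assuming a simple in-memory session store:

```python
from collections import deque

# Assumed shapes; the real session store isn't shown on this page.
corrections: deque = deque(maxlen=5)  # only the last 5 survive

def record_correction(message: str, corrected: dict) -> None:
    """Keep a user's inline edit as a candidate few-shot example."""
    corrections.append({"message": message, "label": corrected})

def few_shot_block() -> str:
    """Render stored corrections as few-shot examples for the next prompt."""
    return "\n\n".join(
        f"Message: {ex['message']}\nCorrect triage: {ex['label']}"
        for ex in corrections
    )
```

Because `deque(maxlen=5)` evicts the oldest entry automatically, the prompt always carries the five most recent corrections and nothing older.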

Streamed progress

Server-Sent Events ship per-batch results as they complete. The first triage row lands in the dashboard in under 3 seconds even for 150-message runs.
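Each per-batch result can be shipped as a standard text/event-stream frame; a minimal formatter sketch:

```python
import json

def sse_event(event: str, payload: dict) -> str:
    """Format one Server-Sent Events frame.

    A frame is an optional "event:" line, a "data:" line, and a
    blank line terminator, per the text/event-stream format.
    """
    return f"event: {event}\ndata: {json.dumps(payload)}\n\n"
```

A streaming endpoint would yield one such frame per completed batch, so the browser renders rows as they arrive instead of waiting for the full run.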

03 · AI flexibility

Bring your own model

One pipeline, three providers. Switch with a single environment variable — the JSON contract, the validation, and the UI all stay identical.
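A provider switch of that kind could be as small as this sketch ("AI_PROVIDER" is a hypothetical variable name; check the project's configuration for the real one):

```python
import os

VALID_PROVIDERS = {"anthropic", "openai-compatible", "local"}

def pick_provider() -> str:
    """Read the provider from the environment, defaulting to Anthropic."""
    provider = os.environ.get("AI_PROVIDER", "anthropic").lower()
    if provider not in VALID_PROVIDERS:
        raise ValueError(f"unknown provider: {provider!r}")
    return provider
```

Everything downstream of this choice (schema, validation, UI) stays untouched; only the client construction branches on the returned name.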

Hosted · Best quality

Anthropic

Claude Opus / Sonnet / Haiku

  • tool_choice forces strict JSON
  • Prompt caching on the system prompt
  • Sub-2s per 25-message batch
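Combining forced tool_choice with prompt caching in the Anthropic Messages API might look like this sketch; the model id, tool name, and prompt text are illustrative, and no network call is made here:

```python
SYSTEM_PROMPT = "You are a senior support lead triaging customer messages."

triage_tool = {
    "name": "record_triage",  # illustrative; the real schema lives in the app
    "description": "Record the triage decision for one message.",
    "input_schema": {"type": "object", "properties": {}},
}

request_kwargs = {
    "model": "claude-sonnet-4-5",  # assumed model id
    "max_tokens": 1024,
    # cache_control marks the static system prompt for prompt caching,
    # so repeated batches reuse it rather than re-paying for those tokens.
    "system": [{
        "type": "text",
        "text": SYSTEM_PROMPT,
        "cache_control": {"type": "ephemeral"},
    }],
    "tools": [triage_tool],
    # Forcing tool_choice guarantees the reply is a structured tool_use
    # block: strict JSON, never free-form markdown.
    "tool_choice": {"type": "tool", "name": "record_triage"},
    "messages": [{"role": "user", "content": "Triage: 'My site is down!'"}],
}
# anthropic.Anthropic().messages.create(**request_kwargs) would then
# return the arguments as validated JSON in a tool_use content block.
```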
Hosted · Free tier

OpenAI-compatible

Groq · OpenRouter · OpenAI · Cerebras

  • response_format: json_object
  • Auto-retry on 429 with backoff
  • $0 with Groq's free tier
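The retry behavior could be sketched like this, with an illustrative error type standing in for the SDK's real 429 exception:

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for an SDK's HTTP 429 error type."""

def with_backoff(call, max_attempts: int = 4, base_delay: float = 1.0):
    """Retry `call` on 429, doubling the wait each attempt, with jitter."""
    for attempt in range(max_attempts):
        try:
            return call()
        except RateLimitError:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the 429 to the caller
            # Exponential backoff plus random jitter to avoid thundering herd.
            time.sleep(base_delay * 2 ** attempt + random.random() * base_delay)
```

Jitter matters on free tiers: without it, every retried batch hammers the rate limiter at the same instant.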
Self-hosted · Private

Local

Ollama · llama.cpp · LM Studio · vLLM

  • Tolerant JSON extractor
  • Strips Qwen 3 thinking tags
  • Runs on your laptop, no API key
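A tolerant extractor for local-model output might look like this sketch: strip thinking tags, then take the outermost braces:

```python
import json
import re

def extract_json(text: str) -> dict:
    """Pull the first JSON object out of a local model's raw output.

    Strips <think>...</think> blocks (emitted by e.g. Qwen 3) and any
    surrounding prose or code fences, then parses the outermost braces.
    """
    text = re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL)
    start = text.find("{")
    end = text.rfind("}")
    if start == -1 or end == -1:
        raise ValueError("no JSON object found in model output")
    return json.loads(text[start : end + 1])
```

Outermost-brace slicing is deliberately forgiving: local models often wrap the object in chat pleasantries or markdown fences that a strict parser would choke on.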

Ship cleaner support today.

Sign in with Google, paste customer messages, hit Analyze. No setup, no credit card.