Product
AI Support Pipeline
End-to-end system: ticket classification, duplicate detection, intelligent routing to the right team and person, auto-generated FAQ, and a conversational query interface.
Side build · AI Tooling
Built an AI-powered support routing system for a US fintech during a one-week product audit. Automated classification, deduplication, team routing, and workload balancing across 2000+ monthly tickets.
Overview
Client & context
Fast-growing company processing ~2000 support tickets per month, with a 10-person support team spread across multiple specialized squads.
My role
One-week engagement: identified the bottleneck, designed the system, built a working prototype with OpenAI API, and validated it against real ticket data.
The problem
Manual triage
Every ticket had to be manually categorized, then assigned to the right team, then balanced across agents. One person spent a significant chunk of their time just routing, not solving customer problems.
Downstream waste
Duplicate tickets clogged queues. Some agents were overloaded while others sat idle. Many questions already had answers buried somewhere, but there was no FAQ, no knowledge base, nothing.
The system
Each ticket passes through a multi-step AI pipeline: classify, deduplicate, route, and balance, then the same data feeds a live FAQ and a conversational query layer.
OpenAI API scans each incoming ticket, assigns a category from the company's taxonomy, detects near-duplicates, and flags tickets that already have known answers.
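The near-duplicate step can be sketched with a simple string-similarity baseline (the production version uses the OpenAI API for semantic matching; the `difflib` ratio and the 0.8 cutoff here are illustrative assumptions, not the real parameters):

```python
from difflib import SequenceMatcher

SIMILARITY_THRESHOLD = 0.8  # assumed cutoff; in practice tuned against real tickets


def find_near_duplicates(new_ticket: str, open_tickets: list[str]) -> list[int]:
    """Return indices of open tickets that look like near-duplicates of the new one."""
    matches = []
    for i, existing in enumerate(open_tickets):
        ratio = SequenceMatcher(None, new_ticket.lower(), existing.lower()).ratio()
        if ratio >= SIMILARITY_THRESHOLD:
            matches.append(i)
    return matches
```

A semantic model catches paraphrases this baseline misses, but the flagging logic around it is the same.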
If the model's confidence score for classification or deduplication falls below 95%, the system holds the ticket in a dedicated verification queue for human review rather than routing it automatically.
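The gate itself is deliberately simple: any sub-threshold score holds the ticket. A minimal sketch (function and queue names are illustrative):

```python
CONFIDENCE_THRESHOLD = 0.95  # below this, a human reviews before anything is routed


def route_or_hold(classification_conf: float, dedup_conf: float) -> str:
    """Auto-route only when every model score clears the bar; otherwise hold."""
    if min(classification_conf, dedup_conf) < CONFIDENCE_THRESHOLD:
        return "verification_queue"  # held for human review
    return "auto_route"
```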
High-confidence tickets are distributed evenly across the correct specialized squad based on current load, preventing one person from drowning while others wait.
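Balancing reduces to picking the least-loaded agent in the target squad. A sketch of that assignment step (the in-memory load dict stands in for whatever the real system tracks; ties break alphabetically for determinism):

```python
def assign_and_update(squad_load: dict[str, int]) -> str:
    """Assign a ticket to the least-loaded agent in the squad and bump their count."""
    agent = min(sorted(squad_load), key=squad_load.get)  # least loaded; ties alphabetical
    squad_load[agent] += 1
    return agent
```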
Recurring patterns are surfaced to auto-generate a structured FAQ. A conversational chatbot lets the team query the ticket base directly, e.g. "How many refund requests last month?"
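Under the hood, a question like "How many refund requests last month?" compiles down to a filter over the classified ticket base. The natural-language translation is the LLM's job; the kind of query it produces looks roughly like this (field names are assumptions):

```python
from datetime import date


def count_tickets(tickets: list[dict], category: str, year: int, month: int) -> int:
    """Count classified tickets in a category for a given month."""
    return sum(
        1
        for t in tickets
        if t["category"] == category
        and t["created"].year == year
        and t["created"].month == month
    )
```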
Execution
Product thinking
Started by observing the support workflow end-to-end. The problem wasn't response quality; it was everything before the response: sorting, routing, finding existing answers. That became the scope.
Quality metrics
Didn't just guess whether the prompts worked: built an evaluation dataset of 200 past tickets and ran test scripts measuring hallucination rate and categorization accuracy across prompt iterations.
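The core metric in that harness is straightforward: the fraction of tickets where the model's category matches the human label. A minimal sketch:

```python
def categorization_accuracy(predictions: list[str], gold_labels: list[str]) -> float:
    """Fraction of tickets where the predicted category matches the human label."""
    assert len(predictions) == len(gold_labels), "one prediction per labeled ticket"
    correct = sum(p == g for p, g in zip(predictions, gold_labels))
    return correct / len(gold_labels)
```

Running this across prompt variants on the same 200-ticket set is what turns prompt tweaking into measurable iteration.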
Trust & transparency
To get support agents to trust the new tool, every routing decision included a citation (the exact part of the ticket that triggered the classification) and a one-click human override for when the AI was wrong.
Co-construction
Involved the support team from day one, understanding their workflow, getting feedback on routing rules, adapting to how they actually work. Internal tools need the same product rigor as customer-facing ones: without buy-in, nothing ships.
Delivery
Delivered a functional prototype validated against real ticket data. By prototyping standalone first, we de-risked the logic and the human-in-the-loop workflows before writing any integration code into their live system.
Result
Manual triage eliminated
Tickets classified and routed in seconds instead of ~3 minutes of manual sorting each, freeing the equivalent of ~15 hours per week of pure routing work.
Classification accuracy
Automated category assignment matched the company's existing taxonomy with high accuracy, validated against a sample of manually classified tickets.
Duplicate tickets flagged
Near-duplicate detection surfaced ~30% of incoming tickets as repeats or variations of existing issues, reducing queue clutter and enabling a self-serve FAQ.
Takeaway
Don't wire into a live system on day one. Build a working prototype on the side, validate it with real data, prove the logic works, then connect. This approach is faster, less risky, and makes stakeholder buy-in much easier because you can show results before asking anyone to change their workflow.
The biggest risk with AI automation isn't technical, it's adoption. If you build something and drop it on a team, they'll resist it. Co-constructing with operators from the start creates something that actually fits their workflow, and people who feel ownership don't push back on change.