Customer support teams at SuperEvent were manually answering repetitive questions every day — even when the answers existed in the Knowledge Base.
This project delivered a production-grade, RAG-based AI Agent system inside HelpCrunch, built entirely from scratch: KB
architecture, system prompt engineering, escalation logic, and a continuous optimisation loop grounded in real conversation data.
Across 5 live event websites (6 brands, 100+ services), support staff manually handled high volumes of repetitive Tier-1 inquiries: parking, age limits, booking rules, cancellations, and last-minute availability.
Each answer required finding the right KB article and pasting it into live chat. Average manual handling time: 13 min 36 sec. No 24/7 coverage. No consistent response quality. No structured escalation logic.
Designed and deployed a three-layer, RAG-based architecture in HelpCrunch, with every component owned end-to-end:
Knowledge Base Architecture (140+ articles)
Audited, restructured, and expanded the KB from scratch — one concept per article, metadata tagging, and gap analysis from real chat logs. KB quality is the single largest driver of AI quality: this phase received the most time and attention.
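The "one concept per article, metadata-tagged, gap-analysed" structure can be sketched as a simple record. Field names, tags, and the gap-analysis helper below are illustrative, not the actual HelpCrunch schema:

```python
from dataclasses import dataclass, field

@dataclass
class KBArticle:
    """One concept per article: a single, atomic answer unit."""
    article_id: str
    title: str                  # phrased the way customers ask it
    body: str                   # the one answer this article exists to give
    brand: str                  # which of the 6 brands it applies to
    tags: list = field(default_factory=list)  # e.g. ["parking", "tier-1"]

def find_kb_gaps(kb, escalation_tags):
    """Gap analysis: topics seen in escalated chats but covered by no article."""
    covered = {tag for article in kb for tag in article.tags}
    return set(escalation_tags) - covered

kb = [KBArticle("a1", "Where can I park?", "...", "BrandA", ["parking"])]
print(find_kb_gaps(kb, ["parking", "age-limit"]))  # {'age-limit'}
```

Keeping each article atomic and tagged is what makes gap analysis a mechanical set difference rather than a manual review.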
Prompt Engineering — 5 Custom System Prompts
Designed a four-part system prompt for each of the 5 agents: Role Definition, Knowledge Scope, Escalation Rules, and Tone & Format Guidelines — iteratively refined on real conversation data, not hypothetical test cases.
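A minimal sketch of how such a four-part prompt might be composed per agent. The section contents here are invented examples, not the production prompts:

```python
# The four sections, assembled in a fixed order for each of the 5 agents.
PROMPT_SECTIONS = ("role", "knowledge_scope", "escalation_rules", "tone_format")

def build_system_prompt(agent):
    """Concatenate the four prompt sections; fail loudly if any is missing."""
    missing = [s for s in PROMPT_SECTIONS if s not in agent]
    if missing:
        raise ValueError(f"incomplete prompt spec: {missing}")
    return "\n\n".join(agent[s] for s in PROMPT_SECTIONS)

agent = {
    "role": "You are the support agent for BrandA's event website.",
    "knowledge_scope": "Answer only from the BrandA Knowledge Base.",
    "escalation_rules": "If confidence is low or the topic is billing, escalate.",
    "tone_format": "Friendly and concise; answer in the customer's language.",
}
prompt = build_system_prompt(agent)
```

Keeping the four sections as separate fields lets each one be refined independently against real conversation data.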
Three-Layer Case Handling Flow
Every inquiry passes through: Triage (categorise + confidence check) → Knowledge (semantic KB search → draft → fact-check) → Safety (auto-send / Agent Assist / mandatory escalation). No dead ends — ever.
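The triage-knowledge-safety flow can be sketched as a routing function. The confidence thresholds and the injected helper callables are assumptions for illustration, not the production logic:

```python
AUTO_SEND, AGENT_ASSIST, ESCALATE = "auto_send", "agent_assist", "escalate"

def handle_inquiry(message, classify, search_kb, draft_and_check):
    # Layer 1 - Triage: categorise the inquiry and estimate confidence.
    category, confidence = classify(message)
    if confidence < 0.5:
        return ESCALATE                  # never a dead end
    # Layer 2 - Knowledge: semantic KB search, then a fact-checked draft.
    articles = search_kb(message, category)
    if not articles:
        return ESCALATE                  # KB gap -> improvement signal
    reply, passed_fact_check = draft_and_check(message, articles)
    # Layer 3 - Safety: route by how much trust the draft earned.
    if passed_fact_check and confidence >= 0.8:
        return AUTO_SEND
    if passed_fact_check:
        return AGENT_ASSIST              # human approves the AI draft
    return ESCALATE

route = handle_inquiry(
    "Where can I park?",
    classify=lambda m: ("parking", 0.92),
    search_kb=lambda m, c: ["parking-rules"],
    draft_and_check=lambda m, a: ("See our parking guide.", True),
)
print(route)  # auto_send
```

Every branch terminates in one of three named outcomes, which is what makes "no dead ends" a structural property rather than a policy.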
KB Gap → Improvement Loop
Every escalation is a data signal. Logged patterns drove direct KB updates — new articles, Custom Answers, and prompt refinements. This loop delivered ~70% automation, far above the 30% initial target.
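The "every escalation is a data signal" loop can be sketched as a frequency analysis over escalation topics. The topic labels and threshold are illustrative:

```python
from collections import Counter

# Each escalated conversation is tagged with the topic that forced it.
escalations = ["parking", "refund", "parking", "age-limit", "parking"]

def kb_backlog(escalations, min_count=2):
    """Topics escalated min_count+ times become KB work items, most frequent first."""
    counts = Counter(escalations)
    return [topic for topic, n in counts.most_common() if n >= min_count]

print(kb_backlog(escalations))  # ['parking']
```

The output is a prioritised to-do list: write or expand the article, add a Custom Answer, or refine the prompt, then retest.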
Business Insight Delivery
Delivered data-backed reports to leadership for use in strategic planning and market positioning.
Results (initial target: 30% automation, significantly exceeded; all figures from live production across 5 websites):
• ~70% of all customer inquiries resolved automatically — no human involvement
• ~16 seconds average AI response time — full conversation AHT: 2 min 5 sec vs. 13 min 36 sec manual
• 8+ days of manual support time saved per month across the team
• ~30% escalation rate — well-calibrated: each escalation is a KB improvement signal
• Zero AI errors reaching customers — prevented by the Safety layer's routing, not caught by post-hoc review
• 24/7 first-line coverage across 5 websites, 6 brands, and multiple languages
HelpCrunch – AI Agent platform — 5 agents across 5 websites (3 live in production; 2 production-ready, pending final KB completion)
OpenAI GPT + NLP – GPT drives AI Agent response generation; NLP handles intent detection and conversational understanding
RAG (Retrieval-Augmented Generation) – Semantic search over 140+ KB articles — grounded responses, zero hallucinations
Prompt Engineering – Custom 4-part system prompts designed for each agent from scratch
Custom Answers + Agent Assist Mode – Pre-written Q&A overrides for high-frequency questions; hybrid AI-draft / human-approve workflow for complex cases
AI Analytics & Conversation Logs – Structured feedback loop: escalation patterns → KB gaps → updates → retest
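The semantic-search step of the RAG stack can be illustrated with a minimal cosine-similarity retriever over precomputed article embeddings. The 3-dimensional vectors below are toy values; real embeddings would come from an embedding model:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec, kb_index, top_k=3):
    """Return the top_k article ids most similar to the query embedding."""
    ranked = sorted(kb_index, key=lambda item: cosine(query_vec, item[1]),
                    reverse=True)
    return [article_id for article_id, _ in ranked[:top_k]]

kb_index = [
    ("parking-rules", [0.9, 0.1, 0.0]),
    ("age-limits",    [0.1, 0.9, 0.1]),
    ("refund-policy", [0.0, 0.2, 0.9]),
]
print(retrieve([0.8, 0.2, 0.1], kb_index, top_k=1))  # ['parking-rules']
```

Grounding generation in the retrieved articles, rather than the model's parametric memory, is what keeps responses tied to the KB.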
The foundation of this system is not the model — it is data quality.
Building the KB as structured data (one concept per article, metadata-tagged, gap-analysed from real chat logs) is a data engineering discipline — exactly where a Data Analyst background makes the decisive difference.
The 70% automation rate was earned by treating every escalation as a data signal, every KB gap as an engineering task, and every prompt as an iteration on real production data.
LEAN process thinking applied to AI: find the structural waste, build the feedback loop, and make recurrence structurally impossible. The tools are different. The mindset is identical.