Usage Dashboard Reference
Every companion tab in the Admin Panel has a Usage sub-tab. First-time viewers have three reactions: “That’s a lot of charts” → “What do these numbers mean?” → “Should I worry?”
This page answers each one.
Usage Tab at a Glance
[Screenshot: Ada Dashboard → Usage, full view]
From top to bottom you typically see four regions:
- Monthly total card — big numbers, glance-friendly
- 7-day trend line — last week at a glance
- Token bar chart — time-sliced detail
- Distribution chart — toggle “by model” or “by purpose”
Region 1: Monthly Total Card
Cumulative since the first of this month:
| Field | Meaning | When to Worry |
|---|---|---|
| Total spend (USD) | Projected month-to-date cost | Above your monthly budget |
| Conversations | Message count (user and AI messages each count) | A sudden spike suggests something is off |
| Total tokens | Token consumption | Compare against spend for efficiency |
| Avg cost per conversation | Spend ÷ conversation count | Abnormally high = replies too long |
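The two derived readings on the card are simple ratios. A minimal sketch with made-up example numbers (these are not real figures from any dashboard):

```python
# Illustrative inputs: month-to-date values as shown on the card.
month_spend_usd = 42.50      # total spend (USD)
conversations = 1_700        # message count (user + AI)
total_tokens = 9_300_000     # token consumption

# Avg cost per conversation = spend / conversation count.
avg_cost_per_conversation = month_spend_usd / conversations

# A handy efficiency figure: dollars per million tokens.
cost_per_million_tokens = month_spend_usd / (total_tokens / 1_000_000)

print(f"avg cost/conversation: ${avg_cost_per_conversation:.4f}")
print(f"cost per 1M tokens:    ${cost_per_million_tokens:.2f}")
```

If the first number creeps up while the second stays flat, replies are getting longer rather than the model getting pricier.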
“Monthly total” uses your timezone. The default is UTC+8, and month boundaries flip at 00:00 in that zone.
Region 2: 7-Day Trend Line
[Screenshot: 7-day trend line example]
Y axis: daily spend (USD)
X axis: date
What to look for:
- Stable trend — roughly consistent daily usage is normal
- Unusual peaks — a 10× day suggests an AI loop
- Weekend vs weekday — reflects how the business runs
- Holidays — support workloads should drop during long breaks
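The “unusual peaks” check is easy to reproduce from exported daily totals. A rough sketch (the spend values are illustrative, and flagging anything above 3× the median is an arbitrary threshold, not a dashboard rule):

```python
# Seven days of daily spend in USD (made-up example values).
daily_spend = [1.1, 0.9, 1.2, 1.0, 9.8, 1.1, 1.0]

# Median is a robust baseline: one spike day barely moves it.
baseline = sorted(daily_spend)[len(daily_spend) // 2]

# Flag days that exceed 3x the baseline (threshold is a judgment call).
spikes = [(day, spend) for day, spend in enumerate(daily_spend)
          if spend > 3 * baseline]
print(spikes)
```

A flagged day is your cue to switch to the hourly bar chart and then the Activity tab.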
Region 3: Token Bar Chart
Same data, bar chart format. Each bar is a day (or an hour, depending on your slice).
Toggle: look for a Daily / Hourly button above the chart.
Input vs Output use different colors:
- Lighter = input tokens (your prompt + conversation history)
- Darker = output tokens (AI’s reply)
Output typically accounts for 70–80% — replies are longer than prompts.
Region 4: Distribution Chart
This is the most useful chart — shows where the money actually goes.
Two toggles:
By Model (Provider)
Percentage share per model:
GPT-4o-mini ████████████████████████ 60%
GPT-4o ████████ 25%
Claude Haiku ████ 15%
Use this to judge whether expensive models are eating the budget.
By Purpose (Usage Type)
Percentage share per feature:
Reply generation ████████████████ 55%
Tool calls ██████████ 30%
Memory system █████ 15%
Use this to judge which feature is expensive. If memory burns 50% of spend, conversation history is too long and needs compression.
[Screenshot: distribution-by-purpose example]
Real Debugging: Reading the Charts
Case A: Yesterday’s Spend Doubled
Steps:
- Trend line → identify the spike day
- Switch to Hourly bar chart → find the spiking hour
- Go to Activity tab and look at that hour’s conversations
- Common causes:
- One customer kept asking long questions
- AI got stuck in a loop (talking to itself)
- Expensive model accidentally used for bulk work
Case B: Projected Monthly Cost Is Close to Over-Budget
Steps:
- Divide “month total” by elapsed days → daily average
- Multiply by total days in the month → projected monthly
- If over, switch to “by model” to find the heaviest spender
- Reduce that model’s usage
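The projection steps above in code, assuming an example mid-month reading (the date, spend, and budget are made up):

```python
from calendar import monthrange
from datetime import date

# Example inputs: 18 days into June, $31.20 spent, $50 budget.
today = date(2024, 6, 18)
month_total_usd = 31.20
budget_usd = 50.00

days_in_month = monthrange(today.year, today.month)[1]   # 30 for June
daily_avg = month_total_usd / today.day                  # month total / elapsed days
projected = daily_avg * days_in_month                    # x total days in month

print(f"projected: ${projected:.2f} (budget ${budget_usd:.2f})")
if projected > budget_usd:
    print("over budget: check the 'by model' distribution next")
```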
Case C: Conversations Stable, Cost Spiked
Usually output tokens or context length exploded:
- Look at input vs output ratio — output at 90%+ means replies are too long
- Cap maxTokens (see Cost Optimization)
- Check whether context compression is enabled
Exporting Data
The Usage tab typically has an Export CSV button (top-right), with 30-, 90-, and 365-day ranges.
Columns include:
- Timestamp
- Model
- Purpose
- Input tokens
- Output tokens
- Cost (USD)
- Triggering platform and user
Hand it to Vi: “Tell me which time slot cost the most this month, and which model had the best ROI.”
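You can also answer the time-slot question yourself before handing the file off. A sketch assuming ISO-8601 timestamps and the column names listed above; check them against your actual export header, as the exact labels are an assumption:

```python
import csv
from collections import defaultdict

def cost_by_hour(path):
    """Return (hour_of_day, total_cost) for the most expensive hour."""
    totals = defaultdict(float)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            # "YYYY-MM-DDTHH:MM:SS" -> the "HH" slice.
            hour = row["Timestamp"][11:13]
            totals[hour] += float(row["Cost (USD)"])
    return max(totals.items(), key=lambda kv: kv[1])
```

Group by the Model column instead of the hour and you have the raw material for the ROI question too.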