Agents that actually ship outcomes.
Not chatbots. A fleet of specialised AI agents with deep billing skills, plugged into every system you run through the Model Context Protocol (MCP), and routed across frontier LLMs, small language models and localised models — so each token does the maximum work at the minimum cost.
- Agent skills: 14+
- Token cost vs peers: ↓ 18×
- MCP connectors: 50+
Example console trace:
incoming · agent:dispute-handler · step:classify
LLM (claude-sonnet): skipped
SLM (haiku-class): selected
LOCAL (au-prem-7b): standby
resolved in 142 ms · 0 tokens · audit#a8f21c
resolution · Category: meter-read anomaly. Drafted apology + credit note. Queued for operator approval.
Fourteen agents. One operator console.
Each agent is scoped to a specific job, backed by real tools, and supervised by default. Compose them into workflows that used to take whole customer-service teams.
Bill explainer
Generates plain-English, audit-ready explanations for every line on a bill — solar credits, demand charges, prorated plan changes.
Dispute handler
Triages customer disputes end-to-end: reconciles meter data, drafts responses, escalates only what requires a human.
Hardship triage
Assesses hardship eligibility against AER and state frameworks, proposes payment plans, routes to financial counselling where needed.
Collections
Risk-scored collections outreach. Channel-aware, hardship-respecting, TCP & AER compliant by construction.
Tariff optimiser
Models each customer's usage against your rate card and competitors. Surfaces savings, triggers win-backs, avoids bill shock.
Meter-read validator
Catches dud reads, missing intervals, clock drifts, solar/consumption swaps and reverse-current errors before they become invoices.
Provisioning orchestrator
Drives MSATS, NBN Co B2B and carrier porting workflows. Quote → connect → activate without leaving the platform.
Retention agent
Predicts churn, offers calibrated retention offers with margin guardrails, and explains every decision to the CS lead.
KYC / identity
APP-aligned identity verification across ID documents, credit-bureau checks and biometric liveness — no dark patterns.
Concession matcher
Checks eligibility for every federal, state and jurisdictional concession. Applies rebates automatically with evidence trails.
Ombudsman drafter
Prepares EWON/EWOV/ESC audit packs and formal responses to TIO/ombudsman cases. Fact-checked against the case timeline.
Outage communicator
Fuses network outage feeds with affected-customer cohorts and pushes life-support-safe notifications via SMS, email and voice.
Plan comparison
Reference-price-aware plan comparison (EFL, BEC, VDO). Honest numbers, no dark patterns, auditable against the AER Retail Pricing Information Guideline.
Workflow composer
No-code agent orchestration. Chain skills into workflows (move-in, disconnection, solar install) with gates and fallbacks.
Every agent exposes its tools, its policy checks and its prompt lineage to the operator console. Nothing is a black box.
MCP-native. Connect anything.
The Model Context Protocol is how modern AI systems talk to tools. We've made it the integration layer for the whole platform — every agent, every skill, every connector speaks MCP.
That means two things. First: agents can reach into any system you run — from AEMO MSATS to your finance ledger — with typed, sandboxed tool calls. Second: you can plug your own tools in by standing up an MCP server. No proprietary SDK, no vendor-lock-in adapter, no 12-month integration project.
- AEMO MSATS: CATS, Standing Data, NMI Discovery, B2B Service Orders, NEM12/NEM13 ingestion.
- Distribution networks: Ausgrid, Endeavour, Essential, Jemena, SA Power, Energex, Ergon, Powercor, United, CitiPower, TasNetworks, Evoenergy.
- NBN Co B2B: service qualification, ordering, provisioning, fault management, assurance.
- Number porting: LNP (Local Number Portability), ACMA Number Portability Determination workflows.
- Payments: BPAY, Direct Debit, card, Open Banking, PayTo, BNPL pass-through.
- Communications: SMS, email, voice, postal vendors — channel-aware with suppression lists.
- Finance systems: Xero, NetSuite, SAP, custom GLs. Journal lines with full reconciliation.
- CRM & support: Salesforce, HubSpot, Zendesk, Freshdesk, Intercom. Bi-directional sync.
- Identity & concessions: Services Australia DVS, state concession databases, audit evidence store.
- Credit bureaus: Illion, Equifax, Experian. Risk scoring, dispute handling, NOCC workflows.
- Data platforms: Snowflake, BigQuery, Databricks, Redshift. Live event streams and reverse-ETL.
- Bring your own MCP: any REST, GraphQL, gRPC or database becomes a first-class tool in a few hundred lines of code.
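To make the "few hundred lines" claim concrete, here is a minimal sketch of the MCP tool surface (the `tools/list` and `tools/call` methods, JSON-RPC 2.0) in plain Python. The tool name and its stubbed behaviour are invented for illustration; a production server would use an official MCP SDK over stdio or HTTP:

```python
import json

# Illustrative tool registry; "get_nmi_standing_data" is a hypothetical tool.
TOOLS = {
    "get_nmi_standing_data": {
        "description": "Fetch standing data for a NMI (hypothetical tool).",
        "inputSchema": {"type": "object",
                        "properties": {"nmi": {"type": "string"}},
                        "required": ["nmi"]},
    },
}

def handle(request_json: str) -> str:
    """Dispatch one JSON-RPC 2.0 request against the tool registry."""
    req = json.loads(request_json)
    if req["method"] == "tools/list":
        result = {"tools": [{"name": n, **spec} for n, spec in TOOLS.items()]}
    elif req["method"] == "tools/call":
        args = req["params"].get("arguments", {})
        # Stubbed execution; a real server would call the backing system here.
        result = {"content": [{"type": "text",
                               "text": f"standing data for {args['nmi']}"}]}
    else:
        return json.dumps({"jsonrpc": "2.0", "id": req.get("id"),
                           "error": {"code": -32601,
                                     "message": "method not found"}})
    return json.dumps({"jsonrpc": "2.0", "id": req.get("id"), "result": result})
```

Because every tool declares a JSON schema, the agents receive typed arguments and typed results rather than scraping free text.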
LLM, SLM, local — the right model for every token.
Frontier LLMs are brilliant and expensive. Small language models are fast and cheap. Localised models keep sensitive data on Australian soil. Our router decides — per step, not per workflow — which model handles which token. The result: frontier intelligence where it matters, near-zero cost everywhere else.
Frontier LLM
≈ 8% of token volume
Claude, GPT — for ambiguity, reasoning, empathy-heavy drafting
Used sparingly and deliberately. Long-context reviews, dispute narratives, regulatory interpretation.
Small language models
≈ 62% of token volume
Claude Haiku-class, Llama-family, Qwen — for structured tasks and classification
Routing decisions, extraction, categorisation, templated writes. Cents per thousand requests.
Localised models
≈ 30% of token volume
On-prem / in-VPC — runs in AU data centres
High-throughput inference, PII redaction, meter-read validation, sovereign workloads.
Why it costs less
- Router chooses the cheapest capable model for each step — 90%+ of steps never touch a frontier LLM.
- Prompt caching across agents — shared system context charged once, reused across thousands of tool calls.
- Structured-output SLMs replace JSON-mode calls to LLMs for a fraction of the cost.
- Localised models run on fixed-cost infrastructure — token-free for the highest-volume workflows.
- MCP connectors return typed data — no tokens wasted on HTML or JSON parsing heuristics.
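As a sketch of the routing idea (the tier names, capability sets and relative costs below are illustrative, not our actual routing table), a cheapest-capable-model router reduces to a subset check over an ordered list:

```python
# Tiers ordered cheapest-first; a step routes to the first tier whose
# declared capabilities cover everything the step requires.
TIERS = [
    {"name": "local",    "caps": {"extract", "classify", "redact"},          "cost": 0.0},
    {"name": "slm",      "caps": {"extract", "classify", "redact", "draft"}, "cost": 0.01},
    {"name": "frontier", "caps": {"extract", "classify", "redact", "draft",
                                  "reason", "empathise"},                    "cost": 1.0},
]

def route(required: set[str], pii: bool = False) -> str:
    """Return the cheapest capable tier; PII workloads never reach frontier models."""
    for tier in TIERS:
        if pii and tier["name"] == "frontier":
            break  # sovereign workloads stay on AU-hosted models
        if required <= tier["caps"]:
            return tier["name"]
    raise ValueError("no capable tier for this step")
```

The per-step granularity is the point: a single dispute workflow might send classification to the local tier, a templated reply to an SLM, and only an ambiguous narrative to a frontier model.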
What it means for you
- ~18× cheaper per resolved customer interaction vs prompt-only agent stacks
- < 2¢ average AI cost per customer-month on a typical residential book
- 100% of customer PII workloads can route to localised, AU-hosted models
- < 300 ms median first-token latency on SLM-routed interactive tasks
Benchmarks measured internally against prompt-only agent stacks running on a single frontier LLM. Your numbers will vary by book composition and workflow mix — we'll run the model for you in a scoping call.
Autonomy where it's safe. Approval where it matters.
Regulated billing isn't the place for agent free-for-alls. Every skill ships with policy, provenance and a human-in-the-loop gate by default.
Policy engine
AER, TCP Code, APP and hardship obligations expressed as enforceable policy. Agents can't ship work that violates them.
Human approval
Anything that touches money or a customer's dignity queues for a human. Operators see intent, evidence and impact.
Full audit trail
Every prompt, tool call, model response and policy check — captured, replayable, regulator-ready.
Rollback & override
Any agent action is reversible. Any workflow is pausable. Any decision is overridable — with reason captured.
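A hypothetical sketch of how the policy-then-approval gating composes; the rule names and the $500 delegation threshold are invented for illustration, not our shipped policy set:

```python
from dataclasses import dataclass

@dataclass
class Action:
    kind: str                   # e.g. "credit_note", "notification"
    amount: float = 0.0
    touches_money: bool = False

# Enforceable policy: each rule must pass before an action can ship at all.
POLICIES = [
    ("hardship_no_disconnect", lambda a: a.kind != "disconnect"),
    ("credit_within_delegation", lambda a: a.amount <= 500),
]

def submit(action: Action, approval_queue: list) -> str:
    """Block policy violations outright; queue money-touching actions for a human."""
    for name, ok in POLICIES:
        if not ok(action):
            return f"blocked by {name}"   # agents cannot ship violating work
    if action.touches_money:
        approval_queue.append(action)     # operator sees intent, evidence, impact
        return "queued for approval"
    return "auto-applied"
```

Policy runs before the approval gate, so a human is never asked to approve something the rules already forbid.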
The most cost-efficient agentic platform in Australian retail billing.
We don't bill you per token or per seat. Our pricing rewards the efficiency gains that our model routing, MCP tooling and localised inference unlock — so your CS cost per customer keeps falling as your book grows.
We're a deep technology partner to the retailers we work with, and an active contributor to the open-source MCP ecosystem. If it doesn't make your unit economics better, we don't ship it.
- ~90 days: from first call to agentic go-live
- 10-20×: cost reduction vs single-LLM stacks
- 99.98%: first-time-right outcomes with supervision on
- AU-hosted: sovereign inference available for every tier
Put agents to work on your hardest billing workflow.
Bring us the job that costs you the most in CS time today. We'll show you an agent that handles it end-to-end in the demo.