Commission Reconciliation

How Claude processed a full year of commission statements — thousands of transactions, hundreds of accounts, exact-cent conservation.

This page describes the methodology and system design. All figures are anonymized aggregates.

A year of commission statements. No single source of truth.

Insurance agencies receive commission statements from multiple carriers monthly. Each carrier uses different formats, different name spellings, different account identifiers. Reconciling who earned what requires matching every transaction to the correct client account — across thousands of rows where the same person might appear under different names.

  • Multiple carriers
  • Monthly statements
  • Name variations
  • No standard IDs
  • Positive & negative entries
  • Statement corrections

The old way

Without a system, reconciliation means opening each carrier statement, manually searching for each name, deciding which account it belongs to, and adding it to a spreadsheet. Repeat for every row, every month, every carrier.

  • Same person appears under three different name variants
  • Negative entries (chargebacks) must net against positive entries
  • Mid-year corrections can change earlier month totals
  • No formula catches a misallocated row
  • One mistake compounds across twelve months

Five stages. Zero manual row-by-row review.

A structured pipeline that processes every transaction through safe, auditable stages. Each stage has a clear input, a clear output, and a clear failure condition.

1. Ingest

Load raw commission ledger. Parse currency, normalize names, assign transaction IDs. Store money as integer cents — no floating point.

In: raw carrier CSV
Out: normalized transactions with IDs, integer cents
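A minimal sketch of this stage in Python. The column names (Insured Name, Policy Number, Commission) and the ID scheme are illustrative assumptions, not the actual carrier formats:

```python
import csv
import re

def to_cents(raw: str) -> int:
    """Parse '$1,234.56', '-45.00', or '(45.00)' into integer cents."""
    s = raw.strip().replace("$", "").replace(",", "")
    negative = (s.startswith("(") and s.endswith(")")) or s.startswith("-")
    s = s.strip("()-")
    dollars, _, frac = s.partition(".")
    cents = int(dollars or 0) * 100 + int((frac or "0").ljust(2, "0")[:2])
    return -cents if negative else cents

def normalize_name(name: str) -> str:
    """Uppercase, trim, collapse whitespace, drop commas/periods/apostrophes."""
    return re.sub(r"\s+", " ", re.sub(r"[.,']", "", name)).strip().upper()

def ingest(path: str, carrier: str, month: str) -> list[dict]:
    """Load one raw carrier CSV into normalized transactions with stable IDs."""
    txns = []
    with open(path, newline="") as f:
        for i, row in enumerate(csv.DictReader(f)):
            txns.append({
                "txn_id": f"{carrier}-{month}-{i:05d}",  # deterministic, auditable ID
                "insured": normalize_name(row.get("Insured Name", "")),
                "policy": row.get("Policy Number", "").strip(),
                "amount_cents": to_cents(row.get("Commission", "0") or "0"),
            })
    return txns
```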
2. Allocate

Match each transaction to an account using safe layers only: exact canonical name, approved alias, unique policy anchor. No fuzzy matching. No "first match wins." (A code sketch of these layers follows the matching-hierarchy section below.)

In: normalized transactions + name/alias/policy lookups
Out: transactions tagged with account or "unmatched"
3. Triage

Anything that can't be safely allocated goes to a controlled admin bucket with a reason code — not a discard pile. Every dollar is accounted for.

In: unmatched transactions
Out: admin bucket entries with reason codes
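A sketch of how the reason codes might be routed; the is_correction flag is an assumed marker set during ingest:

```python
from enum import Enum

class AdminReason(Enum):
    UNKNOWN_TEMP = "ADMIN_UNKNOWN_TEMP"                  # name not in any lookup table
    MISSING_INSURED = "ADMIN_MISSING_INSURED"            # blank insured name in the ledger
    STATEMENT_CORRECTION = "ADMIN_STATEMENT_CORRECTION"  # carrier-side adjustment rows

def triage(txn: dict) -> AdminReason:
    """Route an unmatched transaction to a controlled admin bucket, never a discard pile."""
    if not txn["insured"]:
        return AdminReason.MISSING_INSURED
    if txn.get("is_correction"):       # assumed flag set during ingest for correction rows
        return AdminReason.STATEMENT_CORRECTION
    return AdminReason.UNKNOWN_TEMP    # default: needs research, still fully accounted for
```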
4. Reconcile

Conservation test: Allocated + Admin + Unallocated must equal Total Ledger, in exact cents. If the math doesn't balance, the run fails.

In: all allocated + admin + unallocated totals
Out: PASS/FAIL with cent-level proof
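Because every amount is already an integer number of cents, the conservation test reduces to one exact equality. A sketch:

```python
def conservation_check(allocated: int, admin: int, unallocated: int, total_ledger: int) -> None:
    """All arguments are integer cents; the books balance exactly or the run dies."""
    balance = allocated + admin + unallocated
    if balance != total_ledger:
        raise RuntimeError(f"FAIL: conservation off by {balance - total_ledger} cents")
    print(f"PASS: {balance} cents, allocated + admin + unallocated == total ledger")
```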
5. Report

Generate account-level summaries, outlier flags, exception queues for human review. The human reviews names, not rows.

In: reconciled dataset
Out: account summaries, exception queue, audit trail
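A sketch of the name-group exception queue, assuming the transaction shape from the ingest sketch above; the reviewer sees one row per name, sorted by dollar impact:

```python
from collections import defaultdict

def exception_queue(admin_txns: list[dict]) -> list[tuple[str, int, int]]:
    """Group admin-bucket rows by insured name and sort by absolute dollar impact,
    so resolving one name allocates many rows at once."""
    groups: defaultdict[str, list[dict]] = defaultdict(list)
    for txn in admin_txns:
        groups[txn["insured"] or "(BLANK)"].append(txn)
    queue = [(name, len(rows), sum(t["amount_cents"] for t in rows))
             for name, rows in groups.items()]
    return sorted(queue, key=lambda g: abs(g[2]), reverse=True)  # biggest dollars first
```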

Eight locked rules. No exceptions mid-run.

The business rules that couldn't be violated. These were defined before any code was written and remained immutable throughout the reconciliation process.

1. Active inclusion criteria — only accounts meeting defined activity thresholds are included in the run.
2. Account tenure from source system — the system of record determines when an account started, not the spreadsheet.
3. Premium from source system — premium values come from the carrier system, not manual entry.
4. Net TTM commissions — positive and negative entries both count. The net is what matters.
5. 100% allocation requirement — every dollar goes to an account or a controlled admin bucket. No silent drops.
6. Trailing twelve months defined precisely — statement dates determine the window, not calendar approximation.
7. All rows included — even zero-dollar adjustments. If a carrier sent it, the pipeline processes it.
8. Integer cents for all money arithmetic — no floating-point rounding. Every calculation uses whole-cent integers (see the sketch after this list).
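A three-line illustration of why rule 8 exists: floating-point sums drift, integer cents stay exact.

```python
print(0.10 + 0.20)                   # 0.30000000000000004, wrong on the very first add
print(sum([0.10] * 1000) == 100.0)   # False: binary floats drift as a ledger accumulates
print(sum([10] * 1000) == 10_000)    # True: 10,000 integer cents is exactly $100.00
```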

These rules are the delegation paperwork. They answer Mollick's six questions before the AI writes a single line of code.

Three layers of safe matching. Everything else goes to review.

The allocation engine uses a strict hierarchy of matching methods. Each layer is deterministic and collision-free. If a transaction doesn't match at any layer, it's triaged — never guessed.

1. Exact Name (highest confidence)

Transaction name matches the canonical account name exactly after normalization. Case-insensitive, whitespace-trimmed, punctuation-standardized.

2. Approved Alias (human-verified)

A human-approved mapping from a name variant to an account. Must be collision-free — an alias can't point to two accounts.

3. Policy Anchor (structural match)

Policy number uniquely maps to one account based on prior high-confidence matches. One policy, one account, always.
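A minimal sketch of the hierarchy in Python, assuming lookup tables built earlier in the run; canonical, aliases, and policy_anchors are illustrative names, and add_alias shows how collision-freedom can be enforced:

```python
def add_alias(aliases: dict[str, str], variant: str, account_id: str) -> None:
    """Approved aliases must be collision-free: one variant, exactly one account."""
    existing = aliases.get(variant)
    if existing and existing != account_id:
        raise ValueError(f"alias collision: {variant!r} -> {existing} and {account_id}")
    aliases[variant] = account_id

def allocate(txn: dict,
             canonical: dict[str, str],        # normalized canonical name -> account ID
             aliases: dict[str, str],          # approved name variant -> account ID
             policy_anchors: dict[str, str]):  # policy number -> account ID
    """Return (account_id, layer), or (None, None) to send the row to triage."""
    name, policy = txn["insured"], txn["policy"]
    if name and name in canonical:
        return canonical[name], "exact_name"            # layer 1: highest confidence
    if name and name in aliases:
        return aliases[name], "approved_alias"          # layer 2: human-verified
    if policy and policy in policy_anchors:
        return policy_anchors[policy], "policy_anchor"  # layer 3: structural
    return None, None                                   # no fuzzy matching, no guessing
```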

What doesn't match?

Unmatched transactions go to controlled admin buckets, each with a reason code:

  • ADMIN_UNKNOWN_TEMP: Needs research — name not in any lookup table
  • ADMIN_MISSING_INSURED: Blank insured name in the carrier ledger
  • ADMIN_STATEMENT_CORRECTION: Carrier-side adjustments and corrections

Threshold: ADMIN_UNKNOWN_TEMP must stay below 0.25% of the total ledger or $1,000, whichever is smaller.
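In integer cents that threshold is one line of arithmetic. A sketch, with the function name assumed:

```python
def unknown_temp_within_threshold(unknown_temp_cents: int, total_ledger_cents: int) -> bool:
    """ADMIN_UNKNOWN_TEMP must stay below 0.25% of the ledger or $1,000 (100,000 cents),
    whichever is smaller. All amounts are integer cents, so the math is exact."""
    limit = min(total_ledger_cents * 25 // 10_000, 100_000)
    return abs(unknown_temp_cents) < limit
```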

From 89% to 99% allocation — without unsafe matching.

The pipeline's first pass allocated 89% of all commission dollars using safe matching alone. The remaining 11% went to admin buckets with reason codes. Resolving those required reviewing insured name groups — not individual transaction rows.

  • The old way: ~470 individual transactions to inspect
  • With the pipeline: ~30 name groups to review

Allocation Explorer

See how resolving unknown insured name groups moves the allocation rate. Each group may contain multiple transactions — resolve one name, allocate many rows.

[Interactive widget: starts at 0 of 30 names reviewed, 89.0% allocated, 11.0% in the admin bucket, below the 95% target.]
  • Conservation test: PASS (Allocated + Admin + Unallocated = Total Ledger, in exact cents)
  • Truly unallocated: $0 (every dollar assigned to an account or a controlled admin bucket)
  • Total transactions processed: ~3,000
  • Trailing months covered: 12
  • Final allocation rate: 99.0%
  • Alias collisions: 0

The jump from 89% to 99% required reviewing ~30 insured name groups, not ~470 individual transactions. That's Mollick's "cheaper evaluation" lever in action.

This is the equation, running on real numbers.

Every part of this project maps back to Mollick's framework. The commission reconciliation isn't a separate thing — it's the same delegation principles applied to a different domain.

Mollick's Lever → What We Built → Effect

  • Better Instructions → Eight locked rules + controlled admin buckets → AI knew its authority limits before writing a single line of code
  • Better Evaluation → Conservation test: PASS/FAIL in exact cents → System proves correctness or stops — no manual spot-checks
  • Cheaper Evaluation → Review ~30 name groups, not ~470 rows → Human reviews names sorted by dollar impact, not spreadsheet rows

Human baseline time: days of manual reconciliation across multiple carrier formats, with error rates that compound across twelve months.

The system doesn't replace judgment — it concentrates it where it matters most. Days of row-by-row work become structured AI sessions with an audit trail and exact-cent conservation.