GadaaLabs
Claude Code Superpowers: AI That Gets Smarter With Every Task
Lesson 10

CHRONICLE — Patterns That Compound

16 min

A senior engineer is not faster than a junior engineer because they think faster. They are faster because they recognize the pattern. "Oh, this is the webhook idempotency problem. We solved this in the payments service. Here's what didn't work and here's what did." Ten seconds of pattern recognition saves three hours of investigation.

The CHRONICLE skill is the mechanism for building that pattern recognition into Claude. It stores patterns from solved problems and searches them before starting new ones. The system gets smarter with every completed task — not because Claude's model improves, but because your accumulated context improves.

## The ReasoningBank

The ReasoningBank is the collection of pattern files stored in your auto-memory directory. Each file documents a solved problem: what the problem was, what approach worked, what approach failed, and what context would make this pattern apply.

Pattern files live at `~/.claude/projects/<project-hash>/memory/pattern_<domain>_<keyword>.md`.

They are indexed in MEMORY.md like everything else:

```markdown
- [Pattern: JWT Refresh Rotation](pattern_backend_jwt_refresh.md) — avoid refresh token reuse detection without Redis
- [Pattern: Stripe Webhook Timeout](pattern_backend_stripe_webhook.md) — always return 200 immediately, process async
- [Pattern: PSI Drift Detection](pattern_ml_drift_psi.md) — PSI > 0.1 alert, PSI > 0.25 rollback
```

The index hook is critical. It must be specific enough that Claude can decide relevance without loading the full pattern file. "Pattern for backend" is not a useful hook. "Avoid refresh token reuse detection without Redis" is a useful hook — it tells Claude exactly when to load this file.

## Searching Before Starting

The most important habit the system enforces: search before you code.

The ORACLE skill automatically runs a pattern search for complexity ≥ 4 tasks. But the search is only as good as the patterns that have been stored. In the first few sessions, the ReasoningBank is empty and every search returns nothing. After 10 sessions of storing patterns, searches start returning relevant results. After 50, you rarely solve the same problem twice.

When invoking learning-from-experience to search:

```
Query: "<task-type> <domain> <key-keywords>"
```

Construct the query from what you are about to do, not from what the error message says. If you are adding rate limiting to an API, the query is "feature backend rate-limiting api" — not the error you might encounter later.
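As a sketch, that query convention can be captured in a small helper. The function name and shape are hypothetical, purely for illustration:

```python
def build_pattern_query(task_type, domain, keywords):
    """Build a ReasoningBank search query from the task you are about to do,
    not from an error message you might encounter later."""
    return " ".join([task_type, domain, *keywords])

# Adding rate limiting to an API:
query = build_pattern_query("feature", "backend", ["rate-limiting", "api"])
# query == "feature backend rate-limiting api"
```

The point of the convention is stability: the same upcoming task always produces the same query terms, so stored patterns stay findable.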

Review every match:

  • What worked? Apply it directly without re-investigating.
  • What failed? Avoid it explicitly, and include the reason in your announcement.
  • What is different? Note context differences that might make the pattern inapplicable.

Announce findings before proceeding:

"Found 1 relevant pattern: backend:rate-limiting — implemented in payments service using Redis sorted sets with sliding window. Token bucket approach failed under burst traffic. Applying: Redis sorted sets implementation."

This announcement creates shared understanding. If the pattern does not apply for some reason you have not noticed, this is the moment to catch it.

## Storing Patterns After Solving

After completing a task that involved non-obvious decisions, novel approaches, or hard-won lessons, store the pattern. The storing step is the close of the task-intake after-action review.

When to store:

  • The solution involved a non-obvious approach you had not seen before
  • The first approach failed — this is the most valuable pattern to store
  • The solution would have been significantly faster with prior knowledge
  • The problem is likely to recur in this codebase or domain

When not to store:

  • The solution was obvious from the code or documentation
  • The problem was a simple typo or configuration mistake
  • The approach was entirely standard with no surprising trade-offs
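The criteria above can be sketched as a checklist function. The field names and the `task` dict shape are hypothetical, chosen only to mirror the bullets:

```python
def should_store_pattern(task):
    """Mirror of the store / don't-store criteria.
    `task` is a dict of booleans describing the completed task."""
    # Don't store: obvious, trivial, or entirely standard solutions.
    if task.get("obvious_from_docs") or task.get("simple_typo") or task.get("entirely_standard"):
        return False
    # Store: non-obvious, failed-first-approach, time-saving, or recurring problems.
    return any([
        task.get("non_obvious_approach"),
        task.get("first_approach_failed"),  # the most valuable pattern to store
        task.get("faster_with_prior_knowledge"),
        task.get("likely_to_recur"),
    ])
```

Note the don't-store checks run first: an obvious fix stays out of the bank even if it is likely to recur, which keeps the index hooks high-signal.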

Pattern file format:

````markdown
---
name: Stripe Webhook Timeout Pattern
description: "Return 200 to Stripe webhooks immediately and process the payload async; synchronous handlers that exceed the 30-second timeout trigger retries and duplicate events."
type: feedback
---

## Problem
Stripe retries webhooks if no 200 response is received within 30 seconds.
Long synchronous processing (e.g., large cart calculations, email sending)
can exceed this limit, causing Stripe to retry and creating duplicate events.

## What Failed
Processing the webhook synchronously before returning 200.
For carts with 50+ items, processing took 35-40 seconds.
Stripe retried 3 times, resulting in triple-charged orders.

## What Worked
Return 200 immediately, enqueue the webhook payload for async processing.

```python
from flask import Flask, request, jsonify
import stripe

app = Flask(__name__)

@app.route('/webhooks/stripe', methods=['POST'])
def stripe_webhook():
    payload = request.get_data()
    sig_header = request.headers.get('Stripe-Signature')

    # Verify signature immediately
    event = stripe.Webhook.construct_event(payload, sig_header, WEBHOOK_SECRET)

    # Enqueue for async processing
    process_stripe_event.delay(event.id, event.type, event.data)

    return jsonify({'received': True}), 200  # Return immediately
```

## When This Applies

Any webhook handler from any payment processor, notification service, or third-party integration where processing time might exceed the provider's response timeout. Stripe: 30s. Twilio: 15s. GitHub: 10s.

## Avoid

Any synchronous processing that might exceed the provider timeout. Specifically: email sending, PDF generation, cross-service API calls, large database writes.
````

This pattern file takes 5 minutes to write. The next time someone hits a Stripe webhook issue, they find this in 10 seconds.
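The enqueued `process_stripe_event` worker deserves a sketch of its own, because async processing alone does not eliminate duplicates: Stripe can still deliver the same event twice. Here is a minimal, queue-agnostic illustration; the in-memory set stands in for a durable store (a unique-indexed DB table in production), and the function shape is an assumption:

```python
# Stand-in for a durable store keyed by event id (e.g., a unique-indexed DB table).
processed_event_ids = set()
fulfilled = []

def process_stripe_event(event_id, event_type, event_data):
    """Idempotent worker: Stripe may deliver the same event more than once,
    so record each event id and skip duplicates before doing real work."""
    if event_id in processed_event_ids:
        return False  # duplicate delivery; already handled
    processed_event_ids.add(event_id)
    fulfilled.append((event_type, event_data))  # real fulfillment goes here
    return True

# Simulated duplicate delivery of the same event:
process_stripe_event("evt_1", "checkout.session.completed", {"order": 42})
process_stripe_event("evt_1", "checkout.session.completed", {"order": 42})
# fulfilled contains a single entry despite two deliveries
```

In a real deployment this function would be wrapped as a Celery or RQ task and the dedup check would be a transactional insert, but the idempotency guard is the part worth storing as a pattern.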

## The Compound Interest Effect

The value of the ReasoningBank compounds over time in a way that is hard to appreciate until you experience it.

**Session 1-5:** The bank is empty. Every search returns nothing. The system feels slower — you are adding the after-action review step without getting any search benefits yet. This is normal. Push through it.

**Session 6-20:** Occasional matches. "I searched and found a relevant pattern from 3 weeks ago that saved me 30 minutes." The pattern library is starting to have coverage.

**Session 20-50:** Regular matches. Most new tasks have at least one applicable pattern. The system feels noticeably faster. Novel problems are genuinely novel — you are not re-solving things.

**Session 50+:** The ReasoningBank is a significant engineering asset. Your team's accumulated solutions, failure modes, and trade-off decisions are searchable and applied automatically. New team members benefit from patterns discovered by experienced members. Onboarding a new developer to the codebase now includes access to the team's collected problem-solving experience.

This is the compound interest effect. Each stored pattern is a one-time investment that pays dividends indefinitely. The early sessions feel like overhead. The late sessions feel like having a senior engineer always available who remembers everything.

## Pattern Quality

Not all pattern files are equally valuable. The highest-value patterns share four properties.

**Transferable context.** "This fixed a bug in checkout.py on line 47" is not transferable. "Stripe webhook handlers must return 200 before async processing or retries cause duplicate events" is transferable — it applies anywhere you see Stripe webhooks.

**Specific failure modes.** The "what failed" section is often more valuable than the "what worked" section. Knowing that token bucket rate limiting fails under burst traffic saves the next engineer from rediscovering it.

**Applicability criteria.** The "when this applies" section tells Claude when to suggest this pattern. Without it, Claude has to guess based on surface similarity. With it, Claude can accurately decide if the current situation matches.

**Anti-patterns explicitly documented.** The "avoid" section prevents future engineers (including future Claude sessions) from repeating known mistakes.

## The Priority Hierarchy

When Claude searches for patterns, it checks three sources in priority order:

1. **Auto-memory** (first) — `~/.claude/projects/<hash>/memory/pattern_*.md` files. These are your project-specific and team-specific patterns. Always authoritative.

2. **Git-tracked docs** (second) — `docs/patterns/` or similar directories in your repository. Shared with the whole team. Less specific to Claude's working context.

3. **MCP external sources** (third) — External knowledge bases connected via Model Context Protocol. Lowest priority.

Always store patterns in auto-memory first. They are immediately available in the next session. They do not require a git commit or a PR. They are private to your project context by default.
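That priority order amounts to a first-match-wins loop over source directories. A simplified sketch, assuming substring matching over file contents (the real search is richer than this):

```python
from pathlib import Path

def search_patterns(query_terms, sources):
    """Search ordered sources (auto-memory, then git-tracked docs, then
    external) and return matches from the first source that has any."""
    for source_dir in sources:
        matches = [
            path for path in Path(source_dir).glob("pattern_*.md")
            if all(term in path.read_text().lower() for term in query_terms)
        ]
        if matches:
            return matches  # highest-priority source wins; stop here
    return []
```

Because the loop returns at the first source with any hits, a project-specific auto-memory pattern always shadows a generic one from a lower-priority source.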

## Practical: Storing Your First Pattern

After completing your next non-trivial debugging session or feature implementation, store the pattern.

1. Run the after-action review (from task-intake lesson)
2. If the task involved a non-obvious approach, create the pattern file:

```bash
# The file goes in your project's memory directory
touch ~/.claude/projects/<your-project-hash>/memory/pattern_<domain>_<keyword>.md
```

3. Write the pattern using the format above: problem, what failed, what worked, when it applies, what to avoid.

4. Add an entry to MEMORY.md:

```markdown
- [Pattern: <name>](pattern_<domain>_<keyword>.md) — <specific one-line hook>
```

5. Test the search: in the next session, start a related task and watch task-intake find your pattern.

The first time this happens — the first time task-intake surfaces a pattern you stored 2 weeks ago and it saves you from a mistake — the habit of storing patterns becomes automatic. The return is visceral and immediate.

## Key Takeaway

The CHRONICLE skill builds a ReasoningBank of solved patterns stored in auto-memory. Search before starting any complexity ≥ 4 task. Store after any task with non-obvious solutions, failed first approaches, or transferable lessons. Pattern files include: problem, what failed, what worked, when it applies, what to avoid. The compound interest effect: the system gets smarter with every session. The first 20 sessions build the library. After 50, you rarely solve the same problem twice.