GadaaLabs
Claude Code Superpowers: AI That Gets Smarter With Every Task
Lesson 9

The Auto-Memory System — Teaching Claude to Remember

Every session with an AI assistant starts from zero. The preference you expressed last Thursday, the architecture decision you made with Claude two weeks ago, the correction you gave when Claude recommended the wrong library — all of it is gone. You re-establish context from scratch, every time.

The auto-memory system changes this. It is a file-based persistence layer that survives across sessions. At the start of every conversation, Claude reads a memory index. The memories in that index shape how Claude understands your project, your role, and your working preferences before you type the first message.

Where Memories Live

The memory directory is at:

~/.claude/projects/<project-hash>/memory/

The <project-hash> is a deterministic hash of your project's absolute path. If you always open Claude Code from the same directory, you always get the same memory directory. If you open it from a different path, you get a different memory context.
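The actual hashing scheme is internal to Claude Code; what matters is the determinism. A hypothetical sketch of the property (the digest function and directory layout below are assumptions for illustration, not the real implementation):

```python
import hashlib
from pathlib import Path

def project_memory_dir(project_path: str) -> Path:
    """Hypothetical sketch: map a project path to a stable memory directory.

    The key property is determinism -- the same absolute path always
    yields the same directory, so the same memories load every time.
    """
    resolved = str(Path(project_path).resolve())  # normalize the path first
    digest = hashlib.sha256(resolved.encode()).hexdigest()[:16]
    return Path.home() / ".claude" / "projects" / digest / "memory"

# The same path (even with a trailing slash) maps to the same directory.
assert project_memory_dir("/tmp/app") == project_memory_dir("/tmp/app/")
```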

The directory structure:

```
memory/
  MEMORY.md                     ← loaded at every session start
  user_profile.md               ← who you are
  feedback_testing.md           ← corrections and validated approaches
  project_auth_rewrite.md       ← current project context
  reference_linear_bugs.md      ← where to find external information
  pattern_ml_drift_detection.md ← solved patterns (from learning-from-experience)
```

MEMORY.md is the index. It is the only file guaranteed to be read at session start. Every other file is referenced from the index and loaded on demand. Keep the index under 200 lines — lines beyond 200 are truncated.
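The 200-line limit makes the index worth monitoring before it silently truncates. A small sketch that reports how far over the limit an index has grown (`check_index` is a hypothetical helper, not a built-in command):

```python
from pathlib import Path

MAX_INDEX_LINES = 200  # lines beyond this are truncated at session start

def check_index(index_path: Path) -> int:
    """Return how many lines over the limit MEMORY.md is (0 if within it)."""
    lines = index_path.read_text().splitlines()
    return max(0, len(lines) - MAX_INDEX_LINES)

# Usage sketch:
# over = check_index(memory_dir / "MEMORY.md")
# if over:
#     print(f"MEMORY.md is {over} lines over the limit; entries will be truncated")
```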

The Four Memory Types

Each memory file has a type field in its frontmatter. The type determines how the memory is used:

user — Who you are and how you work. Role, expertise level, domain background, working preferences. Used to tailor explanations and recommendations to your specific context.

Example of a useful user memory:

```markdown
---
name: User Profile
description: Developer building AI educational site; Python background, new to Next.js/TypeScript
type: user
---

Software engineer with 8 years of Python backend experience. Currently building
GadaaLabs.com, an AI educational platform using Next.js 16, TypeScript, and
the Vercel AI SDK. Comfortable with ML concepts. Frontend is relatively new
— frame TypeScript/React explanations in terms of Python analogues where possible.

Prefers: concrete examples over abstract explanations, working code over
explanations of why something theoretically works.
```

feedback — Corrections and validated approaches. What Claude got wrong and how to avoid it. What unusual approaches were confirmed to work.

Structure for feedback memories: Lead with the rule, then Why: (the reason), then How to apply: (when it triggers).

```markdown
---
name: Never Revert Animated Cards
description: Admin animated access cards must not be reverted or simplified
type: feedback
---

NEVER simplify or revert the animated access cards on the DataLab admin page.
They were deliberately designed with complex CSS animations for the product identity.

**Why:** A previous session "simplified" them to static cards to fix a CSS issue,
which broke the intended visual design and required re-implementation.

**How to apply:** Any fix to the DataLab admin page must preserve the animated
card behavior. If there is a CSS conflict, solve it without touching the animation
logic.
```

project — Ongoing context, decisions, and constraints that change over time. Current initiatives, architectural decisions, deadlines, stakeholder constraints.

Structure: Lead with the fact, then Why: (motivation), then How to apply: (how it shapes recommendations).

```markdown
---
name: Auth Middleware Rewrite Context
description: Auth rewrite is compliance-driven, not tech-debt; compliance over ergonomics
type: project
---

The auth middleware rewrite is driven by a legal compliance requirement around
session token storage, not by technical debt cleanup. Legal flagged the current
implementation for storing session tokens in a way that does not meet new regulations.

**Why:** Compliance requirement, not engineering preference. The deadline is
fixed by the legal review schedule.

**How to apply:** Scope all auth middleware decisions around compliance first.
Do not add features or ergonomic improvements unless they do not risk the
compliance goal. Flag any suggestion that might extend the timeline.
```

reference — Pointers to external systems and where to find authoritative information.

```markdown
---
name: Linear Bug Tracking
description: Pipeline bugs tracked in Linear project "INGEST"
type: reference
---

All data pipeline bugs are tracked in Linear project "INGEST". When investigating
pipeline issues, check INGEST for prior reports before starting fresh investigation.
Oncall latency dashboard: grafana.internal/d/api-latency — check this when
editing request-path code.
```

What Makes a Good Memory vs a Bad Memory

Not everything worth knowing belongs in memory. The most important distinction: memory is for what is true across sessions, not what is true for this session.

Good candidates for memory:

  • Your role and expertise (user type)
  • Corrections you have given before that you will need to give again (feedback type)
  • Architectural decisions with rationale (project type)
  • Where to find external information (reference type)
  • Non-obvious project constraints (project type)

Bad candidates for memory (do not save these):

  • Code patterns, conventions, file paths — derivable by reading the code
  • Git history — git log is authoritative
  • In-progress work from the current session — that's task state, not memory
  • Debugging solutions — the fix is in the code; the commit message has the context
  • PR lists or activity summaries — too ephemeral; only the surprising part is worth keeping

When in doubt: if Claude can find it by reading the current project state, do not put it in memory. Memory is for context that is not in the code.

The MEMORY.md Index

MEMORY.md is the file that loads at every session start. It must be concise — 200 lines maximum, one line per memory entry.

Format: - [Title](file.md) — one-line hook

```markdown
# Memory Index

- [User Profile](user_profile.md) — Python backend engineer, new to Next.js; frame frontend in Python terms
- [Never Revert Animated Cards](feedback_animated_cards.md) — animated admin cards must never be simplified
- [Auth Rewrite — Compliance Context](project_auth.md) — compliance-driven, deadline fixed by legal schedule
- [DataLab Architecture](project_datalab.md) — 19-tab two-level nav, PDF isolation, Recharts quirks
- [Linear Bug Tracking](reference_linear.md) — pipeline bugs in Linear "INGEST"; oncall dashboard location
```

The one-line hook is what Claude reads to decide whether to load the full memory file. Make it specific. "User profile" is not a hook — it tells Claude nothing about whether the file is relevant to the current task. "Python backend engineer, new to Next.js; frame frontend in Python terms" tells Claude to load this file when working on frontend tasks.

Writing a Memory File

Every memory file uses the same frontmatter format:

```markdown
---
name: Short descriptive name
description: One-line description used for relevance matching in future conversations
type: user | feedback | project | reference
---

Memory content here.
```
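Since every memory file shares this frontmatter shape, pulling out the `type` field takes only a few lines. A sketch, assuming the block is always delimited by `---` lines as shown (illustrative tooling, not part of Claude Code itself):

```python
def parse_frontmatter(text: str) -> dict:
    """Parse the simple key: value frontmatter block of a memory file."""
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return {}  # no frontmatter present
    meta = {}
    for line in lines[1:]:
        if line.strip() == "---":  # closing delimiter ends the block
            break
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip()
    return meta

memory = """---
name: User Profile
description: Python backend engineer, new to Next.js
type: user
---

Memory content here.
"""
assert parse_frontmatter(memory)["type"] == "user"
```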

For feedback and project memories, use the structured format:

feedback format:

Lead with the rule itself.
**Why:** The reason the user gave — the past incident or strong preference.
**How to apply:** When and where this guidance kicks in.

project format:

Lead with the fact or decision.
**Why:** The motivation — constraint, deadline, or stakeholder requirement.
**How to apply:** How this should shape recommendations.

The Why: and How to apply: lines serve a specific purpose: they let Claude make judgment calls about edge cases instead of blindly following the rule. "Never touch the auth middleware" is a rule. "Never touch the auth middleware because the compliance audit uses a snapshot from the 15th and any change invalidates it" is a rule with context that lets Claude handle exceptions intelligently.

Updating and Removing Stale Memories

Memories decay. An architectural decision made 6 months ago may have been reversed. A project deadline that was "next Thursday" is now past. A constraint that applied to the MVP may not apply to v2.

Signs a memory is stale:

  • It references relative dates ("next Thursday", "this sprint") that have passed
  • It describes work that is now complete
  • It mentions constraints that no longer apply
  • It conflicts with what you observe in the current codebase

When Claude uses a stale memory, it will make recommendations based on incorrect context. This is worse than no memory — it is confidently wrong.
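The relative-date sign lends itself to a quick scan. A heuristic sketch that flags files worth re-reading (the marker list and the `flag_stale_memories` helper are illustrative assumptions, not an exhaustive check):

```python
import re
from pathlib import Path

# Crude heuristic: relative-date phrases that go stale once the date passes.
STALE_MARKERS = re.compile(
    r"next (monday|tuesday|wednesday|thursday|friday|week)|this sprint|by eod",
    re.IGNORECASE,
)

def flag_stale_memories(memory_dir: Path) -> list[str]:
    """Return memory files containing relative-date phrases worth re-checking."""
    flagged = []
    for path in sorted(memory_dir.glob("*.md")):
        if STALE_MARKERS.search(path.read_text()):
            flagged.append(path.name)
    return flagged
```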

Updating a memory: Find the file, edit the content, update the description in MEMORY.md if the hook changes. No special command needed.

Removing a memory: Delete the file, remove its line from MEMORY.md. Clean memory is better than cluttered memory.
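Both halves of removal, deleting the file and pruning its index line, fit in one helper. A sketch assuming index entries link to files as `(file.md)`, as in the examples above (`remove_memory` is hypothetical, not a built-in command):

```python
from pathlib import Path

def remove_memory(memory_dir: Path, filename: str) -> None:
    """Delete a memory file and drop its entry from MEMORY.md."""
    (memory_dir / filename).unlink(missing_ok=True)
    index = memory_dir / "MEMORY.md"
    kept = [
        line for line in index.read_text().splitlines()
        if f"({filename})" not in line  # index lines link to files as (file.md)
    ]
    index.write_text("\n".join(kept) + "\n")
```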

Rule: Before acting on a recalled memory, verify it against current state. "The memory says X file exists" is not the same as "X file exists now." File paths, function names, and architectural decisions should be verified before recommending based on them.

Memory and the Other Intelligence Layers

Memory integrates with two other components of the intelligence layer:

learning-from-experience uses the memory directory to store solved patterns. When you solve a novel problem, the pattern is stored as pattern_<domain>_<keyword>.md and indexed in MEMORY.md. These patterns are then searchable in future sessions. Lesson 10 covers this in full.

task-intake reads memory at the start of every task. When it searches for relevant patterns, it is searching the memory directory. When it announces "Found 2 relevant patterns," it found them in memory files that were stored by previous learning-from-experience invocations.

The three components form a loop: task-intake reads memory → work completes → learning-from-experience writes to memory → next session's task-intake finds the pattern. Each completed task improves future sessions.

Practical: Your First Memory File

After finishing lesson 2 (installing Superpowers), write your first user memory:

  1. Create ~/.claude/projects/<your-project-hash>/memory/user_profile.md:
```markdown
---
name: User Profile
description: [Your role, expertise, and relevant background in one line]
type: user
---

[Describe your role, your tech stack comfort level, your domain background,
and any preferences that should shape how Claude works with you.]

Prefers: [list your concrete preferences]
```
  2. Create ~/.claude/projects/<your-project-hash>/memory/MEMORY.md:

```markdown
# Memory Index

- [User Profile](user_profile.md) — [one-line hook]
```

  3. Start a new Claude Code session and observe how it greets you differently.
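The file-creation steps above can also be scripted. A minimal sketch, where the file bodies are the placeholder templates from the steps and `bootstrap_memory` is a hypothetical helper (substitute your real project hash in the usage line; it is left as a placeholder here):

```python
from pathlib import Path

PROFILE = """\
---
name: User Profile
description: [Your role, expertise, and relevant background in one line]
type: user
---

[Describe your role, stack comfort level, and working preferences.]
"""

INDEX = """\
# Memory Index

- [User Profile](user_profile.md) — [one-line hook]
"""

def bootstrap_memory(memory_dir: Path) -> None:
    """Create the memory directory with a starter profile and index."""
    memory_dir.mkdir(parents=True, exist_ok=True)
    (memory_dir / "user_profile.md").write_text(PROFILE)
    (memory_dir / "MEMORY.md").write_text(INDEX)

# Usage sketch:
# bootstrap_memory(Path.home() / ".claude" / "projects" / "<your-project-hash>" / "memory")
```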

The difference is subtle at first — Claude's initial assessment and question framing will reflect what it knows about you. Over time, as you add feedback and project memories, the difference becomes substantial.

Key Takeaway

The auto-memory system is a file-based layer at ~/.claude/projects/<hash>/memory/. MEMORY.md is the session-loaded index — keep it under 200 lines. Four memory types: user (who you are), feedback (corrections and validated approaches), project (ongoing context and decisions), reference (where to find external information). Good memories are non-derivable from the current codebase, session-independent facts. Bad memories are code patterns, current task state, and anything git log can tell you. Verify memories against current state before acting on them. Remove stale memories aggressively.