Building Custom Skills — Extend the System for Your Domain
Every skill in the Superpowers system started as a specific engineering need. The GRADIENT skill exists because ML systems have failure modes that standard web application patterns do not address. The embedded-systems skill exists because ISR safety rules are not derivable from general programming wisdom. The ORACLE skill exists because starting tasks cold causes a specific, preventable failure.
Your domain has its own specific failure modes. The patterns that come with Superpowers address common domains, but your team's accumulated knowledge — the deployment checklist for your infrastructure, the database patterns that work for your data model, the API design conventions your organization has standardized on — none of that is in any off-the-shelf skill.
This lesson teaches you to build skills that capture that knowledge.
Anatomy of a SKILL.md File
Every skill has the same structure. Learn it once and you can read, modify, or write any skill in the system.
Frontmatter (required):
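A minimal sketch of the frontmatter, using the three fields described below; the skill name and wording are illustrative, not a required convention:

```yaml
---
name: kubernetes-deployment
description: >
  Kubernetes deployment patterns for our staging/production pipeline:
  health checks, rollout strategies, rollback procedures, resource limits.
type: domain
---
```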
The name field is the identifier used in MEMORY.md references and skill invocations.
The description field is the most important field for discoverability. Claude reads this to determine whether the skill is relevant to the current task. A vague description ("general programming patterns") never triggers. A specific description ("Kubernetes deployment patterns for our staging/production pipeline: health checks, rollout strategies, rollback procedures") triggers when the task matches.
The type field tells Claude how to use the skill:
- process: Workflow skills — how to approach a type of work (TDD, debugging, task-intake)
- domain: Domain expertise skills — what to know when working in a specific area (ml-engineering, ai-engineering)
- implementation: Execution skills — how to execute a specific type of implementation (brainstorming, writing-plans)
Skill body structure:
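One plausible skeleton, assembled from the sections this lesson covers (entry point assessment, content sections, red flags, integration table, final checklist); section names are illustrative:

```markdown
# Kubernetes Deployment

## Overview
One paragraph: what this skill covers and when to invoke it.

## Before Applying Any Pattern (entry point)
The assessment question that routes to exactly one section.

## Health Checks
Patterns for this sub-area; heavy examples extracted to patterns/ files.

## Red Flags
Symptoms that indicate a pattern is being misapplied.

## Integration
Table: which other skills this one hands off to, and when.

## Final Checklist
- [ ] Verification items completed before the work is considered done
```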
Rigid vs Flexible Skills
Rigid skills must be followed exactly. They enforce discipline. Deviation is not adaptation — it is violation. Examples: TDD, systematic-debugging, SENTINEL, task-intake. The value of these skills comes from their consistent application. An engineer who does TDD sometimes provides worse guarantees than one who never does TDD — sporadic discipline creates false confidence.
How to write a rigid skill: use imperative language. "Write the test before the implementation. Do not skip this step. The implementation does not exist until the test exists and fails." No qualifiers. No "when appropriate."
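In a TDD-style skill, for instance, the imperative register might read:

```markdown
## The Rule

Write the test before the implementation. Do not skip this step.
The implementation does not exist until the test exists and fails.
Run the test. Watch it fail. Only then write the implementation.
```

Note what is absent: no "consider", no "when appropriate", no escape hatches.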
Flexible skills are expertise modules that adapt to context. Examples: ml-engineering, ai-engineering, frontend-excellence. They provide patterns for a domain, but which patterns apply depends on what you are building. You do not apply all of ml-engineering when you are at the monitoring stage — you apply the monitoring section.
How to write a flexible skill: start with an assessment that determines which section applies. Use conditional language for recommendations. "If your primary constraint is latency, use this approach. If cost is the constraint, use this one instead."
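The conditional register, sketched for a hypothetical model-serving section (the specific recommendations are illustrative):

```markdown
## Choosing a Serving Strategy

- If your primary constraint is latency: keep the model resident in memory
  and batch requests only when your p99 budget allows it.
- If your primary constraint is cost: use autoscaling with scale-to-zero
  and accept the cold-start latency that comes with it.
```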
The skill itself should tell you which type it is. If the skill does not explicitly say, check: does the value come from always following it, or from knowing when and how to apply it?
Entry Point Patterns
The entry point is the first section after the overview. It determines what happens in the first 5 minutes of skill invocation. Good entry points prevent skill dumps — the failure where a skill loads and immediately provides all its patterns regardless of what the task actually needs.
Pattern 1: Stage Assessment (for lifecycle skills)
Use this when the skill covers a process that has multiple stages and different patterns apply at each stage.
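A sketch of this pattern for an ml-engineering-style skill; the stage names and section titles are assumptions for illustration:

```markdown
## Before Applying Any Pattern

Which stage is this project at?

1. Data collection → read only "Data Quality"
2. Training → read only "Training Discipline"
3. Evaluation → read only "Evaluation Design"
4. Production → read only "Monitoring"

Answer first. Do not load the other sections.
```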
Pattern 2: Type Assessment (for multi-domain skills)
Use this when the skill covers multiple types of work in a domain and the patterns for each type are distinct.
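A sketch for a hypothetical backend-conventions skill; the work types and section names are illustrative:

```markdown
## What Type of Work Is This?

- New API endpoint → "API Conventions"
- Schema change → "Migration Safety"
- Background job → "Queue Patterns"

Route to exactly one section, then stop.
```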
Pattern 3: Checklist-first (for process skills)
Use this for process skills where prerequisites must be in place before the skill's patterns are applicable.
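A sketch of a checklist-first entry point for a debugging-style skill; the specific prerequisites are illustrative:

```markdown
## Prerequisites (verify before proceeding)

- [ ] A failing reproduction of the bug exists
- [ ] The full error message has been read, not skimmed
- [ ] The last known-good commit is identified

If any box is unchecked, stop and complete it first.
```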
Pattern 4: Constraint question (for optimization skills)
Use this when the skill covers optimization decisions that depend on what you are optimizing for.
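A sketch of the constraint question; the trade-offs shown are illustrative defaults, not prescriptions:

```markdown
## One Question First

What are you optimizing for?

- Latency → prefer precomputation; spend memory to save time.
- Cost → prefer on-demand computation; spend time to save money.
- Simplicity → prefer the boring option; revisit only with profiling data.
```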
Building a Custom Skill: Step by Step
We will build a Kubernetes deployment skill — a realistic example of domain knowledge that belongs in a custom skill rather than a general document.
Step 1: Define the skill boundary
A skill should cover one domain or one process type. Too broad and it becomes a documentation dump. Too narrow and it is not worth the overhead of a skill invocation.
For the Kubernetes deployment skill:
- In scope: deployment configuration, health checks, rollout strategies, rollback procedures, resource limits, namespace conventions
- Out of scope: Kubernetes installation, cluster management, network policy configuration (these are separate domains)
Write this boundary down before you write a single line of the skill.
Step 2: Identify the entry point
What question should Claude ask before applying any Kubernetes patterns?
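For this skill, a type assessment is the natural fit. A sketch, with workload types and section names chosen for illustration:

```markdown
## Before Applying Any Pattern

What kind of deployment is this?

1. Stateless service → "Rolling Update Patterns"
2. Stateful workload → "StatefulSet Patterns"
3. Batch job → "Job Patterns"

Answer before reading further. Each type has different health-check
and rollout requirements.
```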
Step 3: Write each section
For each type, write the patterns:
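For example, the stateless-service section might carry a health-check pattern like the following. The port, paths, and thresholds are assumptions to tune per service:

```yaml
# Illustrative health-check pattern for a stateless service.
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 15
  failureThreshold: 3     # restart after ~45s of consecutive failures
readinessProbe:
  httpGet:
    path: /ready
    port: 8080
  periodSeconds: 5
  failureThreshold: 2     # pull from the load balancer quickly
```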
Step 4: Define rollback triggers
Auto-rollback trigger: if the new version's error rate exceeds 5% within 10 minutes of deployment, revert immediately. Do not wait for human observation.
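Note that a plain Kubernetes Deployment does not revert itself on error rate; an automated trigger like the one above typically requires extra tooling, such as a progressive-delivery controller driven by your metrics. The manual path the skill should document is standard kubectl (the deployment name and namespace are illustrative):

```shell
# Revert to the previous ReplicaSet revision
kubectl rollout undo deployment/myapp -n staging

# Verify the rollback completed before closing the incident
kubectl rollout status deployment/myapp -n staging
```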
Step 5: Extract heavy patterns to a patterns/ directory
If any section contains code or configuration examples longer than ~20 lines, extract to a patterns/ file:
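The resulting layout, with illustrative file names:

```
kubernetes-deployment/
  SKILL.md
  patterns/
    rolling-update.yaml
    statefulset.yaml
    canary-rollout.yaml
```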
Reference patterns in the skill:
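A sketch of the reference style, keeping the skill body lean while pointing at the extracted file (file name illustrative):

```markdown
## Rolling Update Patterns

For the full manifest, see [patterns/rolling-update.yaml](patterns/rolling-update.yaml).
Key decisions in that file: maxSurge, maxUnavailable, and the readiness gate.
```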
Step 6: Write the final checklist
Every skill ends with a verification gate:
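For the Kubernetes deployment skill, the gate might look like this; the items are illustrative of the in-scope areas from Step 1:

```markdown
## Final Checklist

- [ ] Resource requests and limits set on every container
- [ ] Liveness and readiness probes configured
- [ ] Rollback procedure tested in staging
- [ ] Namespace and labels follow team conventions
```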
Step 7: Register the skill
Place the skill directory at:
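The exact location depends on your setup; in a typical Claude Code install, personal skills live under `~/.claude/skills/`, which would put this skill at:

```
~/.claude/skills/kubernetes-deployment/SKILL.md
```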
The skill discovery mechanism automatically finds SKILL.md files in subdirectories. No registration command needed — it is available at the next Claude Code session start.
Testing Your Custom Skill
After writing the skill, test three things:
Discoverability: Start a new session. Describe a task that should trigger the skill. Does Claude invoke it? If not, the description field in the frontmatter is not specific enough. Rewrite it to be more specific about the exact scenarios where it applies.
Entry point: Invoke the skill with a task that matches each type in the assessment. Does it route to the right section? Does it avoid loading irrelevant sections?
Completeness: Use the skill for a real task. What did it miss? What checklist item was absent? What red flag pattern occurred that the skill did not warn about? Add what you learn back to the skill.
A custom skill is never finished. It improves with every use.
Sharing Skills with Your Team
The skills directory can be kept under version control. Once your custom skills live in a shared repository, there are two ways to distribute them:
Publish as a Claude plugin:
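A sketch of one plausible plugin layout, assuming the Claude Code plugin format (a `.claude-plugin/plugin.json` manifest alongside a `skills/` directory); verify against the current plugin documentation before publishing:

```
team-skills-plugin/
  .claude-plugin/
    plugin.json        # name, description, version
  skills/
    kubernetes-deployment/
      SKILL.md
```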
Or share the directory path and have team members symlink it into their Superpowers skills directory:
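For example (the repository URL and local paths are illustrative; adjust to your setup):

```shell
# Clone the shared repo once, then symlink the skill into the local skills directory
git clone git@github.com:your-org/team-skills.git ~/team-skills
ln -s ~/team-skills/kubernetes-deployment ~/.claude/skills/kubernetes-deployment
```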
Team-shared skills compound the knowledge benefit. Every team member's discoveries and corrections improve the shared skill. The accumulated knowledge of the team becomes accessible to every new hire.
Key Takeaway
Custom skills capture domain knowledge that Superpowers does not provide out of the box. Every skill follows the same structure: frontmatter (name, description, type), entry point assessment, content sections, red flags, integration table, and final checklist. Rigid skills enforce discipline consistently; flexible skills adapt to context. The entry point prevents skill dumps — always ask before applying patterns. Extract heavy code examples to patterns/ subdirectories to keep the main skill lean. Test discoverability, routing, and completeness. Share team skills as a plugin or symlinked directory — team knowledge compounds when it is accessible to everyone.