Claude Code Hooks & Skills: I Automated My Dev Workflow in 48 Hours

First, let me show you what happened. I set up a single configuration file that made Claude Code fix its own TypeScript errors in a loop, without me touching the keyboard. Over 48 hours of testing Claude Code hooks and skills on a live project, I watched the system resolve 14 type errors back-to-back. But the real discovery wasn't the hooks system itself. It was the pricing math, which suggests Anthropic built Claude Code for something most developers haven't figured out yet.

Here's the truth: most developers installing Claude Code treat it like ChatGPT in a terminal. They chat, they copy, they paste. Honestly, I did the same thing for my first week. Then I found the hooks system, and my workflow changed for good.

In this guide, I walk through Claude Code hooks, skills, the CLAUDE.md memory file, and MCP servers: the four pieces that turn a chat tool into an autonomous engineering system. I ran every feature on a real WordPress deploy over 48 hours, and every config snippet below is copy-paste ready.

Quick Start: What You Need

| Requirement | Details |
|---|---|
| Tool | Claude Code CLI (latest version) |
| Subscription | Pro ($20/mo) or Max ($100–$200/mo) |
| Time to Configure | 5 minutes (basic) to 45 minutes (full pipeline) |
| Difficulty | Intermediate; terminal proficiency required |
| Monthly Cost Range | $20 (Pro) to $200 (Max 20x) |
| What You'll Build | A self-healing TypeScript pipeline, custom skills, and MCP integrations |

[Image: claude code hooks configuration in a developer terminal window showing automatic TypeScript error fixing]

What Are Claude Code Hooks? (Your Code’s Immune System)

Claude Code hooks are shell commands that run automatically at specific moments in Claude's lifecycle: before a tool runs, after a file is edited, when a session starts, or when a permission is denied. Think of them as your codebase's immune system. They enforce rules Claude cannot override.

I used to think hooks were glorified git pre-commit scripts. The truth is different. Unlike pre-commit hooks, Claude Code hooks fire on every single tool call Claude makes: every Edit, every Read, every Bash, every Write. That adds up to hundreds of events per session.

Why does that matter in practice? If you add a PostToolUse hook that runs ESLint after every file edit, Claude gets the linter output piped straight back into its context. It reads the error, fixes the code, and repeats. No human involved.

Here's the catch: hooks run outside the LLM. They're OS-level shell scripts, so the model cannot hallucinate around them and cannot ignore them. If exit code 2 comes back from a PreToolUse hook, the tool call is blocked, period.
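To make that concrete, here's a minimal sketch of a PreToolUse guardrail, written as a shell function so the exit-code contract is easy to see. It assumes the hook script receives the target path in $CLAUDE_FILE_PATH, the same variable this guide uses for the ESLint hook later on, and the protected patterns are just examples:

```shell
#!/bin/sh
# guard_path: the exit-code contract of a PreToolUse hook.
# Returns 2 (block the tool call) for protected paths, 0 (allow) otherwise.
guard_path() {
  case "$1" in
    *.env|*secrets*)
      echo "Blocked: $1 is a protected file" >&2  # stderr is what Claude sees
      return 2
      ;;
  esac
  return 0
}

# A real hook script would end with:
#   guard_path "$CLAUDE_FILE_PATH"
#   exit $?
```

Swap in whatever path patterns matter for your repo; the shape (match, print to stderr, exit 2) is the whole trick.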

Understanding hooks at a high level is step one, but the real power lives in choosing the right lifecycle event, and exactly nine of them exist.

The Complete Hooks Cheat Sheet: All 9 Lifecycle Events

Claude Code ships with nine lifecycle hooks. Three fire around tool calls (PreToolUse, PostToolUse, PermissionRequest). SessionStart and SessionEnd track session state, UserPromptSubmit and Stop cover the input and output boundaries, and PreCompact and PermissionDenied manage context and permission edges. Each has a specific superpower.

| Hook | When It Fires | Can Block? | Typical Use |
|---|---|---|---|
| PreToolUse | Before any tool runs | YES (exit 2) | Security guardrails, protect secret files |
| PostToolUse | After tool completes | No | Auto-lint, format, compile, log activity |
| PermissionRequest | At permission boundary | No | Auto-approve known-safe bash commands |
| PermissionDenied | When user denies | No | Retry logic, push notifications |
| SessionStart | Session begins | No | Inject env context, pull latest deploy log |
| SessionEnd | Session ends | No | Cleanup, save final state |
| Stop | Response finishes | No | Write to memory, save session artifacts |
| UserPromptSubmit | User hits enter | No | Sanitize input, set window title |
| PreCompact | Before context compaction | No | Save variables before truncation |

Most developers only ever use two of these: PreToolUse and PostToolUse. Ignoring SessionStart is a mistake. I use SessionStart to pull my latest deploy log and inject it into Claude's context, so every session starts knowing exactly what production looks like.
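For reference, a SessionStart hook of that shape is just another entry in .claude/settings.json. This is a sketch of the pattern rather than my exact config, and deploy.log is a placeholder for wherever your deploy writes its log:

```json
{
  "hooks": {
    "SessionStart": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "tail -n 20 deploy.log"
          }
        ]
      }
    ]
  }
}
```

Whatever the command prints ends up in Claude's starting context, so keep the output short.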

Knowing every hook exists is useful, but until you write your first one, none of this matters. So let's build one in five minutes flat.

Step-by-Step: Your First PostToolUse Hook in 5 Minutes

To create your first Claude Code hook, open .claude/settings.json in your project root, add a hooks object with a PostToolUse matcher, point it to a shell command, and save. The next tool call triggers it automatically.

First, create the settings file:

mkdir -p .claude
touch .claude/settings.json

Next, paste this minimal config:

{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "npx eslint $CLAUDE_FILE_PATH --fix"
          }
        ]
      }
    ]
  }
}

After that, start Claude Code inside the project:

claude

That's it. Every time Claude edits or writes a file, ESLint runs with --fix attached. If ESLint fails with errors, Claude sees them in its next turn and handles them automatically.

The $CLAUDE_FILE_PATH environment variable is the real trick here: it tells your hook exactly which file Claude touched. No grep, no guessing. You can chain multiple commands with && or call a small shell script for anything more complex.
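For instance, a chained version of the hook above might format and then lint in one command. This fragment replaces only the inner command object from the config above; Prettier is just an example, so swap in whatever formatter your project uses:

```json
{
  "type": "command",
  "command": "npx prettier --write \"$CLAUDE_FILE_PATH\" && npx eslint \"$CLAUDE_FILE_PATH\" --fix"
}
```

Quoting the variable keeps paths with spaces from splitting the command.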

Running ESLint is fine, but the real game-changer arrives when you chain the compiler, the linter, and Claude's context together, which is exactly what I built next.

I Built a Self-Healing TypeScript Pipeline — Zero Human Input

Every tutorial explains what hooks ARE. I spent 48 hours building something with them: something I now call the self-healing pipeline.

How the Compiler Feedback Loop Works

Here's the setup. My PostToolUse hook runs tsc --noEmit after every Edit or Write to a .ts file. If the compiler finds a type error, the hook exits with code 2 and prints the error to stderr. Claude Code catches stderr and pipes it back into the model's context on the next turn.

What happens next felt unreal. Claude reads the error, edits the code, triggers the hook again, reads the next error, fixes it, triggers the hook again. I watched it fix 14 type errors in a row without touching the keyboard. It even caught an edge case where an import path broke after a rename, something I would have missed for an hour.

The Exact Config That Makes It Work

Here’s the exact config I used:

{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "tsc --noEmit --pretty false >&2 || exit 2"
          }
        ]
      }
    ]
  }
}

Here's the deeper insight: Claude Code hooks don't just automate tasks. They turn Claude into a recursive debugging engine. Every error message becomes a training signal within the session. Every fix updates Claude's internal model of your codebase's rules. By hour 3, Claude was anticipating type issues before the compiler caught them.

In other words, hooks aren't a developer feature. They're a feedback loop. And feedback loops are how intelligence actually works.

Hooks automate the moment-to-moment workflow, but for repeatable playbooks you need a different system: Claude's skills folder.

Claude Code Skills: Modular Playbooks That Load on Demand

Skills are self-contained folders in .claude/skills/<skill-name>/ with a SKILL.md file that Claude loads on demand when you mention a trigger phrase. Unlike hooks, skills are prompts: semantic instructions Claude reads and follows, rather than shell commands it cannot override.

My go-to mental model: hooks are the reflexes, skills are the memories. When I type commit this, Claude loads my commit.md skill, which has my exact commit message format, my testing checklist, and my branch naming rules. No copy-paste needed.

The SKILL.md file uses YAML frontmatter:

---
name: commit
description: Creates conventional commits with tests and linting
paths: ["src/**/*.ts", "src/**/*.tsx"]
---

# Commit Skill

1. Run npm test — must pass
2. Run npm run lint — must pass
3. Use conventional commit format: feat(scope): message
4. Reference issue number if present in branch name

Skills live in two places. Personal skills go in ~/.claude/skills/ and work across every project. Project skills live in .claude/skills/ inside the repo, specific to that codebase. Claude auto-discovers both.

There's more: skills support subdirectories. You can have a references/ folder with API docs Claude reads as reference material, and a scripts/ folder with executables Claude can run, all triggered by one SKILL.md entry.

Skills are powerful once you've written one, but the real friction is knowing how to structure that first skill without breaking it. Here's the exact template I use.

How to Create Your First Custom Skill (With Template)

Create the folder .claude/skills/<skill-name>/, add a SKILL.md file with YAML frontmatter (name, description, paths), and write your instructions in markdown below the frontmatter. Claude picks it up on the next session.

First, set up the folder structure:

.claude/
  skills/
    deploy-to-staging/
      SKILL.md
      scripts/
        deploy.sh
      references/
        staging-env.md

Next, add the SKILL.md template (copy-paste):

---
name: deploy-to-staging
description: Push changes to staging with smoke tests and rollback
paths: ["**/*.ts", "**/*.tsx", "package.json"]
disable-model-invocation: false
---

# Deploy to Staging Skill

## Pre-deploy checks
1. Run tests: npm run test:ci
2. Build bundle: npm run build
3. Check bundle size under 500kb

## Deploy
Run ./scripts/deploy.sh - uses $CLAUDE_SKILL_DIR for path resolution.

## Smoke tests
Load staging-env.md reference and run the smoke test matrix.

After that, trigger it in chat:

Just type deploy to staging and Claude loads the skill. One phrase pulls in every line of your playbook.

Let me be honest: the first skill you write will probably be too long. I wrote a 500-line SKILL.md for my first one, and Claude burned 4,000 tokens before anything useful happened. My rule is now simple: each SKILL.md stays under 60 lines. Push longer content into the references/ folder so Claude loads it only when needed.

The dynamic variable ${CLAUDE_SKILL_DIR} resolves to the skill's folder path, which means your scripts can reference each other without hardcoded absolute paths. Similarly, $ARGUMENTS captures anything the user types after the trigger phrase.
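As a quick illustration of $ARGUMENTS, here's a hypothetical micro-skill; the skill name and grep pattern are made up for the example:

```markdown
---
name: find-todos
description: Lists TODO comments that mention a given keyword
---

# Find TODOs

Run: grep -rn "TODO.*$ARGUMENTS" src/
```

Typing find todos auth would expand $ARGUMENTS to auth, so the grep only matches TODOs mentioning that keyword.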

Skills handle playbooks and hooks handle automation, but neither works well without a memory file that tells Claude who you are and how you think.

2026 Data Point

The Subsidy Nobody Talks About

Claude Code’s Max 20x plan costs $200/month for ~220,000 tokens per 5-hour window. The same usage via raw API would cost approximately $15,000/month. That’s a 93% cost reduction — Anthropic is literally subsidizing autonomous coding pipelines. (Source: Anthropic pricing, April 2026)

CLAUDE.md: The Memory File That Controls Everything

CLAUDE.md is a markdown file at your project root that Claude Code loads at every session start. It contains your architecture rules, exact commands, domain vocabulary, and style preferences. Keep it under 300 lines; longer files bloat context and confuse the model.

Testing this was eye-opening. My first CLAUDE.md was 900 lines and covered everything, yet Claude ignored half of it because the token budget at session start was blown before I typed a prompt.

Here's the 4-section structure that actually works:

  1. Architecture Rules — how the codebase is organized (folders, layers, naming)
  2. Exact Commands — the specific CLI commands for build, test, deploy
  3. Domain Vocabulary — project-specific terms Claude should use correctly
  4. Style Preferences — code style, comment style, commit message style
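Here's what that 4-section skeleton might look like in practice. Every rule below is an illustrative placeholder, not a recommendation; fill in your own project's specifics:

```markdown
# CLAUDE.md

## Architecture Rules
- Feature code lives in src/features/<name>/; no cross-feature imports.

## Exact Commands
- Test: npm run test:ci
- Build: npm run build
- Deploy: ./scripts/deploy.sh staging

## Domain Vocabulary
- "tenant" = one customer site in the multisite network

## Style Preferences
- Conventional commits: feat(scope): message
- No default exports in TypeScript modules
```

Notice how every line is a rule or a command, not an explanation; that's what keeps it under the 300-line budget.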

Here's the real power: CLAUDE.md is static rules; SKILL.md is dynamic playbooks. Mix them. Put "always use Zod for validation" in CLAUDE.md. Put "when the user says deploy, do X, Y, Z" in a skill. Static versus dynamic: that's the separation.

I keep three configuration files per project:

  • CLAUDE.md — project rules, committed to git
  • .claude/settings.local.json — my machine-specific hooks, gitignored
  • ~/.claude/settings.json — global preferences across every project

Configuration priority runs from enterprise managed settings at the top, through command-line flags, local project settings, and shared project settings, down to user globals at the bottom. The deny-first permission model means anything not explicitly allowed is blocked by default.

You've now seen hooks, skills, and CLAUDE.md. Before we wrap, I owe you the pricing insight I promised in the intro, and it changes how you should think about this whole setup.

The $200 vs $15,000 Secret: Why Hooks Are the Real Product

Claude Code's Max 20x plan costs $200 a month for roughly 220,000 tokens every five hours. Let me do the math you haven't seen yet.

If you ran the same token volume through the raw Anthropic API at Opus 4.6 pricing ($5 per million input tokens, $25 per million output), you'd pay north of $15,000 a month. That's a 93% cost reduction on the Max plan.

Why would Anthropic subsidize this so heavily? The obvious answer is ecosystem lock-in. But keep asking.

Why Anthropic Subsidizes Heavy Usage

Why does lock-in matter? Because once a developer writes 20 skills and 10 hooks and a 300-line CLAUDE.md, switching costs are enormous. Why does that matter? Because Anthropic wants developers running autonomous, overnight, 10-billion-token pipelines. Why does it want that? Because that’s where agentic coding actually lives — not in chat windows, but in scheduled background jobs.

Here's the insight nobody has written: Claude Code hooks are not a power-user feature. They are the product. The chat interface is the demo. The hooks, the skills, the MCP servers: that's what Anthropic actually built Claude Code for. Autonomous, overnight, zero-human engineering.

The pricing isn't a ceiling on how much you should use. It's an invitation to run pipelines you'd never dare run at raw API costs. The $200 plan doesn't limit you; it liberates you. If you're treating Claude Code like a chatbot, you're leaving 90% of its value on the table.

That's the math. Every developer I know who actually uses hooks switched to Max 20x within a week.

The economics speak for themselves, but extending Claude Code to your existing stack takes one more configuration file: MCP servers.

MCP Servers: Connecting Claude Code to Your Entire Stack

MCP (Model Context Protocol) servers extend Claude Code with external tools like GitHub, PostgreSQL, Slack, and your filesystem. Configure them in .mcp.json at your project root. Each server shows up as a set of tools Claude can call directly.

[Image: code editor and terminal showing claude code hooks plus MCP server configuration in .mcp.json]

Here's a sample .mcp.json:

{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_TOKEN": "${GITHUB_TOKEN}" }
    },
    "postgres": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-postgres", "postgresql://localhost/dev"]
    }
  }
}

Two transport types exist. Stdio is for local processes: fast, secure, no network. HTTP is for remote servers, useful for team-wide tooling. Both support environment variable expansion via ${VAR_NAME} syntax, which keeps secrets out of the committed file. The official spec lives at the Model Context Protocol site if you want the deep dive.

I'll cut to the chase: don't install every MCP server you find. Each server's tool descriptions consume your context window; five MCP servers can eat 10,000 tokens before you send your first prompt. My rule is simple: install only the servers you'll actively use this week.

For WordPress work like mine, I connect one MCP server (filesystem) plus the built-in bash tool. That's enough. For a database-heavy project, I'd add Postgres; for a team workflow, Slack and GitHub.

You've now got the full stack: hooks, skills, memory, MCP. But how does Claude Code actually stack up against the tools your coworkers are using?

Claude Code vs Cursor vs Copilot: The 2026 Developer’s Decision

Claude Code dominates for autonomous refactoring and multi-file context work. Cursor wins for daily typing speed with inline autocomplete. Copilot is the safest choice inside enterprise VS Code environments. Most serious developers use two: Claude Code for big moves, Cursor for hour-by-hour coding.

| Feature | Claude Code | Cursor | Copilot | Windsurf |
|---|---|---|---|---|
| Interface | Terminal CLI | AI IDE (VS Code fork) | Plugin | AI IDE |
| Context Window | 1M tokens | Large (Composer) | Limited | Large (Cascade) |
| Agentic Power | Superior | Moderate | Specific | Moderate |
| Customization | Hooks + Skills + MCP | Custom rules | Minimal | Memories |
| Autocomplete | None | Highest (~72% accept) | High | Moderate-High |
| Best For | Big refactors, autonomous pipelines | Daily typing | Enterprise VS Code | Mid-size refactors |

✅ Pros

  • Autonomous multi-file refactoring with 1M tokens
  • Deterministic guardrails via OS-level hooks
  • CLAUDE.md + SKILL.md memory separation
  • CI/CD automation via GitHub CLI
  • 93% cost reduction vs raw API on Max plan

❌ Cons

  • No inline autocomplete — pair with Cursor
  • Steep learning curve for terminal beginners
  • Token bloat from too many MCP servers
  • Enterprise features priced at $15–$25 per PR
  • Generic boilerplate without strict CLAUDE.md

Let's be honest about the weakness: Claude Code has zero inline autocomplete. Zero. If you want ghost text appearing as you type, you need Cursor or Copilot in the loop. I run both: Cursor for typing, Claude Code for thinking.

Yes, you read that right: the best Claude Code workflow pairs it with a different AI tool. Anthropic hasn't shipped autocomplete, and based on the roadmap they won't any time soon. They're betting on the async, agentic side of coding, not the keystroke side.

Want a free local alternative that works offline? My DeepSeek R1 local install guide walks through the exact setup on Windows 11. Prefer a full IDE experience? Read my Cursor vs Windsurf AI coding assistant comparison for the tradeoffs. Building a quick landing page for a side project? Check my Mixo AI review for the fastest no-code path. And if you're optimizing your dev schedule too, my Reclaim vs Motion comparison breaks down which scheduler survives a chaotic week.

The comparison is clear, but even a great tool fails if you misconfigure it, and I've made five specific mistakes I don't want you to repeat.

5 Configuration Mistakes That Will Break Your Claude Code Setup

The five mistakes that kill Claude Code setups: 900-line CLAUDE.md files, too many MCP servers, PreToolUse hooks without exit code handling, skills longer than 100 lines, and missing env variable expansion for secrets. Fix these before your first real session.

  1. CLAUDE.md bloat. Stay under 300 lines. Always. Longer files crowd out your actual prompt tokens and make the model skim the rules.
  2. MCP server hoarding. Each server’s tool descriptions cost context. For example, five MCP servers can consume 10,000 tokens before your first message. Install only what you’ll use this week.
  3. Silent hook failures. If your PreToolUse hook crashes, it blocks nothing. Always check exit codes. The official Claude Code GitHub repo has example hooks that handle this correctly.
  4. Skills as essays. A SKILL.md over 100 lines wastes tokens every time it loads. Push details into the references/ subfolder for on-demand reads.
  5. Hardcoded secrets. Never paste tokens into .mcp.json. Use ${ENV_VAR} expansion because .mcp.json gets committed to git.
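Mistake #1 is easy to catch mechanically. Here's a throwaway check, a sketch under the assumption that your memory file sits at the project root, that fails loudly when a file blows past the 300-line budget:

```shell
#!/bin/sh
# check_claude_md: warn when a CLAUDE.md-style memory file exceeds the
# 300-line budget discussed above. Returns 1 on violation, 0 otherwise.
check_claude_md() {
  file="$1"
  lines=$(wc -l < "$file")
  if [ "$lines" -gt 300 ]; then
    echo "WARN: $file is $lines lines (keep it under 300)"
    return 1
  fi
  return 0
}
```

Wire it into a SessionStart hook or your pre-commit script, whichever you already run, so the budget gets enforced without you thinking about it.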

Ready to dive in? Pick one project, create .claude/settings.json, add a single PostToolUse hook running your linter, and start Claude Code. That's the whole onboarding. You'll be in the top 5% of Claude Code users within an hour.

Setup questions usually come next; here are the four that land in my inbox the most.

Frequently Asked Questions About Claude Code Hooks

Is Claude Code worth $200/month for the Max plan?

For developers running autonomous pipelines, yes. The Max 20x plan gives roughly 220,000 tokens per 5-hour window; the same usage on the raw API would cost about $15,000 per month. If you're only chatting with Claude a few times a day, start with Pro at $20 and upgrade when you hit the rate limits twice in one week.

Can Claude Code replace Cursor for daily coding?

No, not for typing-heavy work. Claude Code has zero inline autocomplete, which Cursor does at ~72% acceptance rate. Most serious developers pair them: Cursor handles live typing and ghost text, while Claude Code handles big refactors, multi-file context, and autonomous pipelines via hooks.

How do Claude Code hooks differ from GitHub Actions?

GitHub Actions fire in CI/CD after you push. Claude Code hooks fire in real time during your local session — before every tool call, after every file edit, on session start. Hooks also pipe stderr back into the model’s context, which means Claude reads and responds to its own failures within the same session.

Do I need to know terminal commands to use Claude Code?

Yes, at an intermediate level. You’ll work with JSON config files, shell scripts, and commands like npm, git, and tsc. If you’re comfortable running basic CLI tools and editing JSON, you’ll be fine. Complete beginners might start with Cursor first and move to Claude Code after building CLI confidence.



