9 Enforcement Hooks That Catch Mistakes Before They Ship
- Enforcement hooks run automatic checks before or after every tool call in Claude Code, catching mistakes without manual review
- PreToolUse hooks block bad output before it lands in your files; PostToolUse hooks validate what just happened
- A brand compliance hook can prevent wrong colors, banned words, and style violations from ever entering your codebase
- Spacing and accessibility hooks enforce design system rules that humans forget under pressure
- 9 hooks running together create a quality layer that works while you sleep
The Problem With Manual Code Review
Every developer has shipped something they should have caught. A hardcoded color that does not match the brand. An image missing alt text. A spacing value that breaks the design system grid. A secret that snuck into a config file.
Code review catches some of this. Linting catches more. But when you run a solo operation or a small team, things fall through. You are moving fast, juggling 6 projects, and the part of your brain that checks for #fff instead of #F5F5F7 is asleep by 3pm.
I spent months catching the same mistakes manually. Same brand violations. Same accessibility gaps. Same spacing inconsistencies. Every time, I would fix it, tell myself to remember next time, and forget next time.
Then I started building enforcement hooks for Claude Code. And the mistakes stopped.
Not reduced. Stopped.
What Enforcement Hooks Actually Are
An enforcement hook is a script that runs automatically at a specific point in Claude Code's workflow. You do not call it. You do not remember to run it. It fires every single time, on every single operation, without exception.
There are two trigger points:
PreToolUse fires before Claude Code writes, edits, or executes something. If the hook fails, the operation gets blocked. The bad code never touches your file. Think of it as a bouncer at the door.
PostToolUse fires after an operation completes. It validates what just happened and can flag issues for immediate correction. Think of it as a quality inspector on the assembly line.
The key insight is passive enforcement. You are not running a linter manually. You are not remembering to check the brand guide. The hook runs on every relevant operation whether you are paying attention or not. At 9am when you are sharp. At midnight when you are not.
This is fundamentally different from documentation or team agreements. Documentation says "please do X." A hook says "you cannot ship without X."
Hook 1: Brand Compliance
This was the first hook I built, and it paid for itself in a week.
The problem: RAXXO Studios has a strict brand system. Background color #1f1f21. Text color #F5F5F7 (never #fff or #ffffff). Lime accent #e3fc02. Font family Outfit. No em dashes anywhere in content. Certain words are banned from all copy.
Before the hook, I would catch brand violations in review maybe 80% of the time. The other 20% shipped and had to be fixed later. Across 12 repositories, that adds up to hours of cleanup every week.
The hook is simple in concept. It intercepts every Edit and Write operation, scans the content for violations, and blocks the operation if it finds any. Here is a simplified example of what the check looks like:
```shell
# Check for the wrong text color
if echo "$content" | grep -qiE '#fff\b|#ffffff'; then
  echo "BLOCKED: Use #F5F5F7, not #fff" >&2
  exit 2  # exit code 2 is a blocking error in Claude Code; stderr goes back to the model
fi
```
That is the basic idea. The production version checks for about 15 different brand rules. Every file, every edit, every time. I have not shipped a #fff in months.
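The snippet above assumes `$content` is already populated. In a real hook, Claude Code delivers the operation details as JSON on the hook's stdin. Here is a sketch of that wiring, assuming `jq` is installed and that the payload exposes the proposed content under `tool_input.content` (Write) or `tool_input.new_string` (Edit), per the Claude Code hook input format; the function names are mine:

```shell
#!/usr/bin/env bash
# Pull the proposed file content out of the hook's stdin payload.
# Field names follow the Claude Code hook input format; check your
# version's docs if the extraction comes back empty.
extract_content() {
  jq -r '.tool_input.content // .tool_input.new_string // ""'
}

# Return non-zero when the content violates the color rule.
check_brand() {
  if printf '%s\n' "$1" | grep -qiE '#fff\b|#ffffff\b'; then
    echo "BLOCKED: Use #F5F5F7, not #fff" >&2
    return 1
  fi
  return 0
}

# Entry point of the hook script:
# content=$(extract_content)
# check_brand "$content" || exit 2
```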
Hook 2: Accessibility Validation
Accessibility is one of those things everyone agrees is important and almost nobody enforces consistently. I am guilty of this too. Before hooks, my accessibility compliance was spotty at best.
The accessibility hook checks for the basics that should never ship broken:
- Images without alt text
- Links without descriptive text (no "click here")
- Color contrast issues against the dark background
- Missing ARIA labels on interactive elements
- Form inputs without associated labels
It runs on every HTML and Liquid file edit. If you add an image without alt text, the operation blocks. Not a warning. A block. You have to add the alt text to proceed.
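As a concrete example, the alt-text rule can be a few lines of grep. This is a single-line-tag sketch (a production check also has to handle `<img>` tags that wrap across lines), and the function name is mine:

```shell
# Block content that contains an <img> tag with no alt attribute.
check_img_alt() {
  local offenders
  # Extract every <img ...> tag, then keep only those missing alt=
  offenders=$(printf '%s\n' "$1" | grep -oiE '<img[^>]*>' | grep -viE 'alt=' || true)
  if [ -n "$offenders" ]; then
    echo "BLOCKED: <img> without alt text:" >&2
    printf '%s\n' "$offenders" >&2
    return 1
  fi
  return 0
}
```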
This changed my accessibility compliance from "I try to remember" to "it is physically impossible to forget." The difference matters. Especially when you are shipping across multiple storefronts and web apps.
Hook 3: Spacing System Enforcement
I use a strict spacing scale: 0, 2, 4, 6, 8, 12, 16, 20, 24, 32, 48, 64px. No arbitrary values. No 15px margins. No 37px padding. Every spacing value must come from the scale.
This sounds trivial until you realize how often AI-generated code uses random spacing values. Claude might suggest padding: 18px because it looks reasonable. And it does look reasonable. But it breaks the system, and broken systems compound into visual inconsistency across an entire product.
The spacing hook scans for CSS spacing properties (margin, padding, gap) and validates that the values match the approved scale. Arbitrary values get blocked with a message showing the closest valid option.
BLOCKED: padding: 18px is not on the spacing scale.
Closest valid values: 16px or 20px
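A sketch of that check, using the scale from above. It validates a single extracted pixel value and reports one nearest option; the production version scans margin, padding, and gap declarations and can report both neighbors on a tie:

```shell
# Approved spacing scale, in px.
SCALE="0 2 4 6 8 12 16 20 24 32 48 64"

# Allow values on the scale; otherwise block and suggest the nearest one.
check_spacing() {
  local px=$1 best=0 best_diff=9999 v diff
  for v in $SCALE; do
    [ "$v" -eq "$px" ] && return 0   # on the scale: allow
    diff=$(( v > px ? v - px : px - v ))
    if [ "$diff" -lt "$best_diff" ]; then
      best_diff=$diff
      best=$v
    fi
  done
  echo "BLOCKED: ${px}px is not on the spacing scale. Closest valid value: ${best}px" >&2
  return 1
}
```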
This single hook eliminated an entire category of design debt from my projects. No more "why does this section feel slightly off" debugging sessions.
The PreToolUse Pattern in Detail
PreToolUse hooks work as gatekeepers. They inspect the proposed operation and either allow it or block it. The flow looks like this:
1. Claude Code prepares an edit (writing to a file, running a command)
2. The hook receives the operation details (file path, content, tool name)
3. The hook runs its checks against the content
4. If checks pass: operation proceeds normally
5. If checks fail: operation is blocked, Claude gets the error message
The critical design decision is that blocked operations never touch the filesystem. The bad code does not exist for even a moment. This is cleaner than linting after the fact, because there is no "fix it later" step. It just does not ship.
For hooks that need to be smart about context (like knowing which file type they are checking), the hook receives metadata about the operation. You can check file extensions, target directories, or even the tool being used (Edit vs Write vs Bash).
I configure mine in Claude Code's settings file, mapped to specific tool triggers. A brand check on Edit and Write operations. A secrets scanner on Bash commands. Each hook fires only when relevant, so there is no performance overhead on unrelated operations.
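For reference, that settings wiring looks roughly like this in `.claude/settings.json`. The exact schema can differ between Claude Code versions, and the script paths here are hypothetical:

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command", "command": "bash .claude/hooks/brand-check.sh" }
        ]
      },
      {
        "matcher": "Bash",
        "hooks": [
          { "type": "command", "command": "bash .claude/hooks/secrets-scan.sh" }
        ]
      }
    ]
  }
}
```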
The PostToolUse Pattern
PostToolUse hooks solve a different problem. Sometimes you cannot validate the operation before it happens. You need to see the result first.
Example: after a blog post is published via the Shopify API, a PostToolUse hook automatically rebuilds the blog index. It does not block anything. It triggers a follow-up action based on what just happened.
Another example: after running a build command, a PostToolUse hook can scan the output for warnings and flag them immediately. Instead of scrolling through 200 lines of build output hoping to spot issues, the hook pulls out what matters.
The pattern is:
1. Claude Code completes an operation
2. The hook receives the result (output, file changes, exit codes)
3. The hook runs its analysis
4. If issues found: flag them for immediate attention
5. If clean: proceed silently
PostToolUse hooks are less aggressive than PreToolUse hooks. They inform rather than block. But they catch things that would otherwise require a human to notice, and humans are not great at noticing things consistently.
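The build-output case above can be sketched like this, assuming the hook already has the command's captured output in hand. It only prints; it never blocks:

```shell
# PostToolUse-style check: surface warning lines from build output.
flag_warnings() {
  local warnings
  warnings=$(printf '%s\n' "$1" | grep -iE 'warning|deprecated' || true)
  if [ -n "$warnings" ]; then
    echo "Build warnings detected:"
    printf '%s\n' "$warnings"
  fi
  return 0   # informational only: never block
}
```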
What the Other 6 Hooks Cover
I mentioned 3 hooks in detail. The full set of 9 covers more ground:
SEO validation catches missing meta descriptions, titles that are too long, and pages without structured data. Every page edit gets checked.
Security scanning blocks API keys, tokens, and credentials from appearing in any file. This runs on every operation, every file type, no exceptions. One leaked key can cost thousands.
Dead code detection flags unused imports, unreachable code blocks, and orphaned functions after refactoring operations.
Value protection prevents proprietary business logic, pricing details, and product internals from appearing in public-facing content. This is critical when you blog about your own tools.
Content voice checks that blog content uses first person singular (I, not we), avoids banned AI-sounding words, and follows the house style guide.
Dependency auditing validates that new package installations do not introduce known vulnerabilities or unnecessary bloat.
Each hook is a focused script. Most are under 50 lines. They do one thing, do it on every relevant operation, and either block or flag. No configuration UI. No dashboard. Just scripts that run.
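As an illustration of how small these scripts stay, here is the shape of the secrets check. The patterns are examples only (an AWS-style access key ID, a GitHub-style token, a PEM header); a real scanner needs a much longer pattern list plus entropy heuristics:

```shell
# Block content that looks like it contains a credential.
check_secrets() {
  if printf '%s\n' "$1" | grep -qE 'AKIA[0-9A-Z]{16}|ghp_[A-Za-z0-9]{36}|BEGIN (RSA |EC )?PRIVATE KEY'; then
    echo "BLOCKED: possible credential detected" >&2
    return 1
  fi
  return 0
}
```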
Why 9 Hooks Beat 90 Lint Rules
Linting is good. I use linters. But linters have two problems that hooks solve.
First, linters run when you remember to run them. Or when CI runs them, which means the bad code already exists in a commit. Hooks run before the code exists. The feedback loop is instant, not delayed by a push-and-wait cycle.
Second, linters are generic. They know about JavaScript patterns or CSS best practices, but they do not know about your brand system, your spacing scale, or your business rules. Hooks are custom. They encode your specific standards, not general best practices.
The combination is powerful. Linters handle the universal stuff (syntax errors, common anti-patterns). Hooks handle the specific stuff (your brand, your accessibility requirements, your security rules). Together, they create a quality surface that catches issues at every level.
After 3 months of running 9 hooks across 12 projects, here is what changed:
- Brand violations in shipped code: dropped from roughly 8 per week to 0
- Accessibility issues caught post-deploy: dropped from 3-4 per month to 0
- Spacing inconsistencies: eliminated entirely
- Time spent on manual review: cut by roughly 60%
- Security incidents from leaked credentials: 0 (was 0 before too, but now I sleep better)
The hooks do not make me a better developer. They make it harder to be a worse one at 11pm on a Tuesday.
Building Your First Hook
If you want to start with enforcement hooks, start with one. Pick your most common mistake. The thing you catch in review every other week. Build a hook that blocks it.
The structure is simple:
1. Decide the trigger (PreToolUse for blocking, PostToolUse for flagging)
2. Decide the scope (which tools, which file types)
3. Write the check (usually grep or a short Python script)
4. Return a blocking exit status for PreToolUse (Claude Code treats exit code 2 as a blocking error and feeds stderr back to Claude) or print a warning for PostToolUse
5. Wire it into your Claude Code settings
Start with something small. A check for one banned word. A check for one wrong color value. Get it running. Feel the relief of knowing that one specific mistake can never ship again.
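Here is what that first hook can look like end to end. The banned word is a placeholder, and for simplicity this greps the raw stdin payload instead of parsing out individual JSON fields:

```shell
# Minimal PreToolUse hook: block one banned word anywhere in the payload.
check_banned() {
  if printf '%s\n' "$1" | grep -qi 'synergy'; then
    echo "BLOCKED: banned word 'synergy' found" >&2
    return 1
  fi
  return 0
}

# The hook script body:
# payload=$(cat)
# check_banned "$payload" || exit 2
```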
Then add the second hook. Then the third. Each one removes another category of mistakes from your output. After 9, you have a quality layer that runs 24/7 without human attention.
That is the entire point. Not perfection. Just consistent enforcement of the standards you already know matter.
Get Production-Ready Hooks
Building hooks from scratch is straightforward, but getting the edge cases right takes iteration: file type detection, performance on large files, clear error messages that actually help Claude fix the issue, and handling of partial matches and false positives.
The Claude Blueprint includes 3 production-ready hooks (brand compliance, accessibility, and security scanning) that work out of the box. Pre-tested across 12 live projects, documented, and ready to drop into any Claude Code workspace. 33 EUR.
This article contains affiliate links. If you sign up through them, I may earn a small commission at no extra cost to you. (Ad)