# Context Engineering: CLAUDE.md and .cursorrules
[75% of engineers](https://www.faros.ai/blog/context-engineering-for-developers) now use AI tools daily. Most organizations see no measurable productivity gains from them. Faros AI sums it up: "Clever prompts make for impressive demos. Engineered context makes for shippable software." When your AI coding agent enters a session without knowing your naming conventions, architecture patterns, or which directories to never touch, every session starts cold. That overhead compounds across every developer on every task.
## What Context Engineering Actually Is
Context engineering has replaced prompt engineering as the skill that separates productive AI coding assistants from expensive autocomplete. [Martin Fowler defines it](https://martinfowler.com/articles/exploring-gen-ai/context-engineering-coding-agents.html) as "curating what the model sees so that you get a better result." In practice, that means treating your agent's information environment as infrastructure: architecting everything the model can access: project conventions, git history, team standards, tool definitions, and documentation.
The distinction from prompt engineering matters. Prompt engineering is a one-off act: write an instruction, get a response. Context engineering is a system: build the foundation that makes every session reliably productive, not just the occasional lucky one.
Two tools dominate this space right now: **CLAUDE.md** for Claude Code users and **Cursor Rules** for Cursor users. Both serve the same function: a permanent, project-scoped instruction set that loads automatically at the start of every session. You configure it once; every subsequent session inherits it. You can debate whether calling this "engineering" is accurate for what amounts to editing a Markdown file. Meanwhile, the developers who figured it out months ago are shipping on first attempts.
## How CLAUDE.md and Cursor Rules Work
CLAUDE.md is a Markdown file at the root of your project. Every time Claude Code opens a session in that directory, its contents are injected into context automatically: think of it as an onboarding document for a developer with perfect recall and exact instruction-following.
Claude Code provides [four distinct context mechanisms](https://martinfowler.com/articles/exploring-gen-ai/context-engineering-coding-agents.html), each with a different loading behavior:
- **CLAUDE.md**: always loaded, for project-wide universal conventions
- **Rules**: path-scoped guidance (e.g., rules that apply only to `*.test.ts` files)
- **Skills**: lazy-loaded resources triggered by the agent when a task matches
- **Hooks**: deterministic scripts that run at lifecycle events like file save or commit
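As a sketch of the last mechanism: a hook that runs your linter after every file edit can be declared in `.claude/settings.json`. The matcher and command below are illustrative, not prescriptive; adjust them to your project's tooling:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command", "command": "npm run lint" }
        ]
      }
    ]
  }
}
```

Because hooks are deterministic scripts rather than instructions, the linter runs every time, regardless of how the model interprets its context.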
Cursor uses a parallel architecture. The original `.cursorrules` file is deprecated; the replacement is individual `.mdc` files inside `.cursor/rules/`, each scoped to a specific concern or file glob. One rule per concern keeps configuration focused and easier to maintain across a team.
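A hedged sketch of one such rule file, a hypothetical `.cursor/rules/testing.mdc` scoped to test files (the frontmatter fields follow Cursor's rule format; the rule content itself is illustrative):

```markdown
---
description: Testing conventions
globs: "**/*.test.ts"
alwaysApply: false
---

- Use vitest, never jest.
- Colocate tests next to their source files.
- Every new service function gets a unit test before the task is done.
```

The `globs` field is what makes the rule path-scoped: it loads only when the agent touches a matching file, keeping the always-on context small.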
Both tools share a key finding from [Faros AI's research](https://www.faros.ai/blog/context-engineering-for-developers): context ordering matters. Models attend more to content at the beginning and end of the context window. Critical constraints belong at the top; immediate task context and examples go at the end. Instructions buried in the middle of a 3,000-token CLAUDE.md get deprioritized.
There is also a counterintuitive ceiling on context size. [Stanford and UC Berkeley research](https://www.faros.ai/blog/context-engineering-for-developers) found model correctness drops around 32,000 tokens even for models advertising larger windows, the "lost-in-the-middle" effect. Keep CLAUDE.md under 500 tokens (roughly 400 words). For injecting large codebases selectively, [Repomix](https://github.com/yamadashy/repomix) lets you pack specific directories into structured prompts rather than dumping entire repositories at once. The goal is precision, not volume.
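As an illustration of selective packing, assuming Repomix's CLI flags, you might include only the directories relevant to the task rather than the whole repository (the paths here are hypothetical):

```shell
# Pack only the service layer and its architecture doc into one structured file
npx repomix --include "src/services/**,docs/architecture.md" --output context.xml
```

A focused pack like this keeps the payload well under the lost-in-the-middle threshold while still giving the model the code that matters.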
## Building Your CLAUDE.md in 15 Minutes
Start with five sections. Keep each under 15 lines.
**1. Project identity.** Name, purpose, and tech stack in three bullet points. The agent needs to know whether it is working on a TypeScript Next.js app or a Python FastAPI service before it modifies anything.
**2. Architecture conventions.** Where do things live? One paragraph. "Components go in `src/components/`, utilities in `src/lib/`, tests colocated as `*.test.ts` files adjacent to their source."
**3. Coding standards.** What your linter does not catch: naming conventions, type rules, patterns to prefer or avoid. "Named exports only. No `any` types; use `unknown` and narrow. Prefer composition over inheritance."
**4. Off-limits without explicit instruction.** List files or directories the agent should never modify unprompted. Migrations, generated code, vendored libraries. This section alone prevents the most costly agent errors.
**5. Testing requirements.** "All new functions need a unit test. Use vitest. Run `npm test` before marking any task complete."
A minimal example for a Node.js API project:
```markdown
# Project: Payments API
**Stack:** Node.js 22, TypeScript 5.7, Postgres 16, Prisma ORM
## Architecture
- API routes in `src/routes/`, one file per resource
- Business logic in `src/services/`, never in route handlers
- All DB queries through Prisma; no raw SQL
## Standards
- Named exports only. No `any`; use `unknown` and narrow.
- Env vars via `process.env`, validated with Zod at startup.
## Off-limits
- `prisma/migrations/`: never edit directly
- `src/generated/`: overwritten on next build
## Before finishing any task
- Run `npm test` and confirm all pass
- Run `npm run lint` and fix all errors
```
Under 25 lines. An agent reading this produces dramatically fewer surprises than one starting cold.
For Cursor, apply the same logic across three `.mdc` files: one for general conventions, one for testing rules, one for framework-specific guidance. Each file stays under 100 lines and targets a specific concern.
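Under those assumptions, the resulting layout might look like this (the filenames are illustrative):

```
.cursor/rules/
├── general.mdc     # naming, exports, error handling
├── testing.mdc     # test conventions, scoped to *.test.ts
└── framework.mdc   # framework-specific guidance
```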
To validate that your CLAUDE.md is working, run the same task twice: once in a project without the file and once with it. First-attempt accuracy is the clearest signal. If the agent correctly follows your naming conventions without being told in the prompt, the context file is doing its job.
## The Limits to Know About
Context engineering improves reliability; it does not guarantee outcomes. [Martin Fowler notes](https://martinfowler.com/articles/exploring-gen-ai/context-engineering-coding-agents.html) that results still depend on LLM interpretation, requiring probabilistic thinking rather than certainty. Human review stays essential regardless of context quality.
Context files go stale. A CLAUDE.md written for an Express codebase that was later migrated to Fastify actively misleads the agent. This is worse than no file at all. A one-line note in your PR template ("Did you update CLAUDE.md?") costs ten seconds and prevents hours of confused agent sessions.
Finally, good context does not fix vague task descriptions. [Faros AI found](https://www.faros.ai/blog/context-engineering-for-developers) that most engineering tickets lack sufficient clarity for reliable agent execution. Context quality and task specification quality reinforce each other. Neither substitutes for the other. The distinction matters: "engineered context makes for shippable software" only if the task tells the agent what to ship.
## Key Takeaway
Create a `CLAUDE.md` file in your project root today with five sections: project identity, architecture conventions, coding standards, off-limits files, and test requirements. Keep it under 30 lines. Run your next Claude Code session and observe the difference in first-attempt accuracy. The model does not change; what it knows about your project does.
