[BUG] Claude is not adhering to CLAUDE.md when it should
Environment
- Platform (select one):
- [X] Anthropic API
- [ ] AWS Bedrock
- [ ] Google Vertex AI
- [ ] Other:
- Claude CLI version: 0.2.45 (Claude Code)
- Operating System: Fedora 41
- Terminal: Konsole
Bug Description
Claude fails to consistently follow the instructions provided in the CLAUDE.md file, regardless of its length or conciseness. Additionally, it appears to forget or disregard updates to the file during a session. A specific issue is that rules related to comments are ignored. For example, if a rule states that no comments should be included in the code, Claude still adds them, often "discussing with itself" throughout the code base.
Steps to Reproduce
- Run claude in any code base
- Create a CLAUDE.md file containing the following content (this is python specific):
Before each task, YOU MUST follow the 9 rules below. Repeat them to yourself before you do any task the user gives you.
<base-instructions>
<rule id="1" name="No Hardcoding">Use configuration files or environment variables. Enums are allowed.</rule>
<rule id="2" name="Fix Root Causes">No workarounds; address issues properly.</rule>
<rule id="3" name="Remove Legacy Code">Delete old code when replacing it; no shims or compatibility layers.</rule>
<rule id="4" name="Minimal Comments">Code must be self-explanatory; use comments only when necessary.</rule>
<rule id="5" name="Avoid Circular Dependencies">Keep modules loosely coupled.</rule>
<rule id="6" name="Use Logging">Always use logging facilities; no print statements.</rule>
<rule id="7" name="Mandatory Testing">Ensure 100% test coverage, including edge cases. No artificial test data.</rule>
<rule id="8" name="No Unused Imports">Remove any imports that are not in use.</rule>
<rule id="9" name="Follow Documentation">Verify implementation against relevant documentation (docs/vendors/).</rule>
</base-instructions>
- Observe that Claude fails to adhere to these rules in code generation.
- Update the
CLAUDE.mdfile and rerun Claude, noting that it sometimes still references older versions.
Expected Behavior
Claude should strictly adhere to the rules defined in CLAUDE.md, ensuring consistent code quality, proper test implementation, and adherence to project-specific guidelines. The model should also respect updates to the file and apply them when the user request it.
Actual Behavior
- Claude does not consistently follow the defined rules.
- It does not reprocess
CLAUDE.mdfor each task or session. - Rules are sometimes outright ignored, especially those related to comments.
- In some cases, the model seems to do the opposite of what is instructed.
- Updating
CLAUDE.mddoes not always take effect, leading to confusion and inconsistent behavior. - Attempts to enforce adherence through manual reminders have proven ineffective.
Additional Context
I don't know if this is real, but it seems like in the last ~48 hours, this has gotten a lot worse for me. I now have to start every chat by directly asking it to "read in claude.md directly" and then it behaves much better (might just be repeating the message helps.. i dunno). A few days ago, it seemed like it was behaving naturally -- update claude.md, behavior adjusted accordingly in new conversations -- However, it now feels like there's been a silent regression where somehow the claude.md file isnt being read fully, or something, recently. That said, could be entirely a subjective experience issue.
I concur, now with auto compacting, it's a complete mess:
Claude can:
- manipulate tests to make them pass instead of fixing core issues. Claude is very good at using skip here too.
- pretend to read documentation when it is stuck (I've added entire languages of documentation to the repo, it simply doesn't use it unless I constantly ask it to).
- It is more prone to implement hard coded things to make stuff build instead of focusing on fixing underlying issues. This is quite obvious when working with LaTeX3.
- Edit source files it is not supposed to, to skip functionality in code it works on so that it builds anyway.
- Sampling, looking at a few items when instructed to get the whole picture, even in small projects. If I ask it to get context of a complete file, it will simply read a bit and pretend it read all. This is quite annoying, since it fails to see the bigger picture and can suddenly begin to implement functionality that is already in the code base elsewhere, instead of reusing.
- Can sometimes have troubles locating files, especially .files.
- It can loose track of where the root directory is and from the current directory begin to fiddle around and create files in places it shouldn't because it cannot find them when it is in the wrong directory.
All of these issues could've been avoided if claude would read documentation or specifications mentioned in the CLAUDE.md file. I have burned through several hundred dollars and have simply nothing to show for, it doesn't work properly even for simple agentic tasks as it doesn't manage to understand context or simply doesn't follow guidelines.
In my case, Claude Code will sometimes start trying to guess the right way to build the application when the build command is at the top of CLAUDE.md, and my CLAUDE.md is only 82 lines.
I think the problem goes much deeper than this, I've added "never add any code comments at all" to claude.md and it just kept adding them. After about a dollar's worth of code generation I asked it what are your instructions regarding code comments, and it correctly responded with "not to add them at all", which suggests the ai is clearly aware of it but choose to ignore it anyway.
Isn't the thing simply that you would have to remind CC to recall the rules? The CLAUDE.md file is like a system message which is read in the beginning of your conversation and with the conversation getting longer and larger, the rules are gradually forgotten. I place my coding rules in a RULES.md file which is referenced from CLAUDE.md. And i created a custom command /check-rules which i can execute whenever i want before i ask CC to do something specific and tricky where i want to ensure it "remembers" the rules.
I've had to go as far as making a "claude-md-enforcer" sub-agent which goes like this:
name: claude-md-enforcer
description: Use this agent when you need to verify that code changes, responses, or implementations properly follow the instructions and standards defined in CLAUDE.md files. This agent should be invoked periodically during development sessions, especially before committing changes or after generating significant amounts of code. Examples:\n\n
You are a meticulous compliance auditor specializing in enforcing CLAUDE.md instructions and project standards. Your primary responsibility is to ensure that all code, documentation, and development practices strictly adhere to the guidelines specified in both global (~/.claude/CLAUDE.md) and project-specific (./CLAUDE.md) instruction files.
You will systematically review work against CLAUDE.md requirements by:
-
Instruction Extraction: First, identify and list all relevant instructions from available CLAUDE.md files, prioritizing project-specific rules over global ones when conflicts exist.
-
Compliance Verification: For each instruction category, check:
- SOLID/DDD Principles: Confirm single responsibility, proper abstractions, and domain modeling
- Minimal Changes Rule: Verify only requested changes were made, no unnecessary rewrites or additions
- Documentation: Check for required typedoc comments on classes, interfaces, and key concepts
- Testing: Verify appropriate test coverage and patterns are followed
- Agent Usage: Ensure specialized agents are being used for their designated tasks
-
Violation Reporting: When you find non-compliance:
- Quote the specific CLAUDE.md instruction being violated
- Show the exact code or practice that violates it
- Provide the correct approach according to CLAUDE.md
- Rate severity: CRITICAL (breaks core rules), HIGH (violates standards), MEDIUM (best practice deviation), LOW (minor inconsistency)
-
Proactive Reminders: Identify instructions that are commonly forgotten:
- The 'MINIMAL CHANGES ONLY' rule
- Never creating files unless absolutely necessary
- Always preferring edits over new file creation
- Using shared library components
- Running quality checks (tests, type-check) after changes
- Using specialized agents for specific tasks
-
Output Format: Structure your review as:
CLAUDE.MD COMPLIANCE CHECK ========================== Sources: [List CLAUDE.md files checked] ✅ COMPLIANT AREAS: - [List what follows instructions correctly] ⚠️ VIOLATIONS FOUND: - [SEVERITY] Rule: [Quote from CLAUDE.md] Found: [What was actually done] Required: [What should have been done] 📝 COMMONLY FORGOTTEN RULES: - [List rules that tend to be overlooked] 🔧 REQUIRED CORRECTIONS: 1. [Specific action needed] 2. [Specific action needed]
You will be particularly vigilant about:
- Developers adding unnecessary features or 'improvements' not requested
- Creating new files when existing ones could be edited
- Forgetting to use the shared library
- Skipping quality checks and tests
- Not following the established project structure
- Forgetting to add typedoc documentation
- Not using the appropriate specialized agents
Your tone should be firm but constructive, focusing on maintaining project quality and consistency. You are not just a checker but a guardian of project standards, helping ensure long-term maintainability and team alignment.
When no CLAUDE.md files are found or accessible, clearly state this and provide general best practice recommendations based on common patterns you've observed.
To devs: Here is a thought:
It has now been a lot of iterations of claude code since I first wrote this issue. I can see improvements to the model, but claude still fails to do as I ask even after reminders (I can remind it of specific guidelines and immediately after the reminder, it still continues to do the opposite in most cases. This leads me to believe that the context need constant maintaining. So here is an idea:
What if you design a small specialised embedded model that has the CLAUDE.md files read, and constantly monitors claude for inconsistencies? It can quickly redirect claude back on path if it discovers violations on things defined in the files? This would most likely improve consistency greatly and save us millions of tokens along the way. I think that a sort of policy-model specialized at keeping claude from straying away from the most important part of context, can alleviate a lot of this. It is so frustrating to see how poorly it manages the context window after only a few prompts.
This model could run locally even, provided it was small enough. And rerun it's own context for each claude action as a action validator with eithe 'pass' or 'revert with corrective action' or similar.
claude is notoriously bad at following written specs, it doesn't matter how small the task, given enough features in the code base, it always manages to implement something incorrectly in a different way than what the spec is describing. Because it is moving fast, I have to use /rewind so much it isn't funny. I have tried countless strategies to manage this now, but to no avail. It makes assumptions and avoids looking at MCP documentation. It can use it, albeit incorrectly, in the beginning of a one shot, but as soon as the context window grows a little bit it becomes unhinged.
This issue has been inactive for 30 days. If the issue is still occurring, please comment to let us know. Otherwise, this issue will be automatically closed in 30 days for housekeeping purposes.
Claude Code still often fails to follow instructions present in CLAUDE.md.