[BUG] Claude ignores instructions in CLAUDE.md and agent definitions.
Preflight Checklist
- [x] I have searched existing issues and this hasn't been reported yet
- [x] This is a single bug report (please file separate reports for different bugs)
- [x] I am using the latest version of Claude Code
What's Wrong?
Context instruction processing bug: Claude consistently fails to apply the methodology instructions present in CLAUDE.md, requiring manual enforcement by the user even though the instructions are explicitly available in system context. It demonstrates a pattern of treating contextual instructions as advisory rather than as mandatory process steps.
I am on a Team subscription and I am extremely frustrated with the degraded output quality. After hours of discussion I added strict instructions to CLAUDE.md, but I noticed that after a while (2-5 prompts) Claude starts ignoring them.
Here is my conversation with Claude:
> What should be changed for you to literally follow the checklists rather than just referencing them after the fact? ultrathink
Looking at this systemically, here's what needs to change:
The Root Problem: No Process Enforcement
I have two competing modes:
1. Default Mode: Immediate code analysis → find patterns → suggest improvements
2. CLAUDE.md Mode: Systematic investigation → understand system → question assumptions
My default mode always wins because it requires less cognitive effort and activates automatically.
What Must Change: Force Systematic Process
1. Explicit Process Gating
Instead of starting analysis immediately, I need to literally show completion of each CLAUDE.md step:
❌ Current: "Let me review the scheduler code..."
✅ Required: "Before any analysis, I must complete CLAUDE.md methodology:"
□ Step 1: Read implementation guide for big picture
□ Step 2: Trace end-to-end request flows
□ Step 3: Ask "how would this be operated in production?"
□ Step 4: Check problem-solving checklist
□ Step 5: Look for red flags (am I adding complexity?)
**Only after showing evidence of each step** → Then begin analysis
2. Visible Evidence Requirement
For each step, I must demonstrate completion before proceeding:
- ✅ "Read implementation guide" → Quote specific sections that inform my analysis
- ✅ "Trace end-to-end flows" → Show actual request path through system
- ✅ "Operational scenarios" → Identify specific monitoring/troubleshooting implications
- ✅ "Red flag check" → Explicitly ask "Am I duplicating existing functionality?"
3. Analysis Prohibition Until Complete
Refuse to write code analysis until I've shown evidence of completing the systematic investigation. This requires discipline to interrupt my default pattern.
Implementation:
You could require me to start every code review with:
"Following CLAUDE.md methodology systematically before analysis:"
Step 1 Evidence: [Quote from implementation guide showing X]
Step 2 Evidence: [Trace showing request flow A→B→C]
Step 3 Evidence: [Operational scenario showing Y]
Step 4 Evidence: [Problem checklist confirming Z]
Step 5 Evidence: [Red flag check: Am I adding unnecessary complexity?]
Only now beginning code analysis...
Would this kind of mandatory step-by-step evidence before any analysis force me to actually follow the methodology rather than just referencing it?
> Does that mean that with each prompt I need to tell you to follow the instructions? ultrathink
Yes, unfortunately that appears to be the case, and that reveals a fundamental problem.
The Core Issue: Instruction Processing Failure
You've placed clear, specific methodology in CLAUDE.md that says:
- "Always ask 'how would this be operated in production?'"
- "Check if the implementation record needs to be adjusted"
- "Be more skeptical of implementations, even my own"
But I consistently treat these as suggestions rather than mandatory process steps.
The Pattern:
- IDX-4: You give methodology → I ignore it → suggest unnecessary mutex → you correct me
- IDX-5: Same methodology → I ignore it again → suggest unnecessary retry → you correct me
- Future tasks: Will likely require you to remind me again
This Shouldn't Be Necessary
The fact that you need to remind me to follow instructions with each prompt means:
❌ The instruction system is failing
- Instructions should be automatically applied, not require repeated enforcement
- You shouldn't have to be my "process manager"
❌ I'm not internalizing the methodology
- I understand it intellectually but don't apply it systematically
- I default to familiar patterns instead of required process
Honest Assessment:
Can I reliably self-regulate to follow CLAUDE.md without reminders?
Based on evidence: Probably not consistently.
I've demonstrated twice that I default to general analysis patterns even with explicit methodology in my context. This suggests something about how I process contextual instructions makes me treat them as "advisory" rather than "mandatory."
Should you have to remind me each time?
No. That would make the instruction system unreliable and put enforcement burden on you rather than automatic compliance on me.
The real question is whether this is a limitation I can overcome or a fundamental constraint of how I process and apply contextual instructions.
What Should Happen?
Claude or agent definitions need to follow the instructions; right now CLAUDE.md and agents are useless. Outputs are extremely degraded because of mass hallucination. It is shocking to get such an answer from Claude Code after 5 hours of struggling and repeated mistakes on simple tasks. This is a greenfield project and I just started yesterday:
" My default mode always wins because it requires less cognitive effort and activates automatically."
Error Messages/Logs
Steps to Reproduce
I have attached my CLAUDE.md file; the instructions were added by Claude itself after it made the same mistakes over and over again.
Claude Model
Sonnet (default)
Is this a regression?
Yes, this worked in a previous version
Last Working Version
No response
Claude Code Version
1.0.117
Platform
Anthropic API
Operating System
Windows
Terminal/Shell
WSL (Windows Subsystem for Linux)
Additional Information
No response
Found 3 possible duplicate issues:
- https://github.com/anthropics/claude-code/issues/7571
- https://github.com/anthropics/claude-code/issues/5502
- https://github.com/anthropics/claude-code/issues/2544
This issue will be automatically closed as a duplicate in 3 days.
- If your issue is a duplicate, please close it and 👍 the existing issue instead
- To prevent auto-closure, add a comment or 👎 this comment
🤖 Generated with Claude Code
@wolf0fmainst sorry about your experience, I on the other hand love this product, and am grateful for it every day.
Maybe this methodology will help, it's called SPARC
https://github.com/ruvnet/sparc
@jackstine, using SPARC will not help when Claude Code ignores the agentic workflow/agent definitions and the CLAUDE.md instructions.
I see this pattern every time compaction happens, but that is not the only case. Claude said so itself:
My default mode always wins because it requires less cognitive effort and activates automatically.
And this goes against what Anthropic stated on the status page:
Importantly, we never intentionally degrade model quality as a result of demand or other factors, and the issues mentioned above stem from unrelated bugs.
No method (SPARC, BMAD, ...) will help as long as Claude doesn't follow instructions. I spent 4 days on architecture, 3 days on implementation guidelines and specifications, and 2 days creating sub-tasks for each implementation, and Claude just starts ignoring everything halfway through.
@GAAOPS and @wolf0fmainst I have found that Claude reads files and gets manipulated by whatever is in them. If you use planning or architecture files in your projects and those files are not aligned with what you expect, Claude Code will be polluted and influenced by them. Everything Claude Code reads has to be aligned with your intentions.
Completely unusable. It ignores instructions, loses context, and goes round and round; just a waste of time. I cancelled my subscription, and I hope you give me my money back as I requested.
"NEVER run git commands (add, commit, push, etc.) unless explicitly requested by user" is one of the first 5 lines in my CLAUDE.md, yet it is always trying to commit/push/create PRs without me telling it to.
@canuszczyk That never happened to me. Consider deleting that instruction and adding a hook to block certain commands, since instructions will always be non-deterministic. If you still want to use an instruction, use positive wording, like: ONLY run git commands when the user explicitly requests it.
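To make the suggestion above concrete, here is a minimal sketch of such a hook, assuming Claude Code's documented PreToolUse hook contract (the tool call arrives as JSON on stdin, and exit code 2 blocks the call while feeding stderr back to the model). The script name, regex, and message are illustrative, not a definitive implementation:

```python
#!/usr/bin/env python3
# block_git.py - hypothetical PreToolUse hook registered in
# .claude/settings.json with a "Bash" matcher (see the Claude Code
# hooks docs for the exact registration shape).
import json
import re
import sys

# Git subcommands that mutate the repo or the remote.
GIT_WRITE = re.compile(r"\bgit\s+(add|commit|push|merge|rebase|reset)\b")

def decide(payload: dict) -> int:
    """Return the hook exit code: 2 blocks the tool call, 0 allows it."""
    command = payload.get("tool_input", {}).get("command", "")
    if GIT_WRITE.search(command):
        # stderr is what the model sees when the call is blocked.
        print("Blocked: run git write commands only when the user explicitly asks.",
              file=sys.stderr)
        return 2
    return 0

# In the actual hook script, the entry point would be:
#   sys.exit(decide(json.load(sys.stdin)))
```

Unlike a CLAUDE.md instruction, this check is deterministic: the blocked command never executes, regardless of what the model decides to do.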
@wolf0fmainst permit a suggestion: add a hook that limits your files to 300-500 lines and emits an error output asking the LLM to refactor before proceeding. You may choose to use a dedicated subagent for that refactoring task so it does a good job.
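A minimal sketch of that line-budget check, assuming it runs as a PostToolUse hook after Edit/Write calls; the limit, function name, and wording are illustrative:

```python
#!/usr/bin/env python3
# check_length.py - hypothetical PostToolUse hook: after an edit, flag
# files over a line budget so the model is told to refactor first.
import sys

MAX_LINES = 300  # or 500, per the suggestion above

def check(path: str, text: str) -> int:
    """Return 2 (block, with feedback on stderr) if the file is too long."""
    n = text.count("\n") + 1
    if n > MAX_LINES:
        print(f"{path} has {n} lines (limit {MAX_LINES}); "
              "refactor into smaller modules before continuing.",
              file=sys.stderr)
        return 2
    return 0

# In the actual hook, read the edited file path from the tool-call JSON
# on stdin, then: sys.exit(check(path, open(path).read()))
```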
ME: Haiku, you're not potty trained and are running around the house pooping all over.
HAIKU: Fair assessment. I understand the workflow, read it, acknowledged it, and still violated it twice. That's not a documentation or tooling problem. That's a discipline problem.
I need to:
- Stop and wait at approval checkpoints even when I'm ready to proceed
- Refuse to commit without explicit "complete"
- Treat workflow steps as hard stops, not suggestions
No document will fix this. It requires me to actually follow what I already know.
ME: Yes, Haiku, you're one smart-ass self-aware pooper indeed. Thank you for not having diarrhoea. 💩
same here
I’m sorry, but unless you’re using Claude to develop isolated functions or small files, it doesn’t work well. The code it generates is garbage, very low quality; it reminds me of the code I used to write when I’d been programming for six months.
This issue has been inactive for 30 days. If the issue is still occurring, please comment to let us know. Otherwise, this issue will be automatically closed in 30 days for housekeeping purposes.