Copilot should check instruction file globs when reading files mid-chat
Type: Bug
I have set up a number of specialized instruction files, but they are not referenced mid-chat despite matching the glob patterns. Ideally, the matching instructions would be provided before the file contents.
Extension version: 0.30.0
VS Code version: Code 1.103.0 (Universal) (e3550cfac4b63ca4eafca7b601f0d2885817fd1f, 2025-08-06T21:40:10.271Z)
OS version: Darwin arm64 24.5.0
Modes:
Remote OS version: Linux x64 6.8.0-71-generic
Remote OS version: Linux x64 4.18.0-553.40.1.el8_10.x86_64
System Info
| Item | Value |
|---|---|
| CPUs | Apple M2 (8 x 2400) |
| GPU Status | 2d_canvas: enabled direct_rendering_display_compositor: disabled_off_ok gpu_compositing: enabled multiple_raster_threads: enabled_on opengl: enabled_on rasterization: enabled raw_draw: disabled_off_ok skia_graphite: enabled_on trees_in_viz: disabled_off video_decode: enabled video_encode: enabled webgl: enabled webgl2: enabled webgpu: enabled webnn: disabled_off |
| Load (avg) | 1, 2, 2 |
| Memory (System) | 24.00GB (0.07GB free) |
| Process Argv | --crash-reporter-id 8afebd5d-2921-4fd0-90cf-a03651ab2ed9 |
| Screen Reader | no |
| VM | 0% |
| Item | Value |
|---|---|
| Remote | SSH: nitrogen |
| OS | Linux x64 6.8.0-71-generic |
| CPUs | AMD Ryzen 9 PRO 8945HS w/ Radeon 780M Graphics (16 x 2424) |
| Memory (System) | 60.61GB (57.64GB free) |
| VM | 0% |
| Item | Value |
|---|---|
| Remote | SSH: dev-tony.ferrum-dev |
| OS | Linux x64 4.18.0-553.40.1.el8_10.x86_64 |
| CPUs | Intel(R) Xeon(R) CPU @ 2.80GHz (2 x 0) |
| Memory (System) | 7.51GB (2.53GB free) |
| VM | 0% |
A/B Experiments
vsliv368cf:30146710
binariesv615:30325510
nativeloc1:31344060
dwcopilot:31170013
6074i472:31201624
dwoutputs:31242946
copilot_t_ci:31333650
e5gg6876:31282496
pythoneinst12:31285622
c7cif404:31314491
996jf627:31283433
pythonrdcb7:31342333
usemplatestapi:31297334
0aa6g176:31307128
747dc170:31275177
aj953862:31281341
pylancequickfixt:31358882
9d2cg352:31346308
convertlamdat:31358880
usemarketplace:31343026
nesew2to5:31336538
agentclaude:31335814
replacestringexc:31350595
nes-set-on:31340697
6abeh943:31336334
yijiwantestdri0626-t:31336930
0927b901:31350571
4gdec884:31348710
45650338:31358607
0cj2b977:31352657
0574c672:31362109
gemagent1cf:31363461
Thanks for creating this issue! We figured it's missing some basic information or in some other way doesn't follow our issue reporting guidelines. Please take the time to review these and update the issue.
For Copilot Issues, be sure to visit our Copilot-specific guidelines page for details on the necessary information.
Happy Coding!
- what mode are you using ('Ask', 'Edit', 'Agent')?
- are the instructions located in the repo or in the user data?
- can you check the request log (https://github.com/microsoft/vscode/wiki/Copilot-Issues#language-model-requests-and-responses) to see whether the list of instructions is provided?
- Agent mode
- they are under `.github/instructions/<name>.instructions.md`
- yes, the list of globs->instruction files and the related pre-text are there in both log entries (before and after the file read that should trigger the glob match). Only the top-level instruction file is in the `<attachment ...>` tags in both cases.
I also noticed that the generated table, with glob patterns not wrapped in code spans, creates weird bold/italic sections that are not always closed. I'm not sure whether this would confuse an LLM or if it's just something the markdown renderer has issues with.
I thought this process would be more mechanical (after a file read, check the path against the globs and update/compose the prompt in code), but it seems you are relying on the LLM to consistently adhere to these instructions?!
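The mechanical flow described above (read a file, check its path against the configured globs, then prepend the matching instructions to the prompt) can be sketched roughly as follows. This is a minimal illustration, not Copilot's actual implementation; the glob-to-instruction-file mapping, the tag names, and the function names are all hypothetical, and simple `fnmatch` patterns stand in for full VS Code glob semantics.

```python
import fnmatch

# Hypothetical mapping of glob patterns to instruction files (illustrative names).
INSTRUCTION_GLOBS = {
    "*.py": ".github/instructions/python.instructions.md",
    "src/*.ts": ".github/instructions/typescript.instructions.md",
}

def instructions_for(path: str) -> list[str]:
    """Return the instruction files whose glob matches the file that was just read."""
    return [md for pattern, md in INSTRUCTION_GLOBS.items()
            if fnmatch.fnmatch(path, pattern)]

def compose_prompt(path: str, contents: str) -> str:
    """Deterministically prepend matching instructions before the file contents."""
    parts = [f"<instructions src={md}>" for md in instructions_for(path)]
    parts.append(f"<attachment path={path}>\n{contents}\n</attachment>")
    return "\n".join(parts)
```

The point of the sketch is that the matching step is plain string work and needs no model cooperation: the instructions land in the prompt whether or not the LLM chooses to follow them.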
The latest LLMs are good at loading instructions on demand. GPT<5 models are not as good.
I also recommend adding a description to the instruction files. This helps a lot.
OK to close? From what you write, everything works as expected from our side. Use the latest models and add descriptions to make this work better.
@aeschli do what you like; I no longer use Copilot because of rationale like this. I've implemented this behavior in my custom coding-agent extension, and it is much better and, more importantly, reliable. Current models might be "good" and the next generation might be better, but they will always be unreliable. As a human, I wrote glob-matched instructions with the intent that they are applied automatically, not for the AI to decide whether they should be.
At a minimum, a place to meet in the middle is to give the end user the choice. One-size-fits-all is never a good answer.