Your Copilot Isn't Dumb — You're Starving It of Context
Our enterprise GitHub Copilot runs Claude Sonnet 4 — not the latest, no option to upgrade. Most teams would complain to procurement and wait six months for an approval cycle that might not change anything. We stopped waiting and gave the model something it was actually missing: context about our codebase.
Three configuration files later, our “outdated” Copilot produces better suggestions than what most teams get from the latest model with zero project context. The difference wasn’t the model. It was what we were feeding it.
The Real Problem Isn’t the Model
If you work at an enterprise, you know the drill. Procurement locks you into a specific AI vendor and model version. Security review takes months. By the time the license is active, the model you evaluated is already a generation behind. I covered exactly this dynamic in our enterprise AI version gap case study — the approval stamp creates a false sense of confidence while your engineers know they’re working with yesterday’s tools.
Here’s what most teams get wrong: they treat Copilot as a black box. Model version comes in, suggestions come out, nothing you can do about the quality. That’s like onboarding a senior contractor, handing them a laptop, pointing at the codebase, and saying “figure it out.” No coding standards doc, no architecture overview, no explanation of why you use getByTestId instead of CSS selectors. Even the best contractor would produce mediocre work in that situation.
Model quality is half the equation. Context is the other half. And unlike the model version, context is entirely within your control — no procurement cycle required.
copilot-instructions.md — Your Project’s AI Readme
This is the highest-impact change you can make. A single file at .github/copilot-instructions.md that Copilot reads on every interaction — completions, chat, agent mode, all of it. Think of it as a README that only the AI reads.
Here’s what ours looks like for a Playwright + Java test automation project:
```markdown
# Project Context
This is a Playwright test automation framework (Java + TestNG).
- Always use `getByTestId()` or role-based locators — never CSS selectors
- Use Page Object Model: one page class per page, locators in constructor
- Test methods must be independent — no shared state between tests

# Tech Stack
Java 17, Playwright 1.48, TestNG 7.9, Maven, Docker for CI execution.
- Thread-safe: use `ThreadLocal<Page>` for parallel execution
- API calls use RestAssured with shared auth token from `BaseTest`
- Config lives in `src/test/resources/config.properties`, never hardcode URLs

# Conventions
- Test names: `shouldDoX_whenY` format
- Assertions: always use Playwright's built-in `assertThat()`, not JUnit asserts
- Max one assertion concept per test — multiple asserts OK if same concept
```

Fifteen lines. Took 20 minutes to write. The impact was immediate.
Before and After
Without this file, Copilot generated suggestions based purely on the open file and whatever patterns it could infer. Here’s what it suggested for a search test before we added the instructions:
```java
// Copilot's suggestion — CSS selectors, no page object
Locator searchBox = page.locator(".header-search .search-input");
searchBox.fill("test query");
page.locator(".search-results .result-item:first-child").click();
assertThat(page.locator(".detail-view h1")).isVisible();
```

After adding copilot-instructions.md, the same prompt produced:

```java
// Copilot's suggestion — testid locators, matches our conventions
Locator searchBox = page.getByTestId("search-input");
searchBox.fill("test query");
page.getByTestId("search-result").first().click();
assertThat(page.getByTestId("detail-title")).isVisible();
```

Same model. Same codebase. The only difference is that Copilot now knows our standards before it starts generating. Without this file, every developer on the team gets different quality suggestions depending on which files they have open. With it, there's a consistent baseline. That consistency matters more than people think — especially when you've got 8 SDETs writing tests across 12 microservices. If you've seen what happens when AI generates tests without proper context, you know exactly how quickly suggestions degrade without guardrails.
Prompt Files — Reusable Team Prompts
Individual copilot-instructions.md handles the project context. But what about the prompts your team runs repeatedly? Every code review, every security check, every test review — someone on your team has figured out the right way to prompt Copilot for these tasks. The problem is that knowledge lives in their head (or worse, their clipboard history).
.prompt.md files in .github/prompts/ solve this. They’re reusable, version-controlled prompts that any team member can invoke with a slash command in Copilot Chat.
Here’s our security review prompt:
```markdown
# Security Review — Test Automation Code
Review this code for security concerns in a test automation context:
- Are credentials hardcoded or properly externalized to config/env?
- Are test data cleanup routines present to avoid PII leaking into logs?
- Do API calls use parameterized endpoints, not string concatenation?
- Are screenshot/video artifacts stored in paths without sensitive data?
- Does the test use its own auth token, not a shared service account?
Flag anything that would fail our enterprise security scan.
```

And our test code review prompt:

```markdown
# Test Code Review
Review these tests against our automation standards:
- Each test must be independently runnable (no dependency on test order)
- Locators must use getByTestId or role-based — flag any CSS selectors
- Assertions must verify behavior, not implementation details
- Waits must use Playwright auto-wait or explicit waitFor — no sleep()
- Page objects must be used for any page with 3+ interactions
Suggest specific fixes for any violations found.
```

Why This Beats Tribal Knowledge
Any team member types /security-review or /test-review in Copilot Chat, and they get a senior-level review prompt without needing to know what to check for. The new SDET who joined last month gets the same review quality as the lead who wrote the prompt. That’s the real value — it’s not about the AI, it’s about distributing your team’s best practices through the AI.
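For context, the slash command name comes from the file name: a file at `.github/prompts/test-review.prompt.md` becomes `/test-review` in chat. A minimal sketch of the file layout, with optional frontmatter (the `description` and `mode` fields are what VS Code supports at the time of writing; check GitHub's docs for your client, and treat the values below as placeholders):

```markdown
---
description: "Review test code against our automation standards"
mode: "ask"
---
Review these tests against our automation standards:
- Each test must be independently runnable
- Flag any CSS selectors or sleep() calls
```

The frontmatter is optional — a plain markdown file with just the prompt body works too.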
Path-Specific Instructions — Context That Follows the File
Here’s where it gets surgical. .instructions.md files in .github/instructions/ let you attach rules to specific file patterns using applyTo glob syntax. Copilot activates these instructions only when working on matching files.
This is the one that eliminated the “Copilot doesn’t understand our test conventions” complaints.
```markdown
---
applyTo: "**/*.test.ts"
---
# Test File Conventions
- Import page objects from `@pages/` — never instantiate locators directly in tests
- Use `test.describe` for grouping, one describe block per user flow
- Test names must complete the sentence: "it should..."
- Use `test.beforeEach` for navigation, never repeat `page.goto()` in each test
- Tag slow tests with `@slow` annotation for CI filtering
- Never use `page.waitForTimeout()` — use `page.waitForSelector()` or auto-wait
```

For backend code, a separate file with different rules:

```markdown
---
applyTo: "**/*.java"
---
# Java Conventions
- Use constructor injection, not field injection with @Autowired
- All public methods need Javadoc with @param and @return
- Use Optional<T> return types instead of null checks
- Log with SLF4J, never System.out.println
```

The applyTo pattern means a developer working on a test file gets test-specific instructions automatically, while a developer working on a Java service class gets backend conventions. No manual switching, no forgetting to tell Copilot "I'm writing a test now." The context follows the file.
Before we set this up, our most common code review comment was “Copilot suggested waitForTimeout again and the dev accepted it.” That comment has dropped to zero in the last two months. Not because the model improved — because the model finally knows we don’t allow waitForTimeout.
What I’d Do Differently
Looking back at the three months since we rolled this out, a few things I’d change:
- Set up `copilot-instructions.md` on day one of any new repo. We added ours to an existing project with 14 months of history. Half the value is preventing bad patterns from forming in the first place. If I start a new project tomorrow, this file gets created before the first test.
- Treat prompt files like team documentation from the start. We discovered our best prompts by accident — someone shared a good one in Slack, someone else modified it, eventually we formalized it. I'd skip the organic discovery phase and hold a 30-minute team session: "What prompts do you run every week? Let's commit them."
- Measure suggestion acceptance rate before and after. We added these files and felt like suggestions got better, but we didn't baseline the acceptance rate beforehand. GitHub's Copilot metrics dashboard shows this data. I'd screenshot the before numbers and compare at 30 and 60 days. Gut feel is good. Data is better when you need to justify the time investment to your manager.
One Thing You Can Do Today
Create a copilot-instructions.md in your repo. It doesn’t need to be perfect — 10 lines covering your locator strategy, your naming conventions, and your tech stack versions. That’s 15 minutes of work that will pay back on every Copilot interaction for every developer on your team.
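If you want a starting point, here's a minimal skeleton — every angle-bracketed value is a placeholder to replace with your own stack and conventions, not something Copilot requires:

```markdown
# Project Context
<one sentence: what this repo is and which framework it uses>

# Tech Stack
<languages, framework versions, build tool, CI runner>

# Conventions
- <locator or selector strategy>
- <test naming format>
- <assertion style>
- <the one thing Copilot keeps getting wrong in your repo>
```

Start with whatever your code reviews flag most often — that last bullet usually pays for the whole file.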
A better model will always help. I’m not arguing otherwise. But context closed more of the quality gap than we expected — and unlike a model upgrade, you can ship it today without waiting for procurement.
Does this work with VS Code and JetBrains IDEs?
Yes. copilot-instructions.md and prompt files work in both VS Code and JetBrains IDEs with the GitHub Copilot plugin. The .instructions.md path-specific files are supported in VS Code and are rolling out to JetBrains. Check GitHub’s docs for the latest IDE compatibility if you’re on JetBrains — the feature set tends to land in VS Code first.
Do these files affect Copilot completions, chat, or both?
copilot-instructions.md affects all Copilot features — inline completions, chat, and agent mode. Prompt files are chat-only, invoked via slash commands. .instructions.md files with applyTo patterns affect inline completions and chat when the matching file is active. For maximum impact, start with copilot-instructions.md since it covers every interaction surface.
Can I use copilot-instructions.md with other AI coding tools?
The file is GitHub Copilot-specific, but the concept transfers directly. Cursor uses .cursorrules, Windsurf uses .windsurfrules, and Claude Code uses CLAUDE.md. The content is nearly identical — project context, conventions, tech stack. We maintain one source document and generate tool-specific files from it. The 15 minutes you spend writing the content works across every tool.
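The "one source document, tool-specific files" step can be as simple as a copy script run in CI or a pre-commit hook. A minimal sketch, assuming the canonical version lives at `docs/ai-context.md` (that path, and the demo-setup lines that make this runnable standalone, are placeholders for your own setup):

```shell
#!/bin/sh
# Sketch: sync one canonical AI-context doc into each tool-specific file.
set -e

SRC="docs/ai-context.md"

# Demo setup so the sketch runs standalone; drop these lines in a real repo.
mkdir -p docs
[ -f "$SRC" ] || printf '# Project Context\nPlaywright + Java + TestNG.\n' > "$SRC"

mkdir -p .github
cp "$SRC" .github/copilot-instructions.md   # GitHub Copilot
cp "$SRC" .cursorrules                      # Cursor
cp "$SRC" .windsurfrules                    # Windsurf
cp "$SRC" CLAUDE.md                         # Claude Code
```

Running this on every merge to main keeps the four copies from drifting, which is the failure mode of maintaining them by hand.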
How often should I update these files?
Update them when your conventions change — new locator strategy, new framework version, new team standards. In practice, that’s roughly once a month for copilot-instructions.md and whenever a new pattern emerges for prompt files. We also review them quarterly alongside our other team documentation. Stale instructions are worse than no instructions, because they’ll push Copilot toward outdated patterns your team has moved past.
Related Posts
I Let AI Write My Test Suite — Here's What It Got Right and Wrong
A hands-on experiment with AI-generated Playwright tests — what LLMs nail, what they miss, and the workflow that actually works.
Our Enterprise Approved AI — And Why It's the Biggest Risk
Enterprises lock teams into outdated AI models for safety. The irony? Older, less capable models produce worse code and create more risk than they prevent.
Passing Inputs to Tests with GitHub Actions: A Fun Guide
Learn how to use GitHub Actions workflow_dispatch inputs to pass platform, test type, and release info to your test automation runs.