
nivethan ariyaratnam

Senior Software Engineer @ Allion Technologies, writing about daily events, interesting findings, and thoughts here

2026-03-21

Prompt vs Skills

Sound familiar?

You probably already have a prompts/ folder somewhere in your repo: battle-tested instructions for writing tests, reviewing PRs, and generating migrations, pasted into Copilot or Claude at the start of every session. It works. But you're the routing layer: you decide which prompt to use, you copy it, you paste it. The agent has no idea that folder exists.

That instinct “I should save this so I don’t retype it” is exactly right. Skills are what you get when you give that saved prompt a metadata contract so the agent can discover and load it autonomously.

What you do today: a prompts/ folder (test-instructions.md, review-checklist.md, deploy-steps.md). You select and attach. Human routing, full context cost every time.

What skills enable: a .skills/ directory (testing/SKILL.md, code-review/SKILL.md, deployment/SKILL.md). The agent discovers and loads. Agent routing, progressive loading.

What is a Prompt

A prompt is a temporary, conversational instruction. You type it, the LLM processes it, you get a response, and then the context resets. Every new session, you re-explain the same procedures from scratch. It's the curl of AI interaction: powerful for one-off requests, wasteful for anything repeatable.

Prompts are reactive. They carry no memory, no structure, no composability. The context cost is paid on every turn, by you, manually.

The lifecycle: you type "Fix the auth bug", the LLM processes it (context consumed), a response is generated, and the context is discarded. Next session, you re-explain everything.

What are Skills

A skill is a folder. It contains a SKILL.md file (YAML frontmatter for metadata, Markdown for instructions) and optional directories for scripts, reference docs, and assets. Think of it as an npm package for procedural knowledge: versioned, portable, composable.
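As a concrete sketch, a minimal skill folder (the name and file contents here are hypothetical) might be laid out like this:

```
testing-standards/
├── SKILL.md       # required: YAML frontmatter + Markdown instructions
├── scripts/       # optional: executable helpers
├── references/    # optional: heavy reference docs, loaded on demand
└── assets/        # optional: templates and other files
```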

Where prompts vanish after each session, skills persist. They work across GitHub Copilot, Claude, Cursor, and any compatible agent. You write the procedure once; the agent loads it whenever relevant.

Prompt: "When you write tests, always use vitest and follow the AAA pattern…" Temporary, re-typed each session, not composable.

Skill: a testing-standards/ folder containing SKILL.md (required) plus optional scripts/, references/, and assets/ directories. Persistent, portable, composable.

How Skills Work

Skills use progressive disclosure, the same pattern behind lazy loading in web apps. At session start, the agent reads only the name and description of each skill (~100 tokens). Cheap. When a task matches, the full SKILL.md body loads into context. Reference files load only when the instructions explicitly call for them.
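The three stages can be sketched in a few lines. This is an illustrative model, not any vendor's actual loader: the skill names, descriptions, and the keyword-overlap matching are stand-ins (real agents match semantically).

```python
# Minimal sketch of progressive disclosure: pay only for what you use.
# Skill names, descriptions, and matching logic are illustrative.

skills = {
    "testing": {
        "description": "Write unit tests with vitest using the AAA pattern",
        "body": "## Instructions\nUse vitest, arrange-act-assert...",
    },
    "deployment": {
        "description": "Deploy services safely with rollback steps",
        "body": "## Instructions\nRun preflight checks...",
    },
}

def session_start(skills):
    # Stage 1: only name + description enter the context (~100 tokens each).
    return {name: s["description"] for name, s in skills.items()}

def activate(skills, task, index):
    # Stage 2: load the full body of skills whose description matches the task.
    # (Keyword overlap stands in for semantic matching here.)
    loaded = {}
    for name, desc in index.items():
        if any(word in task.lower() for word in desc.lower().split()):
            loaded[name] = skills[name]["body"]
    return loaded

index = session_start(skills)
context = activate(skills, "write tests for the auth module", index)
print(sorted(context))  # → ['testing']
```

Stage 3 would follow the same pattern: the body's file references are read only when the instructions call for them.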

Multiple skills can activate simultaneously. If your task spans testing and deployment, both skills load and the agent resolves dependencies at the semantic level. No import statements, no dependency graphs, just Markdown and intent.

Stage 1, Discovery (metadata only): each skill contributes just its name and description, ~100 tokens apiece; an index of four skills (testing, deployment, code-review, docgen) costs ~400 tokens.

Stage 2, Activation (full SKILL.md loads): when a task matches, the frontmatter and instruction body enter the context, ~2,000–5,000 tokens per skill.

Stage 3, Execution (files load on demand): scripts/setup.sh, references/patterns.md, assets/template.ts load only if the instructions reference them; cost varies.

Progressive disclosure: pay only for what you use.

Anatomy of a Skill

The SKILL.md file follows a strict-then-free structure. The YAML frontmatter block is the contract: name (lowercase, hyphen-separated, and it must match the directory name) and description (keyword-rich, up to 1024 characters; this is what the agent uses for routing). Optional fields include license, compatibility, metadata, and allowed-tools.
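The contract is simple enough to lint. A sketch of a validator, encoding only the constraints mentioned above (exact rules vary by agent; the function name is my own):

```python
import re

def validate_frontmatter(name: str, description: str, directory: str) -> list[str]:
    """Check SKILL.md frontmatter against the contract sketched above:
    lowercase hyphenated name matching the directory, and a
    description of at most 1024 characters."""
    errors = []
    if not re.fullmatch(r"[a-z0-9]+(-[a-z0-9]+)*", name):
        errors.append("name must be lowercase words separated by hyphens")
    if name != directory:
        errors.append("name must match the skill's directory name")
    if not description:
        errors.append("description is required: the agent routes on it")
    elif len(description) > 1024:
        errors.append("description exceeds 1024 characters")
    return errors

assert validate_frontmatter("security-auditor", "Audit code for vulns.",
                            "security-auditor") == []
assert validate_frontmatter("Security_Auditor", "x", "security-auditor") != []
```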

Below the frontmatter, unrestricted Markdown. Step-by-step instructions, worked examples, edge case handlers, everything the agent needs to execute the task. Keep it under 500 lines. Heavy reference material goes in separate files.

```markdown
---
name: security-auditor
description: Audit code for security vulns. Read-only.
allowed-tools: Read Grep Glob
---

## When to use
Activate for "security review", "vulnerability scan", "audit"…

## Check For
- Exposed API keys or credentials
- SQL injection vectors
- XSS vulnerabilities
- Insecure dependencies

## Gotchas
- /health returns 200 even when DB is down. Use /ready instead.

## Output Format
Severity | File:line | Description
Never modify files.

## References
Read references/owasp-top-10.md if findings match OWASP categories
```

Region by region: the YAML frontmatter (~100 tokens) handles routing; the trigger keywords in "When to use" improve activation; the procedure steps (~200 tokens) are the core; the gotchas are the highest-value section; the output format defines success criteria; the file references are loaded on demand.

Context Window

Think of the context window as memory with a hard cap. Everything loaded costs tokens. The agent architecture splits content across three layers to minimize waste. Rules files (CLAUDE.md, copilot-instructions.md, AGENTS.md) and MCP tool schemas load at session start because they're always needed. Skills pay the index cost upfront (~100 tokens each) but defer the body until relevant.

Sub-agents and hooks operate outside the main context entirely. A sub-agent gets its own window, useful for parallel, isolated tasks. Hooks are external shell commands: zero tokens, infinite power.

Session start (always in context): the rules file (AGENT.md) in full, MCP server tool schemas, and the skills index with name and description only; the agent determines relevance from that index.

Session use (on demand): the full SKILL.md body (instructions, examples, edge cases), plus reference files when the skill references them. Only matching skills load, at roughly 2K–5K tokens per skill.

Isolated (zero main-context cost): sub-agents run in their own context window, and hooks are shell commands; neither consumes tokens from the main window.

The Mental Model

Deciding between these mechanisms comes down to three questions: Does this instruction need to be always-on? Should the agent decide when it's relevant? Or do you invoke it explicitly? Rules are gravity: they always apply (CLAUDE.md, copilot-instructions.md, AGENTS.md). Skills are tools in a toolbox: the agent grabs the right one. Commands are shortcuts you type. Prompts are one-off conversations.

New instruction? Ask in order: Does it apply every single time? Yes: it's a Rule. Should the agent decide when to load it? Yes: it's a Skill. Do you invoke it explicitly? Yes: it's a Command. Otherwise, it's a Prompt.
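The decision tree above fits in one small function (a sketch; the mechanism names come from the text, the function name is my own):

```python
def choose_mechanism(always_applies: bool, agent_decides: bool,
                     explicitly_invoked: bool) -> str:
    """Route a new instruction to rule / skill / command / prompt,
    following the decision tree above."""
    if always_applies:
        return "rule"       # gravity: loaded at session start
    if agent_decides:
        return "skill"      # agent discovers and loads when relevant
    if explicitly_invoked:
        return "command"    # a shortcut you type
    return "prompt"         # one-off conversation

assert choose_mechanism(True, False, False) == "rule"
assert choose_mechanism(False, True, False) == "skill"
assert choose_mechanism(False, False, True) == "command"
assert choose_mechanism(False, False, False) == "prompt"
```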

Key Takeaway

Skills didn’t end prompt engineering.

You’re still writing prompts. Still being specific, including examples, handling edge cases, testing. What changed is the delivery mechanism. Before skills, prompt engineering was a temporary act, something you performed every time you started a session. Now it’s an infrastructure concern: write it once, version-control it, share it with your team, and let the agent decide when to apply it.

The format is an open standard: portable across agents, composable across domains, progressively loaded to respect your context window. Your investment in writing good instructions is no longer locked to a single session or a single vendor.

Start with one skill. Take whatever prompt you copy-paste most often, put it in a SKILL.md with a good description, and drop it in your skills directory. When you stop thinking about it and just start seeing it work, you’ll understand the shift.

It’s not that the era of prompts is over. It’s that we finally learned how to ship them.

