Sustainable Coding: Why Your AI Prompts Matter More Than You Think

As software engineers, we’ve entered a golden age of productivity. Tools like GitHub Copilot have transformed the way we write code, debug, and refactor. However, this “superpower” comes with a hidden cost that doesn’t show up on your monthly subscription bill.

Every time we hit Enter on a prompt, a massive physical infrastructure springs to life. Data centers hum, cooling systems engage, and electricity surges. While AI feels like “magic” in the cloud, it is fuelled by very real, finite resources: water and electricity.

As the saying goes, “With great power comes great responsibility.” Today, being a senior-level engineer isn’t just about code quality; it’s about resource-conscious engineering.


The Environmental Price of a Prompt

It’s easy to think of a single AI query as negligible. But at scale, the numbers are sobering. Recent data from 2024 and 2025 reveals the physical footprint of our digital assistants:

  • Thirsty Servers: Research indicates that for every 20 to 50 prompts you send to a Large Language Model (LLM), the system “drinks” approximately 500ml of water (the size of a standard water bottle) for cooling.
  • Energy Intensity: A single request to an AI model consumes roughly 10x more electricity than a standard Google search.
  • Carbon Footprint: In 2025 alone, the AI boom released roughly as much CO2 as the entire city of New York.

> Source Reference: According to The Sustainable Agency (2026), global AI-related water demand is projected to exceed the annual water use of entire countries like Denmark by 2027.
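The figures above lend themselves to a back-of-envelope calculation. The sketch below uses the article's rough estimates (≈500 ml of water per 20–50 prompts, ~10x the energy of a web search, with ~0.3 Wh as a commonly cited per-search figure) — these are illustrative constants, not measured values:

```python
# Back-of-envelope footprint estimate using the rough figures cited above.
# The constants are the article's estimates, not measured values.

WATER_ML_PER_PROMPT = 500 / 35   # ~500 ml per ~35 prompts (midpoint of 20-50)
SEARCH_WH = 0.3                  # commonly cited ~0.3 Wh per web search
PROMPT_WH = SEARCH_WH * 10       # "roughly 10x a standard search"

def session_footprint(prompts: int) -> dict:
    """Estimate water (ml) and energy (Wh) for a prompting session."""
    return {
        "water_ml": round(prompts * WATER_ML_PER_PROMPT, 1),
        "energy_wh": round(prompts * PROMPT_WH, 1),
    }

print(session_footprint(100))  # a busy day of prompting
```

Even at these rough numbers, a heavy prompting day adds up to a measurable physical cost.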


Choosing the Right Tool for the Job

One of the biggest contributors to “AI waste” is using a cost-heavy model for a simple task. Using a massive, multi-billion parameter model to explain a simple Regex or fix a typo is like using a sledgehammer to hang a picture frame.

1. Match the Model to the Task

GitHub Copilot and similar tools often allow for different underlying models.

  • Small Language Models (SLMs): For simple text refactoring, documentation updates, or basic unit tests, use smaller, more efficient models. They use 10–100x less energy while providing the same result for narrow tasks.
  • Large Language Models (LLMs): Reserve these for complex architectural decisions, cross-file logic, or debugging deep-seated race conditions.
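There is no standard API for this triage, but the habit can be sketched as a simple routing rule. The task categories and model-tier names below are illustrative placeholders, not real model identifiers:

```python
# Illustrative task-to-model router. The categories and tier names are
# placeholders for whatever models your tooling actually exposes.

SLM_TASKS = {"rename", "docstring", "typo_fix", "unit_test", "format"}
LLM_TASKS = {"architecture", "cross_file_refactor", "race_condition_debug"}

def pick_model(task_kind: str) -> str:
    """Route narrow tasks to a small model; escalate only when needed."""
    if task_kind in LLM_TASKS:
        return "large-reasoning-model"
    # Default to the cheap tier (10-100x less energy per the figures above)
    # and escalate manually if the result falls short.
    return "small-efficient-model"

print(pick_model("docstring"))      # small-efficient-model
print(pick_model("architecture"))   # large-reasoning-model
```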

2. The “One-Shot” Goal

Providing a “wrong” or vague prompt often leads to a cycle of 5–10 follow-up prompts to get the desired output. This doesn’t just waste your time; it can multiply the environmental cost of that single task tenfold.

  • Be Specific: Give context (files, language, constraints) in the first prompt.
  • Think Before You Type: Treat each prompt as a function call with a high execution cost.
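One way to build the one-shot habit is to force yourself to assemble context before sending anything. The helper below is a hypothetical sketch of that discipline, not a Copilot API:

```python
# Hypothetical prompt-assembly helper: forces you to state goal, language,
# files, and constraints up front so the first attempt can succeed.

def build_prompt(goal: str, language: str,
                 files: list[str], constraints: list[str]) -> str:
    """Assemble a context-rich, one-shot prompt."""
    parts = [
        f"Goal: {goal}",
        f"Language: {language}",
        "Relevant files: " + ", ".join(files),
        "Constraints:",
        *[f"- {c}" for c in constraints],
    ]
    return "\n".join(parts)

print(build_prompt(
    "Add retry logic to the HTTP client",
    "C#",
    ["HttpClientWrapper.cs"],
    ["max 3 retries", "no breaking API changes"],
))
```

Thirty seconds spent filling in these fields routinely replaces a chain of follow-up prompts.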

How to Be a “Green” AI User

Being mindful doesn’t mean you should stop using AI—it means using it optimally.

  1. Refactor for Sustainability: Use Copilot to find inefficient algorithms in your code. Reducing the CPU cycles your code takes to run in production is a massive win for the planet.
  2. Avoid Redundant Calls: Don’t prompt for things you already know or can find in 2 seconds of documentation.
  3. Use Local Models where possible: For basic autocomplete, local, on-device models are nearly “carbon neutral” compared to cloud-based inference.

Conclusion

AI is an incredible tailwind for our work, but we must be the ones steering it toward a sustainable future. By choosing the right models and being intentional with our prompts, we ensure that the software we build today doesn’t come at the expense of tomorrow’s environment.

What’s one way you can optimize your AI workflow today? Share your tips in the comments below!


Further Reading Links

Decoding Your GitHub Copilot Quota: Multipliers, Premium Requests, and Limits

If you’ve recently noticed new numbers like 1x, 0.33x, or 3x popping up in your GitHub Copilot interface, you aren’t alone. As GitHub transitions to a “consumptive billing” model, staying productive means understanding how these credits are spent.

For most GitHub Enterprise (GHE) users, you have a baseline of 300 premium requests per month. Here is how to make them count.


1. What Exactly is a “Request”?

In Copilot, not every keystroke costs you. Requests are counted differently depending on what you are doing:

  • IDE Code Completions: Generally unlimited. Ghost text that appears as you type typically does not count toward your 300-request premium quota.
  • Chat Interactions: Every time you hit “Enter” in the Copilot Chat side panel or use the inline chat (Cmd+I / Ctrl+I), it counts as one request.
  • Premium Models: When you manually switch to high-reasoning models (like Claude 3.5 Sonnet or GPT-4o), you are using “Premium Requests.”

Important: How Threads are Counted

It is a common misconception that one “Conversation Thread” equals one request. Each individual message you send counts as a separate request. For example, if you are using a 3x multiplier model (like o1-preview) and you send 5 prompts within a single chat window to refine a piece of code, your consumption will be calculated as:

5 Prompts × 3 (Multiplier) = 15 Requests deducted from your 300-request quota.
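The arithmetic generalizes to any mix of models. A small helper mirroring the calculation above (this is just the weighted sum, not an official GitHub API):

```python
# Weighted request consumption: sum of (prompts x multiplier) per model tier.

def requests_consumed(prompt_counts: dict[float, int]) -> float:
    """prompt_counts maps a model's multiplier to the number of prompts sent."""
    return sum(multiplier * count for multiplier, count in prompt_counts.items())

# 5 prompts on a 3x model, as in the example above:
print(requests_consumed({3.0: 5}))  # 15.0

# A mixed month: 60 economy chats, 40 standard chats, 10 high-power chats:
print(round(requests_consumed({0.33: 60, 1.0: 40, 3.0: 10}), 2))  # 89.8
```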

2. The Multiplier Effect: 1x, 0.33x, and 3x

Think of your 300 requests as a flexible balance. Different AI models “cost” different amounts based on their computational power.

| Multiplier | What it means | Example Models | Total Chats/Month |
|---|---|---|---|
| 0.33x | Economy: one chat uses 1/3 of a credit. | Gemini 1.5 Flash, Claude Haiku | ~900 requests |
| 1x | Standard: one chat uses 1 credit. | GPT-4o, Claude 3.5 Sonnet | 300 requests |
| 3x | High Power: one chat uses 3 credits. | Claude 3 Opus, o1-preview | 100 requests |

Pro Tip: Use the Auto Model Selection. It often defaults to the most efficient model for the task and can even provide a small “discount” on request counting in some enterprise configurations.

Visualizing the model selection and associated multipliers

3. Understanding “% of Premium Requests Consumed”

In your Copilot dashboard or IDE status bar, you’ll see a percentage. This is your “fuel gauge” for the month.

  • What it tracks: The weighted sum of your requests (Request count × Multiplier).
  • What happens at 100%? You won’t be locked out of Copilot entirely. Usually, you will lose access to “Premium” models and fall back to the standard base model for the remainder of the billing cycle.
  • Running Low? If you hit the ceiling and your workflow requires high-reasoning models, you will need to follow our internal process to request an additional quota through the IT/DevOps portal.
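Put together, the “fuel gauge” is just weighted usage over the monthly baseline. A sketch of both readings, assuming the 300-request GHE baseline described above:

```python
MONTHLY_QUOTA = 300  # GHE baseline premium requests per month

def percent_consumed(weighted_requests: float, quota: int = MONTHLY_QUOTA) -> float:
    """The dashboard percentage: weighted usage over the monthly quota."""
    return round(100 * weighted_requests / quota, 1)

def chats_remaining(weighted_requests: float, multiplier: float,
                    quota: int = MONTHLY_QUOTA) -> int:
    """How many more chats on a given model fit before hitting 100%."""
    return max(0, int((quota - weighted_requests) / multiplier))

print(percent_consumed(90))       # 30.0 (% of quota used)
print(chats_remaining(90, 3.0))   # 70 more 3x chats left this month
```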

4. Conclusion: Work Smarter, Not Just Harder

You don’t need the most expensive model (3x) to fix a syntax error or write a boilerplate unit test. Reserve your “Premium” power for architectural decisions and complex debugging.

Next Steps:

  • Check your current usage in the Copilot icon menu in VS Code or IntelliJ.
  • Experiment with 0.33x models for routine documentation tasks.

5. Further Reading Links

GitHub Copilot Mental Model: Prompts vs. Instructions vs. Agents vs. Skills

The ecosystem around GitHub Copilot is evolving faster than most developers can track. We’ve moved past simple autocomplete into a world of Agents, Skills, and Custom Instructions, but the terminology often feels like marketing fluff. If you’re tired of guessing which configuration file does what, this post provides a clean, technical mental model to help you master Copilot’s architecture without the fog.

Below is a clean mental model of the terms you keep hearing around GitHub Copilot: agent, skill, instruction, prompt, chat mode, etc., and how they fit together without marketing fog. I’ll keep it practical and aligned with how you’d actually use this in a real repo or VS Code setup.

1️⃣ Prompt – “What you say right now”

What it is

*   A single, ad‑hoc input you type into Copilot Chat.
*   Exists only for that one interaction.

Examples

*   “Explain this C# method”
*   “Create a Bruno request with multipart/form-data”
*   “Why is this Graph API call returning 403?”

Key characteristics

*   Ephemeral
*   Manual
*   No reuse unless you copy/paste

When to use

✅ One‑off questions  
✅ Exploration, debugging, learning

> Think of a prompt as talking to Copilot like a human—powerful, but forgetful.


2️⃣ Instructions (Custom Instructions) – “Always behave like this”

What they are

*   Persistent rules automatically injected into Copilot’s context.
*   Define standards, conventions, and expectations.

Where they live

*   .github/copilot-instructions.md
*   .github/instructions/*.instructions.md
*   AGENTS.md (multi-agent compatible)

What they contain

*   Coding standards
*   Architectural constraints
*   Style rules
*   Review expectations

Example

- Use async/await for all IO  
- Follow Clean Architecture  
- Use xUnit for tests

Key characteristics

*   Always-on (or pattern-based)
*   No scripts or assets
*   Text-only rules

When to use

✅ Team-wide standards  
✅ “Never do X / Always do Y” rules

> Instructions are policy, not capability.


3️⃣ Prompt Files – “Named, reusable prompts”

What they are

*   Saved prompts you explicitly invoke.
*   Typically exposed as slash commands.

Where they live

*   .github/prompts/*.prompt.md

Example

# /create-release-notes
Generate release notes from git history using our standard format.

Key characteristics

*   Manual invocation
*   One-shot execution
*   No auto-discovery

When to use

✅ Repeatable tasks  
✅ Human-triggered workflows

> Prompt files are macros for prompts.


4️⃣ Agent (formerly Chat Mode) – “Who Copilot is”

Important clarification

✅ Chat modes and agents are the same thing

GitHub renamed chat modes → agents.

What an agent is

*   A persona + capability boundary for Copilot
*   Defines:
    *   Role (tester, reviewer, SRE, architect)
    *   Allowed tools
    *   Tone and intent

Where they live

*   .github/agents/*.agent.md

Example

name: qa-agent
description: Focuses on testing and validation
tools:
  - file_read
  - test_runner

Key characteristics

*   Selected explicitly
*   Changes how Copilot thinks and acts
*   Can execute tools (in agent mode)

When to use

✅ Distinct roles  
✅ Tool-based workflows  
✅ Autonomous behavior

> Agents define identity and authority.


5️⃣ Skill (Agent Skill) – “What Copilot knows how to do”

This is the big new concept.

What a skill is

*   A self-contained, reusable capability package
*   Loaded automatically when relevant

What it can include

*   Instructions (SKILL.md)
*   Scripts
*   Code templates
*   Reference files
*   Examples

Where skills live

*   .github/skills//
*   ~/.copilot/skills/ (personal)

Example

.github/skills/webapp-testing/
     ├─ SKILL.md
     ├─ generate_tests.py
     └─ test_templates/
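The layout above implies a `SKILL.md` manifest at the skill’s root. The exact frontmatter fields vary by tooling, so treat this as a hypothetical sketch rather than a verified schema:

```markdown
---
name: webapp-testing
description: Generates and runs integration tests for the web app.
---

# Webapp Testing Skill

When the user asks for web app tests:
1. Read the route definitions in the repository.
2. Run `generate_tests.py` to scaffold tests from `test_templates/`.
3. Report any failing tests with suggested fixes.
```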

Key characteristics

*   Auto-discovered
*   Loaded on demand (progressive disclosure)
*   Portable across:
    *   VS Code
    *   Copilot CLI
    *   Copilot coding agent

When to use

✅ Multi-step workflows  
✅ Domain-specific procedures  
✅ Tasks needing scripts/assets

> Skills are procedural knowledge, not rules.


6️⃣ How they fit together (mental model)

    You type a PROMPT
           ↓
    An AGENT interprets it
           ↓
    INSTRUCTIONS apply global rules
           ↓
    Relevant SKILLS auto-load
           ↓
    Agent executes or responds

7️⃣ Quick comparison table

| Concept | Purpose | Persistent | Assets | Auto-loaded | Best for |
|---|---|---|---|---|---|
| Prompt | One-off input | ❌ | ❌ | ❌ | Quick questions |
| Instructions | Standards & rules | ✅ | ❌ | ✅ | Team conventions |
| Prompt File | Named prompt | ✅ | ❌ | ❌ (manual) | Repeatable tasks |
| Agent | Role/persona | ✅ | ❌ | ❌ (selected) | Tool-driven workflows |
| Skill | Capability | ✅ | ✅ | ✅ | Complex procedures |

8️⃣ Practical guidance (what to use when)

If you remember only one thing:

✅ Instructions → “Always follow these rules”

✅ Prompts → “Do this now”

✅ Prompt files → “Do this again later”

✅ Skills → “Here’s how to do this properly”

✅ Agents → “Act as this role”


9️⃣ Why this matters for you

Given your work:

*   API integrations
*   Documentation standards
*   Repeatable release and configuration tasks

You’d likely want:

*   ✅ Instructions for coding & doc standards
*   ✅ Skills for:
    *   Graph API email patterns
    *   Release note generation
    *   Configuration or module setup
*   ✅ Agents for:
    *   Reviewer vs implementer roles

📚 Further Reading & Sources