If you’ve recently noticed new numbers like 1x, 0.33x, or 3x popping up in your GitHub Copilot interface, you aren’t alone. As GitHub transitions to a “consumptive billing” model, staying productive means understanding how these credits are spent.
Most GitHub Enterprise (GHE) users have a baseline of 300 premium requests per month. Here is how to make them count.
1. What Exactly is a “Request”?
In Copilot, not every keystroke costs you. Requests are counted differently depending on what you are doing:
- IDE Code Completions: Generally unlimited. Ghost text that appears as you type typically does not count toward your 300-request premium quota.
- Chat Interactions: Every time you hit “Enter” in the Copilot Chat side panel or use the inline chat (Cmd+I/Ctrl+I), it counts as one request.
- Premium Models: When you manually switch to high-reasoning models (like Claude 3.5 Sonnet or GPT-4o), you are using “Premium Requests.”
Important: How Threads are Counted
It is a common misconception that one “Conversation Thread” equals one request. Each individual message you send counts as a separate request. For example, if you are using a 3x multiplier model (like o1-preview) and you send 5 prompts within a single chat window to refine a piece of code, your consumption will be calculated as:
5 Prompts × 3 (Multiplier) = 15 Requests deducted from your 300-request quota.
2. The Multiplier Effect: 1x, 0.33x, and 3x
Think of your 300 requests as a flexible balance. Different AI models “cost” different amounts based on their computational power.
| Multiplier | What it means | Example Models | Chats per 300-request quota |
|---|---|---|---|
| 0.33x | Economy: one chat uses 1/3 of a credit. | Gemini 1.5 Flash, Claude Haiku | ~900 chats |
| 1x | Standard: one chat uses 1 credit. | GPT-4o, Claude 3.5 Sonnet | 300 chats |
| 3x | High Power: one chat uses 3 credits. | Claude 3 Opus, o1-preview | 100 chats |
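As a rough sketch, the number of chats a 300-request quota buys is just the quota divided by the multiplier (the model list here mirrors the table and is illustrative):

```python
QUOTA = 300  # monthly premium requests for a typical GHE seat

MULTIPLIERS = {
    "Gemini 1.5 Flash": 0.33,  # economy
    "GPT-4o": 1.0,             # standard
    "o1-preview": 3.0,         # high power
}

for model, mult in MULTIPLIERS.items():
    # 0.33x works out to ~909 chats, i.e. roughly the ~900 quoted above
    print(f"{model}: ~{QUOTA / mult:.0f} chats/month")
```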
Pro Tip: Use Auto Model Selection. It often defaults to the most efficient model for the task and can even provide a small “discount” on request counting in some enterprise configurations.

3. Understanding “% of Premium Requests Consumed”
In your Copilot dashboard or IDE status bar, you’ll see a percentage. This is your “fuel gauge” for the month.
- What it tracks: The weighted sum of your requests (Request count × Multiplier).
- What happens at 100%? You won’t be locked out of Copilot entirely. Usually, you will lose access to “Premium” models and fall back to the standard base model for the remainder of the billing cycle.
- Running Low? If you hit the ceiling and your workflow requires high-reasoning models, you will need to follow our internal process to request an additional quota through the IT/DevOps portal.
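Putting the pieces together, the percentage is the weighted sum of requests divided by your quota. A minimal sketch (the usage figures are made up for illustration):

```python
QUOTA = 300  # monthly premium requests

def percent_consumed(usage: list[tuple[int, float]]) -> float:
    """usage: list of (prompt_count, multiplier) pairs for the month."""
    weighted = sum(count * mult for count, mult in usage)
    return 100 * weighted / QUOTA

# e.g. 60 standard chats (1x) + 20 high-power chats (3x) + 90 economy chats (0.33x)
usage = [(60, 1.0), (20, 3.0), (90, 0.33)]
print(f"{percent_consumed(usage):.1f}% of premium requests consumed")  # 49.9%
```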
4. Conclusion: Work Smarter, Not Just Harder
You don’t need the most expensive model (3x) to fix a syntax error or write a boilerplate unit test. Reserve your “Premium” power for architectural decisions and complex debugging.
Next Steps:
- Check your current usage in the Copilot icon menu in VS Code or IntelliJ.
- Experiment with 0.33x models for routine documentation tasks.