
The GitHub Blog

Stay inspired with updates, ideas, and insights from GitHub to help developers design and build software.

June 25, 2025

Software development has always been a deeply human, collaborative process. When we introduced GitHub Copilot in 2021 as an “AI pair programmer,” it was designed to help developers stay in the flow, reduce boilerplate work, and accelerate coding.

But what if Copilot could be more than just an assistant? What if it could actively collaborate with you—working alongside you on synchronous tasks, tackling issues independently, and even reviewing your code?

That’s the future we’re building.

Our vision for what’s next 

Today, AI agents in GitHub Copilot don’t just assist developers but actively solve problems through multi-step reasoning and execution. These agents are capable of:

  • Independent problem solving: Copilot will break down complex tasks and take the necessary steps to solve them, providing updates along the way.
  • Adaptive collaboration: Whether working in sync with you or independently in the background, Copilot will iterate on its own outputs to drive progress.
  • Proactive code quality: Copilot will proactively assist with tasks like issue resolution, testing, and code reviews, ensuring higher-quality, maintainable code.

Rather than fitting neatly into synchronous or asynchronous categories, the future of Copilot lies in its ability to flexibly transition between modes—executing tasks independently while keeping you informed and in control. This evolution will allow you to focus on higher-level decision-making while Copilot takes on more of the execution.

Let’s explore what’s already here—and what’s coming next.

Copilot in action: Taking steps toward our vision 

Agent mode: A real-time AI teammate inside your IDE

If you’ve used agent mode with GitHub Copilot (and you should, because it’s fantastic), you’ve already experienced an independent AI agent at work. 

Agent mode lives where you code and feels like handing your computer to a teammate for a minute: it types on your screen while you look on, and can grab the mouse. When you prompt it, the agent takes control, works through the problem, and reports its work back to you with regular check-in points. It can:

  • Read your entire workspace to understand context.
  • Plan multi‑step fixes or refactors (and show you the plan first).
  • Apply changes, run tests, and iterate on its own work in a tight feedback loop—an “agentic loop” of planning, applying changes, testing, and refining.
  • Ask for guidance whenever intent is ambiguous.

Rather than just responding to requests, Copilot in agent mode actively works toward your goal. You define the outcome, and it determines the best approach—seeking feedback from you as needed, testing its own solutions, and refining its work in real time. 

Think of it as pair programming in fast forward: you’re watching the task unfold in real time, free to jump in or redirect at any step. ✨

Coding agent: An AI teammate that works while you don’t 

Not all coding happens in real time. Sometimes, you need to hand off tasks to a teammate and check back later.

That’s where our coding agent comes in—and it’s our first step in transforming Copilot into an independent agent. Coding agent spins up its own secure dev environment in the cloud. You can assign multiple issues to Copilot, then dive into other work (or grab a cup of coffee!) while it handles the heavy lifting. It can:

  • Clone your repo and bootstrap tooling in isolation.
  • Break the issue into steps, implement changes, and write or update tests.
  • Validate its work by running your tests and linter.
  • Open a draft PR and iterate based on your PR review comments.
  • Stream progress updates so you can peek in—or jump in—any time.

Working with coding agent is like asking a teammate in another room—with their own laptop and setup—to tackle an issue. You’re free to work on something else, but you can pop in for status or feedback whenever you like.

Less TODO, more done: The next stage of Copilot’s agentic future

The next stage of Copilot is being built on three converging pillars:

  1. Smarter, leaner models. Ongoing breakthroughs in large language models keep driving accuracy up while pushing latency and cost down. Expanded context windows now span entire monoliths, giving Copilot the long-range “memory” it needs to reason through complex codebases and return answers grounded in your real code.
  2. Deeper contextual awareness. Copilot increasingly understands the full story behind your work—issues, pull-request history, dependency graphs, even private runbooks and API specs (via MCP). By tapping this richer context, it can suggest changes that align with project intent, not just syntax.
  3. Open, composable foundation. We’re designing Copilot to slot into your stack—not the other way around. You choose the editor, models, and tools; Copilot plugs in, learns your patterns, and amplifies them. You’re in the driver’s seat, steering the AI to build, test, and ship code faster than ever.

Taken together, these pillars move Copilot beyond a single assistant toward a flexible AI teammate—one that can help any team, from three developers in a garage to thousands in a global enterprise, plan, code, test, and ship with less friction and more speed.

So, get ready for what’s next. The next wave is already on its way. 

Learn more about GitHub Copilot >

The post From pair to peer programmer: Our vision for agentic workflows in GitHub Copilot appeared first on The GitHub Blog.

June 24, 2025
Editor’s note: This piece was originally published in our LinkedIn newsletter, Branching Out_. Sign up now for more career-focused content > 

AI tools seem to be everywhere. With the tap of a key, they provide ready answers to queries, autocomplete faster than our brains can, and even suggest entire blocks of code. Research has shown that GitHub Copilot enables developers to code up to 55% faster. According to MIT, junior developers specifically may see a 27% to 39% increase in output with AI assistance, an even greater productivity gain than more experienced developers see.

But here’s the question: you may be coding faster with AI, but when was the last time you asked yourself why before adopting a suggestion from an AI coding assistant? 

Being a developer is not just about producing code. It’s about understanding why the code works, how it fits into the bigger picture, and what happens when things break down. The best developers know how to think critically about new problems and take a systems view of solving them. That kind of expertise is what keeps software resilient, scalable, and secure, especially as AI accelerates how quickly we ship. Without it, we risk building faster but breaking more.

Our CEO, Thomas Dohmke, put it bluntly at VivaTech: “Startups can launch with AI‑generated code, but they can’t scale without experienced developers.” Developer expertise is the multiplier on AI, not the bottleneck.

We’re not saying you have to reject AI to be a great developer. At GitHub, we believe AI is a superpower, one that helps you move faster and build better when used thoughtfully. Your role as a developer in the age of AI is to be the human-in-the-loop: the person who knows why code works, why it sometimes doesn’t, what the key requirements in your environment are, and how to debug, guide AI tools, and go beyond vibe coding. 

After all, AI can help you write code a lot faster, but only developer expertise turns that speed into resilient, scalable, and secure software.

TL;DR: AI pair‑programming makes you faster, but it can’t replace the judgment that keeps software safe and maintainable. This article offers three concrete ways to level up your expertise.

AI’s productivity dividend + developer expertise = greater impact

Here’s how human judgment multiplies the value of each AI benefit:

  • ⏱️ Faster commits (up to 55% quicker task completion): devs run thoughtful code reviews, write tests, and surface edge cases so speed never comes at the cost of quality.
  • 🧠 Lower cognitive load: freed-up mental bandwidth lets developers design better architectures, mentor teammates, and solve higher-order problems.
  • 🌱 Easier onboarding for juniors: senior engineers provide context, establish standards, and turn AI suggestions into teachable moments that build long-term expertise.
  • 🤖 Automated boilerplate: devs tailor scaffolding to real project needs, question assumptions, and refactor early to keep tech debt in check and systems secure.

Speed without judgment can mean:

  • Security vulnerabilities that static analysis can’t spot on its own.
  • Architecture choices that don’t scale beyond the demo.
  • Documentation drift that leaves humans and models guessing.

The remedy? Double down on the fundamentals that AI still can’t master.

Mastering the fundamentals: 3 key parts of your workflow to focus on when using AI

As the home for all developers, we’ve seen it again and again: becoming AI-savvy starts with the old-school basics. You know, the classic tools and features you used before AI became a thing (we know, it’s hard to remember such a time!). We believe that only by mastering the fundamentals can you get the most value, at scale, out of AI developer tools like GitHub Copilot. 

A junior developer who jumps into their first AI-assisted project without having a foundational understanding of the basics (like pull requests, code reviews, and documentation) may ship fast, but without context or structure, they risk introducing bugs, missing edge cases, or confusing collaborators. That’s not an AI problem. It’s a fundamentals problem.

Let’s revisit the core skills every developer should bring to the table, AI or not. With the help of a few of our experts, we’ll show you how to level them up so you can dominate in the age of AI.

1. Push for excellence in the pull request

At the heart of developer collaboration, pull requests are about clearly communicating your intent, explaining your reasoning, and making it easier for others (humans and AI alike!) to engage with your work.

A well‑scoped PR communicates why a change exists—not just what changed. That context feeds human reviewers and Copilot alike.

As GitHub developer advocate Kedasha Kerr advises, start by keeping your pull requests small and focused. A tight, purposeful pull request is easier to review, less likely to introduce bugs, and faster to merge. It also gives your reviewers, as well as AI tools like Copilot, a clean scope to work with.

Your pull request description is where clarity counts. Don’t just list what changed—explain why it changed. Include links to related issues, conversations, or tracking tickets to give your teammates the full picture. If your changes span multiple files, suggest where to start reviewing. And be explicit about what kind of feedback you’re looking for: a quick sanity check? A deep dive? Let your reviewers know.

Before you ask for a review, review it yourself. Kedasha recommends running your tests, previewing your changes, and catching anything unclear or unpolished. This not only respects your reviewers’ time, it improves the quality of your code and deepens your understanding of the work.

A thoughtful pull request is a signal of craftsmanship. It builds trust with your team, strengthens your communication skills, and gives Copilot better context to support you going forward. That’s a win for you, your team, and your future self.

Here’s a quick 5‑item PR checklist to reference as you work: 

  1. Scope ≤ 300 lines (or break it up).
  2. Title = verb + object (e.g., Refactor auth middleware to async).
  3. Description answers “why now?” and links to the issue.
  4. Highlight breaking changes with ⚠️ BREAKING in bold.
  5. Request specific feedback (e.g., Concurrency strategy OK?).

Drop this checklist into .github/pull_request_template.md and merge; a minimal example follows.
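
Here’s a minimal sketch of what that template might contain (the section names are suggestions, not a GitHub requirement):

    ## Why now?
    <!-- Explain the motivation and link the issue this closes. -->
    Closes #

    ## What changed
    -

    ## ⚠️ BREAKING
    <!-- Call out breaking changes in bold, or delete this section. -->

    ## Where to start reviewing
    <!-- Point reviewers at the best entry-point file or commit. -->

    ## Feedback requested
    <!-- e.g., Concurrency strategy OK? -->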

Learn more about creating a great pull request > 

2. Rev up your code reviews

AI can generate code in seconds, but knowing how to review that code is where real expertise develops. Every pull request is a conversation: “I believe this improves the codebase, do you agree?” As GitHub staff engineer Sarah Vessels explains, good code reviews don’t just catch bugs; they teach, transfer knowledge, and help teams move faster with fewer costly mistakes.

And let’s be honest: as developers, we often read and review far more code than we actually write (and that’s okay!). Whether code comes from a colleague or an AI tool, code reviews are a fundamental part of being a developer—and building a strong code review practice is critical, especially as the volume of code increases.

You should start by reviewing your own pull requests before assigning them to others. Leave comments where you’d have questions as a reviewer. This not only helps you spot problems early, but also provides helpful context for your teammates. Keep pull requests small and focused. The smaller the diff, the easier it is to review, debug, and even roll back if something breaks in production. In DevOps organizations, especially large ones, small, frequent commits also help reduce merge conflicts and keep deployment pipelines flowing smoothly. 

As a reviewer, focus on clarity. Ask questions, challenge assumptions, and check how code handles edge cases or unexpected data. If you see a better solution, offer a specific example rather than just saying “this could be better.” Affirm good choices too: calling out strong design decisions helps reinforce shared standards and makes the review process less draining for authors.

Code reviews give you daily reps to build technical judgment, deepen your understanding of the codebase, and earn trust with your team. In an AI-powered world, they’re also a key way to level up by helping you slow down, ask the right questions, and spot patterns AI might miss.

Here are some heuristics to keep in mind when reviewing code:

  • Read the tests first. They encode intent.
  • Trace data flow from user input to DB writes to external calls.
  • Look for hidden state in globals, singletons, and caches.
  • Ask “What happens under load?” even if performance isn’t in scope.
  • Celebrate good patterns to reinforce team standards.

Learn more about how to review code effectively >

3. Invest in documentation 

Strong pull requests and code reviews help your team build better software today. But documentation makes it easier to build better software tomorrow. In the AI era, where code can be generated in seconds, clear, thorough documentation remains one of the most valuable—and overlooked—skills a developer can master.

Good documentation helps everyone stay aligned: your team, new contributors, stakeholders, and yes, even AI coding agents (docs make great context for any AI model, after all). The clearer your docs, the more effective AI tools like Copilot can be when generating code, tests, or summaries that rely on understanding your project’s structure. As GitHub’s software engineer Brittany Ellich and technical writer Sam Browning explain, well-structured docs accelerate onboarding, increase adoption, and make collaboration smoother by reducing back and forth.

The key is to keep your documentation clear, concise, and structured. Use plain language, focus on the information people actually need, and avoid overwhelming readers with too many edge cases or unnecessary details. Organize your docs with the Diátaxis framework, which breaks documentation into four categories:

  • Tutorials for hands-on learning with step-by-step guides
  • How-to guides for task-oriented steps with bulleted or numbered lists
  • Explanations for deeper understanding
  • Reference for technical details, such as API specifications

When your docs follow a clear structure, contributors know exactly where to find what they need and where to add new information as your project evolves.

In short: great documentation forces you to sharpen your own understanding of the system you’re building. That kind of clarity compounds over time and is exactly the kind of critical thinking that makes you a stronger developer.

Learn more about how to document your project effectively >

A level‑up dev toolkit

To make things simple, here’s a skills progression matrix to keep in mind no matter what level you’re at. 

  • Pull requests: junior devs describe what changed; mid‑level devs explain why and link issues; senior devs anticipate performance and security impact and suggest where reviewers should focus.
  • Code reviews: junior devs leave 👍/👎; mid‑level devs give actionable comments; senior devs mentor and model architecture trade‑offs.
  • Documentation: junior devs update the README; mid‑level devs write task‑oriented guides; senior devs curate docs as a product, with metrics.

And here are some quick‑wins you can copy today:

  • .github/CODEOWNERS to auto‑route reviews (see the example after this list)
  • PR and issue templates for consistent context
  • GitHub Skills course: Communicating with Markdown
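
For example, a minimal CODEOWNERS file might look like this (paths and team names are hypothetical; the last matching pattern wins, so the catch-all goes first):

    # Fall back to the core maintainers for anything not matched below.
    * @your-org/core-maintainers

    # Route area-specific reviews to their owners.
    src/frontend/ @your-org/frontend-team
    docs/ @your-org/docs-team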

The bottom line

In the end, AI is changing how we write code, and curiosity, judgment, and critical thinking are needed more than ever. The best developers don’t just accept what AI suggests. They ask why. They provide context. They understand the fundamentals. They think in systems, write with intention, and build with care. 

So keep asking why. Stay curious. Continue learning. That’s what sets great developers apart—and it’s how you’ll survive and thrive in an AI-powered future.

Want to get started? Explore GitHub Copilot >

The post Why developer expertise matters more than ever in the age of AI appeared first on The GitHub Blog.

June 24, 2025

When generative AI tools guess what you need, the magic only lasts as long as the guesses are right. Add an unfamiliar codebase, a security checklist your team keeps in a wiki, or a one‑off Slack thread that explains why something matters, and even the most powerful model may fill in gaps with assumptions rather than drawing on your specific context and knowledge.

GitHub Copilot Spaces fixes that problem by letting you bundle the exact context Copilot should read—code, docs, transcripts, sample queries, you name it—into a reusable “space.” Once a space is created on github.com, Copilot chat and command interactions on the GitHub platform are grounded in that curated knowledge, producing answers that feel like they came from your organization’s resident expert. In the future, IDE integration for Spaces is planned.

In this article, we’ll walk through:

  • A 5‑minute quick‑start guide to creating your first space
  • Tips for personalizing Copilot’s tone, style, and conventions with custom instructions
  • Real‑world recipes for accessibility, data queries, and onboarding
  • Collaboration, security, and what’s next on the roadmap (spoiler: IDE integration and Issues/PR support)

Why context is the new bottleneck for AI‑assisted development

Large language models (LLMs) thrive on patterns, but day‑to‑day engineering work is full of unpatterned edge cases, including:

  • A monorepo that mixes modern React with legacy jQuery
  • Organizational wisdom buried in Slack threads or internal wikis
  • Organization‑specific security guidelines that differ from upstream OSS docs

Without that context, an AI assistant can only guess. But with Copilot Spaces, you choose which files, documents, or free‑text snippets matter, drop them into a space, and let Copilot use that context to answer questions or write code. As Kelly Henckel, PM for GitHub Spaces, said in our GitHub Checkout episode, “Spaces make it easy to organize and share context, so Copilot acts like a subject matter expert.” The result? Fewer wrong guesses, less copy-pasting, and code that’s commit-ready.

What exactly is a Copilot Space?

Think of a space as a secure, shareable container of knowledge plus behavioral instructions:

Here’s what a space holds, and why it matters:

  • Attachments: code files, entire folders, Markdown docs, transcripts, or any plain text you add. They give Copilot the ground truth for answers.
  • Custom instructions: short system prompts to set tone, coding style, or reviewer expectations. They let Copilot match your house rules.
  • Sharing & permissions: a space follows the same role/visibility model you already use on GitHub, so there are no new access control lists to manage.
  • Live updates: attached files stay in sync with the branch you referenced, so your space stays up to date with your codebase.


Spaces are available to anyone with a Copilot license (Free, Individual, Business, or Enterprise) while the feature is in public preview. Admins can enable it under Settings > Copilot > Preview features.

TL;DR: A space is like pinning your team’s collective brain to the Copilot sidebar and letting everyone query it in plain language.

Quick-start guide: How to build your first space in 5 minutes

  1. Navigate to github.com/copilot/spaces and click Create space.
  2. Name it clearly. For example, frontend‑styleguide.
  3. Add a description so teammates know when—and when not—to use it.
  4. Attach context:
  • From repos: Pull in folders like src/components or individual files such as eslint.config.js.
  • Free‑text hack: Paste a Slack thread, video transcript, onboarding checklist, or even a JSON schema into the Text tab. Copilot treats it like any other attachment.
  5. Write custom instructions. A sentence or two is enough:
  • “Respond as a senior React reviewer. Enforce our ESLint rules and Tailwind class naming conventions.”
  6. Save and test it. You’re done. Ask Copilot a question in the space chat—e.g., “Refactor this <Button> component to match our accessibility checklist”—and watch it cite files you just attached.

Personalize Copilot’s coding style (and voice, too) 

Custom instructions are the “personality layer” of a space, and they’re where spaces shine because they live alongside the attachments. This lets you do powerful things with a single sentence, including:

  • Enforce conventions
    •  “Always prefer Vue 3 script setup syntax and Composition API for examples.”
  • Adopt a team tone
    • “Answer concisely. Include a one‑line summary before code blocks.”
  • Teach Copilot project‑specific vocabulary
    •  “Call it ‘scenario ID’ (SCID), not test case ID.”

During the GitHub Checkout interview, Kelly shared how she built a personal space for a nonprofit side project: She attached only the Vue front‑end folder plus instructions on her preferred conventions, and Copilot delivered commit‑ready code snippets that matched her style guide on the first try.

Automate your workflow: three real‑world recipes

1. Accessibility compliance assistant

Space ingredients

  • Markdown docs on WCAG criteria and GitHub’s internal “Definition of Done”
  • Custom instruction: “When answering, cite the doc section and provide a code diff if changes are required.”

How it helps: Instead of pinging the accessibility lead on Slack, you can use Spaces to ask questions like “What steps are needed for MAS‑C compliance on this new modal?” Copilot summarizes the relevant checkpoints, references the doc anchor, and even suggests ARIA attributes or color‑contrast fixes. GitHub’s own accessibility SME, Katherine, pinned this space in Slack so anyone filing a review gets instant, self‑service guidance.

2. Data‑query helper for complex schemas

Space ingredients

  • YAML schema files for 40+ event tables
  • Example KQL snippets saved as .sql files
  • Instruction: “Generate KQL only, no prose explanations unless asked.”

How it helps: Product managers and support engineers who don’t know your database structures can ask, “Average PR review time last 7 days?” Copilot autocompletes a valid KQL query with correct joins (see the sketch below) and lets them iterate. The result: PMs and support can self-serve without bugging the data science team.
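
A query Copilot might produce for a prompt like that could look something like this; the table and column names here are entirely hypothetical and would come from the schema files attached to the space:

    PullRequestReviews
    | where CompletedAt > ago(7d)
    | extend ReviewTime = CompletedAt - RequestedAt
    | summarize avg(ReviewTime)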

3. Onboarding companion for new engineers

Space ingredients

  • Key architecture diagrams exported as SVG text
  • ADRs and design docs from multiple repos
  • Custom instruction: “Answer like a mentor during onboarding; link to deeper docs.”

How it helps: New hires type “How does our auth flow handle SAML?” and get a structured answer with links and diagrams, all without leaving GitHub. Because spaces stay in sync with main, updates to ADRs propagate automatically—no stale wikis.

Collaboration that feels native to GitHub

Spaces respect the same permission model you already use:

  • Personal spaces: visible only to you unless shared
  • Organization‑owned spaces: use repo or team permissions to gate access
  • Read‑only vs. edit‑capable: let SMEs maintain the canon while everyone else consumes

Sharing is as simple as sending the space URL or pinning it to a repo README. Anyone with access and a Copilot license can start chatting instantly.

What’s next for Copilot Spaces?

We’re working to bring Copilot Spaces to more of your workflows, and are currently developing:

  • Issues and PR attachments to bring inline discussions and review notes into the same context bundle.
  • IDE integration to let you query Spaces in VS Code for tasks like writing tests that match your team’s patterns.
  • Org‑wide discoverability to help you browse spaces like you browse repos today, so new engineers can search “Payments SME” and start chatting.

Your feedback will shape those priorities. Drop your ideas or pain points in the public discussion or, if you’re an enterprise customer, through your account team. 

Get started today

Head to github.com/copilot/spaces, spin up your first space, and let us know how it streamlines your workflow. Here’s how to get it fully set up on your end: 

  1. Flip the preview toggle: Settings > Copilot > Preview features > Enable Copilot Spaces.
  2. Create one small, high‑impact space—maybe your team’s code‑review checklist or a set of common data queries.
  3. Share the link in Slack or a README and watch the pings to subject‑matter experts drop.
  4. Iterate: prune unused attachments, refine instructions, or split a giant space into smaller ones.

Copilot Spaces is free during the public preview and doesn’t count against your Copilot seat entitlements when you use the base model. We can’t wait to see what you build when Copilot has the right context at its fingertips.

The post GitHub Copilot Spaces: Bring the right context to every suggestion appeared first on The GitHub Blog.

June 17, 2025

Managing issues in software development can be tedious and time-consuming. But what if your AI peer programmer could streamline this process for you? GitHub Copilot‘s latest issue management features can help developers create, organize, and even solve issues. Below, we’ll dig into these features and how they can save time, reduce friction, and maintain consistency across your projects.

1. Image to issue: Turn screenshots into instant bug reports

Writing detailed bug reports is often repetitive and frustrating, leading to inconsistent documentation. Copilot’s image to issue feature significantly reduces this friction.

Simply paste a screenshot of the bug into Copilot chat with a brief description, prompt Copilot to create an issue for you, and Copilot will analyze the image and generate a comprehensive bug report. No more struggling to describe visual glitches or UI problems—the image will speak for itself, and Copilot will handle the documentation.

For example, if you encounter a UI alignment issue or a visual glitch that’s hard to describe, just capture a screenshot, paste it into Copilot, and briefly mention the problem. A prompt as simple as “create me a bug issue because markdown tables are not rendering properly in the comments” is enough for Copilot to automatically draft a report, including steps to reproduce the bug.

To get the most out of this feature, consider annotating your screenshots clearly—highlighting or circling the problematic area—to help Copilot generate even more precise issue descriptions.

Dive into the documentation to learn more.

2. Get the details right: Templates, tags, and types

Projects quickly become disorganized when team members skip adding proper metadata. Incorrect templates, missing labels, or wrong issue types make tracking and prioritization difficult.

Copilot solves this by automatically inferring the best template based on your prompt. It also adds appropriate labels and issue types without requiring you to navigate multiple dropdown menus or memorize tagging conventions.

Need something specific? Simply ask Copilot to add particular labels or switch templates. If you change templates after drafting, Copilot will automatically reformat your content—no manual copying required.

3. Stay organized with versioning and milestones

Keeping issues updated and properly categorized is crucial for clear communication, maintaining project velocity, and ensuring visibility into progress. But with so much else to do, it’s easy to let this work fall by the wayside.

With Copilot, adding projects and milestones is as simple as typing a prompt. You can also specify exactly how you want issues organized. For example, ask Copilot to use the “Bug Report” or “Feature Request” template, add labels like priority: high, frontend, or needs-triage, or set the issue type to “Task” or “Epic.” Copilot will apply these details automatically, ensuring your issues are consistently categorized.

Additionally, Copilot tracks all changes, making them easily referenceable. You can review issue history and revert changes if needed, ensuring nothing important gets lost.

4. Batch create multiple issues at once

Sometimes you need to log several issues after a customer meeting, user testing session, or bug bash. Traditionally, this means repeating the same creation process multiple times.

Copilot supports multi-issue drafting, allowing you to create multiple issues in a single conversation. Whether logging feature requests or documenting bugs, batch creation saves significant time.

Simply prompt Copilot to create the issues, describe each one, and Copilot will draft them all. For example, you could give the following prompt to create two issues at once:

Create me issues for the following features:
- Line breaks ignored in rendered Markdown despite double-space
- Bold and italic Markdown styles not applied when combined

You will still need to review and finalize each one, but the drafting process is streamlined into a single workflow.

5. Let AI help fix your bugs with Copilot coding agent

Creating issues is only half the battle—fixing them is where the real work begins. You can now assign issues directly to Copilot. Just ask Copilot coding agent to take ownership of the issue, and your AI coding assistant will start analyzing the bug. Copilot can even suggest draft pull requests with potential fixes.

This seamless handoff reduces context-switching and accelerates resolution times, allowing your team to focus on more complex challenges.

Beyond Copilot: Issues enhancements on GitHub

While Copilot is already revolutionizing issue management, we at GitHub are always looking for ways to enhance the overall issues experience. For example, you can now:

  • Standardize issue types across repositories for consistent tracking and reporting.
  • Break down complex tasks into sub-issues for better progress management.
  • Use advanced search capabilities with logical operators to quickly find exactly what you need.
  • Manage larger projects with expanded limits supporting up to 50,000 items.

Kickstart enhanced issue management today

Ready to transform your issue management workflow with GitHub Copilot? Head to github.com/copilot and try prompts like:

  • “Create me an issue for…”
  • “Log a bug for…”
  • Or simply upload a screenshot and mention you want to file a bug.

Experience firsthand how Copilot makes issue management feel less like administrative overhead and more like a conversation with your AI pair programmer.

Learn more about creating issues with Copilot >

The post 5 tips for using GitHub Copilot with issues to boost your productivity appeared first on The GitHub Blog.

June 17, 2025

The open source Git project just released Git 2.50 with features and bug fixes from 98 contributors, 35 of them new. We last caught up with you on the latest in Git back when 2.49 was released.

💡 Before we get into the details of this latest release, we wanted to remind you that Git Merge, the conference for Git users and developers, is back this year on September 29-30 in San Francisco. Git Merge will feature talks from developers working on Git and in the Git ecosystem. Tickets are on sale now; check out the website to learn more.

With that out of the way, let’s take a look at some of the most interesting features and changes from Git 2.50.

Improvements for multiple cruft packs

When we covered Git 2.43, we talked about newly added support for multiple cruft packs. Git 2.50 improves on that with better command-line ergonomics and some important bug fixes. In case you’re new to the series, need a refresher, or aren’t familiar with cruft packs, here’s a brief overview:

Git objects may be either reachable or unreachable. The set of reachable objects is everything you can walk to starting from one of your repository’s references: traversing from commits to their parent(s), trees to their sub-tree(s), and so on. Any object that you didn’t visit by repeating that process over all of your references is unreachable.

In Git 2.37, Git introduced cruft packs, a new way to store your repository’s unreachable objects. A cruft pack looks like an ordinary packfile with the addition of an .mtimes file, which is used to keep track of when each object was most recently written in order to determine when it is safe¹ to discard it.

However, updating the cruft pack could be cumbersome, particularly in repositories with many unreachable objects, since a repository’s cruft pack must be rewritten in order to add new objects. Git 2.43 began to address this through a new command-line option: git repack --max-cruft-size. This option was designed to split unreachable objects across multiple packs, each no larger than the value specified by --max-cruft-size. But there were a couple of problems:

  • If you’re familiar with git repack’s --max-pack-size option, --max-cruft-size’s behavior is quite confusing. The former option specifies the maximum size an individual pack can be, while the latter involves how and when to move objects between multiple packs.
  • The feature was broken to begin with! Since --max-cruft-size also imposes on cruft packs the same pack-size constraints as --max-pack-size does on non-cruft packs, it is often impossible to get the behavior you want.

For example, suppose you had two 100 MiB cruft packs and ran git repack --max-cruft-size=200M. You might expect Git to merge them into a single 200 MiB pack. But since --max-cruft-size also dictates the maximum size of the output pack, Git will refuse to combine them, or worse: rewrite the same pack repeatedly.

Git 2.50 addresses both of these issues with a new option: --combine-cruft-below-size. Instead of specifying the maximum size of the output pack, it determines which existing cruft pack(s) are eligible to be combined. This is particularly helpful for repositories that have accumulated many unreachable objects spread across multiple cruft packs. With this new option, you can gradually reduce the number of cruft packs in your repository over time by combining existing ones together.

With the introduction of --combine-cruft-below-size, Git 2.50 repurposed --max-cruft-size to behave as a cruft pack-specific override for --max-pack-size. Now --max-cruft-size only determines the size of the outgoing pack, not which packs get combined into it.
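
As a sketch of how the two options now compose (sizes illustrative; check git-repack(1) on your Git version for exact semantics):

    # Merge any cruft packs smaller than 200 MiB together, while capping
    # the size of the resulting cruft pack at 1 GiB:
    $ git repack --cruft -d --combine-cruft-below-size=200M --max-cruft-size=1G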

Along the way, a bug was uncovered that prevented objects stored in multiple cruft packs from being “freshened” in certain circumstances. In other words, some unreachable objects didn’t have their modification times updated when they were rewritten, leading to them being removed from the repository earlier than they otherwise would have been. Git 2.50 squashes this bug, meaning that you can now efficiently manage multiple cruft packs and freshen their objects to your heart’s content.

[source, source]

Incremental multi-pack reachability bitmaps

Back in our coverage of Git 2.47, we talked about preliminary support for incremental multi-pack indexes. Multi-pack indexes (MIDXs) act like a single pack *.idx file for objects spread across multiple packs.

Multi-pack indexes are extremely useful to accelerate object lookup performance in large repositories by binary searching through a single index containing most of your repository’s contents, rather than repeatedly searching through each individual packfile. But multi-pack indexes aren’t just useful for accelerating object lookups. They’re also the basis for multi-pack reachability bitmaps, the MIDX-specific analogue of classic single-pack reachability bitmaps. If neither of those are familiar to you, don’t worry; here’s a brief refresher.

Single-pack reachability bitmaps store a collection of bitmaps corresponding to a selection of commits. Each bit position in a pack bitmap refers to one object in that pack. In each individual commit’s bitmap, the set bits correspond to objects that are reachable from that commit, and the unset bits represent those that are not.

Multi-pack bitmaps were introduced to take advantage of the substantial performance increase afforded to us by reachability bitmaps. Instead of having bitmaps whose bit positions correspond to the set of objects in a single pack, a multi-pack bitmap’s bit positions correspond to the set of objects in a multi-pack index, which may include objects from arbitrarily many individual packs. If you’re curious to learn more about how multi-pack bitmaps work, you can read our earlier post Scaling monorepo maintenance.

However, like cruft packs above, multi-pack indexes can be cumbersome to update as your repository grows larger, since each update requires rewriting the entire multi-pack index and its corresponding bitmap, regardless of how many objects or packs are being added. In Git 2.47, the file format for multi-pack indexes became incremental, allowing multiple multi-pack index layers to be layered on top of one another forming a chain of MIDXs. This made it much easier to add objects to your repository’s MIDX, but the incremental MIDX format at the time did not yet have support for multi-pack bitmaps.

Git 2.50 brings support for the multi-pack reachability format to incremental MIDX chains, with each MIDX layer having its own *.bitmap file. These bitmap layers can be used in conjunction with one another to provide reachability information about selected commits at any layer of the MIDX chain. In effect, this allows extremely large repositories to quickly and efficiently add new reachability bitmaps as new commits are pushed to the repository, regardless of how large the repository is.

This feature is still considered highly experimental, and support for repacking objects into incremental multi-pack indexes and bitmaps is still fairly bare-bones. This is an active area of development, so we’ll make sure to cover any notable developments to incremental multi-pack reachability bitmaps in this series in the future.

[source]

The ORT merge engine replaces recursive

This release also saw some exciting updates related to merging. Way back when Git 2.33 was released, we talked about a new merge engine called “ORT” (standing for “Ostensibly Recursive’s Twin”).

ORT is a from-scratch rewrite of Git’s old merging engine, called “recursive.” ORT is significantly faster, more maintainable, and has many new features that were difficult to implement on top of its predecessor.

One of those features is the ability for Git to determine whether or not two things are mergeable without actually persisting any new objects necessary to construct the merge in the repository. Previously, the only way to tell whether two things were mergeable was to run git merge-tree --write-tree on them. That works, but merge-tree writes any new objects generated by the merge into the repository. Over time, these can accumulate and cause performance issues. In Git 2.50, you can make the same determination without writing any new objects by using merge-tree’s new --quiet mode and relying on its exit code.
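
For example, a minimal mergeability check that leaves no new objects behind might look like this (branch names hypothetical):

    $ git merge-tree --write-tree --quiet topic main
    $ echo $?    # 0: merges cleanly; 1: conflicts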

The most exciting change in this release is that ORT has entirely superseded recursive, and recursive is no longer part of Git’s source code. When ORT was first introduced, it was only accessible through git merge’s -s option to select a strategy. In Git 2.34, ORT became the default choice over recursive, though the latter was still available in case there were bugs or behavior differences between the two. Now, 16 versions and two and a half years later, recursive has been completely removed from Git, with its author, Elijah Newren, writing:

As a wise man once told me, “Deleted code is debugged code!”

As of Git 2.50, recursive has been completely deleted (and therefore debugged!). For more about ORT’s internals and its development, check out this five-part series from Elijah here, here, here, here, and here.

[source, source, source]


  • If you’ve ever scripted around your repository’s objects, you are likely familiar with git cat-file, Git’s purpose-built tool to list objects and print their contents. git cat-file has many modes, like --batch (for printing out the contents of objects), or --batch-check (for printing out certain information about objects without printing their contents).

    Oftentimes it is useful to dump the set of all objects of a certain type in your repository. For commits, git rev-list can easily enumerate a set of commits. But what about, say, trees? In the past, to filter down to just the tree objects from a list of objects, you might have written something like:

    $ git cat-file --batch-check='%(objecttype) %(objectname)' \
        --buffer <in | perl -ne 'print "$1\n" if /^tree ([0-9a-f]+)/'

    Git 2.50 brings Git’s object filtering mechanism used in partial clones to git cat-file, so the above can be rewritten a little more concisely as:

    $ git cat-file --batch-check='%(objectname)' --filter='object:type=tree' <in

    [source]

  • While we’re on the topic, let’s discuss a little-known git cat-file command-line option: --allow-unknown-type. This arcane option was used with objects that have a type other than blob, tree, commit, or tag. This is a quirk dating back a little more than a decade, which allowed git hash-object to write objects with arbitrary types. In the time since, this feature has gotten very little use. In fact, git cat-file -p --allow-unknown-type can’t even print out the contents of one of these objects!

    $ oid="$(git hash-object -w -t notatype --literally /dev/null)"
    $ git cat-file -p $oid
    fatal: invalid object type
    

    This release makes the --allow-unknown-type option silently do nothing, and removes support from git hash-object to write objects with unknown types in the first place.

    [source]

  • The git maintenance command learned a number of new tricks this release as well. It can now perform a few new kinds of tasks, like worktree-prune, rerere-gc, and reflog-expire. worktree-prune mirrors git gc’s functionality to remove stale or broken Git worktrees. rerere-gc also mirrors existing functionality exposed via git gc to expire old rerere entries from previously recorded merge conflict resolutions. Finally, reflog-expire can be used to expire stale entries from the reflog.

    git maintenance also ships with new configuration for the existing loose-objects task. This task removes lingering loose objects that have since been packed away, and then makes new pack(s) for any loose objects that remain. The number of objects in those packs was previously capped at 50,000, and can now be configured via the maintenance.loose-objects.batchSize option.

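    A sketch of invoking the new tasks and the new knob by hand (the batch size is illustrative):

    $ git maintenance run --task=worktree-prune
    $ git maintenance run --task=rerere-gc
    $ git maintenance run --task=reflog-expire

    # Pack loose objects in batches of 10,000 instead of the previous fixed 50,000:
    $ git config maintenance.loose-objects.batchSize 10000
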
    [source, source, source]

  • If you’ve ever needed to recover some work you lost, you may be familiar with Git’s reflog feature, which allows you to track changes to a reference over time. For example, you can go back and revisit earlier versions of your repository’s main branch by doing git show main@{2} (to show main prior to the two most recent updates) or main@{1.week.ago} (to show where your copy of the branch was a week ago).

    Reflog entries can accumulate over time, and you can reach for git reflog expire in the event you need to clean them up. But how do you delete the entirety of a branch’s reflog? If you’re not yet running Git 2.50 and thought “surely it’s git reflog delete”, you’d be wrong! Prior to Git 2.50, the only way to delete a branch’s entire reflog was to do git reflog expire $BRANCH --expire=all.

    In Git 2.50, a new drop sub-command was introduced, so you can accomplish the same as above with the much more natural git reflog drop $BRANCH.

    [source]

  • Speaking of references, Git 2.50 also received some attention to how references are processed and used throughout its codebase. When using the low-level git update-ref command, Git used to spend time checking whether or not the proposed refname could also be a valid object ID, which would make lookups ambiguous. Since update-ref is such a low-level command, this check is no longer done, delivering some performance benefits to higher-level commands that rely on update-ref for their functionality.

    Git 2.50 also learned how to cache whether or not any prefix of a proposed reference name already exists (for example, you can’t create a reference refs/heads/foo/bar/baz if either refs/heads/foo/bar or refs/heads/foo already exists).

    Finally, in order to make those checks, Git used to create a new reference iterator for each individual prefix. Git 2.50’s reference backends learned how to “seek” existing iterators, saving time by being able to reuse the same iterator when checking each possible prefix.

    [source]

  • If you’ve ever had to tinker with Git’s low-level curl configuration, you may be familiar with Git’s configuration options for tuning HTTP connections, like http.lowSpeedLimit and http.lowSpeedTime, which are used to terminate an HTTP connection that is transferring data too slowly.

    These options can be useful when fine-tuning Git to work in complex networking environments. But what if you want to tweak Git’s TCP Keepalive behavior? This can be useful to control when and how often to send keepalive probes, as well as how many to send, before terminating a connection that hasn’t sent data recently.

    Prior to Git 2.50, this wasn’t possible, but this version introduces three new configuration options: http.keepAliveIdle, http.keepAliveInterval, and http.keepAliveCount, which can be used to control the fine-grained behavior of curl’s TCP probing (provided your operating system supports it).

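    For example, to send a first keepalive probe after 60 idle seconds, then one every 30 seconds up to 5 times (values illustrative):

    $ git config http.keepAliveIdle 60
    $ git config http.keepAliveInterval 30
    $ git config http.keepAliveCount 5
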
    [source]

  • Git is famously portable and runs on a wide variety of operating systems and environments with very few dependencies. Over the years, various parts of Git have been written in Perl, including some commands like the original implementation of git add -i. These days, very few remaining Git commands are written in Perl.

    This version reduces Git’s usage of Perl by removing it as a dependency of the test suite and documentation toolchain. Many Perl one-liners from Git’s test suite were rewritten to use other Shell functions or builtins, and some were rewritten as tiny C programs. For the handful of remaining hard dependencies on Perl, those tests will be skipped on systems that don’t have a working Perl.

    [source, source]

  • This release also shipped a minor cosmetic update to git rebase -i. When starting a rebase, your $EDITOR might appear with contents that look something like:

    pick c108101daa foo
    pick d2a0730acf bar
    pick e5291f9231 baz
    

    You can edit that list to break, reword, or exec (among many others), and Git will happily execute your rebase. But if you change a commit message in your rebase’s TODO script, it won’t actually change!

    That’s because the commit messages shown in the TODO script are just meant to help you identify which commits you’re rebasing. (If you want to rewrite any commit messages along the way, you can use the reword command instead). To clarify that these messages are cosmetic, Git will now prefix them with a # comment character like so:

    pick c108101daa # foo
    pick d2a0730acf # bar
    pick e5291f9231 # baz
    

    [source]

  • Long time readers of this series will recall our coverage of Git’s bundle feature (when Git added support for partial bundles), though we haven’t covered Git’s bundle-uri feature. Git bundles are a way to package your repository’s contents (both its objects and the references that point at them) into a single *.bundle file.

    While Git has had support for bundles since as early as v1.5.1 (nearly 18 years ago!), its bundle-uri feature is much newer. In short, the bundle-uri feature allows a server to serve part of a clone by first directing the client to download a *.bundle file. After the client does so, it will try to perform a fill-in fetch to gather any missing data advertised by the server but not part of the bundle.

    To speed up this fill-in fetch, your Git client will advertise any references that it picked up from the *.bundle itself. But in previous versions of Git, this could sometimes result in slower clones overall! That’s because up until Git 2.50, Git would only advertise the branches in refs/heads/* when asking the server to send the remaining set of objects.

    Git 2.50 now advertises all references it knows about from the *.bundle when doing the fill-in fetch with the server, making bundle-uri-enabled clones much faster.

    For more details about these changes, you can check out this blog post from Scott Chacon.

    [source]

  • Last but not least, git add -p (and git add -i) now work much more smoothly in sparse checkouts by no longer having to expand the sparse index. This follows in a long line of work that has been gradually adding sparse-index compatibility to Git commands that interact with the index.

    Now you can interactively stage parts of your changes before committing in a sparse checkout without having to wait for Git to populate the sparsified parts of your repository’s index. Give it a whirl on your local sparse checkout today!

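    For instance, in a cone-mode sparse checkout (path hypothetical):

    $ git sparse-checkout set src/frontend
    $ git add -p    # interactively stage hunks without expanding the sparse index
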
    [source]


The rest of the iceberg

That’s just a sample of changes from the latest release. For more, check out the release notes for 2.50, or any previous version in the Git repository.

🎉 Git turned 20 this year! Celebrate by watching our interview of Linus Torvalds, where we discuss how it forever changed software development.

¹ It’s never truly safe to remove an unreachable object from a Git repository that is accepting incoming writes, because marking an object as unreachable can race with incoming reference updates, pushes, etc. At GitHub, we use Git’s --expire-to feature (which we wrote about in our coverage of Git 2.39) in something we call “limbo repositories” to quickly recover objects that shouldn’t have been deleted, before deleting them for good.  ↩️

The post Highlights from Git 2.50 appeared first on The GitHub Blog.

June 12, 2025

One of the beautiful things about software is that it’s always evolving. However, each piece carries the weight of past decisions made when it was created. Over time, quick fixes, “temporary” workarounds, and deadline compromises compound into tech debt. Like financial debt, the longer you wait to address it, the more expensive it becomes.

It’s challenging to prioritize tech debt fixes when deadlines loom and feature requests keep streaming in. Tech debt work feels like a luxury when you’re constantly in reactive mode. Fixing what’s broken today takes precedence over preventing something from possibly breaking tomorrow. Occasionally that accumulated tech debt even results in full system rewrites, which are time-consuming and costly, just to achieve parity with existing systems.

Common approaches to managing tech debt, like gardening weeks (dedicated sprints for tech debt) and extended feature timelines, don’t work well. Gardening weeks treat tech debt as an exception rather than ongoing maintenance, often leaving larger problems unaddressed while teams postpone smaller fixes. Extended timelines create unrealistic estimates that can break trust between engineering and product teams.

The fundamental problem is treating tech debt as something that interrupts normal development flow. What if instead you could chip away at tech debt continuously, in parallel with regular work, without disrupting sprint commitments or feature delivery timelines?

Using AI agents to routinely tackle tech debt

Managing tech debt is a big opportunity for AI agents like the coding agent in GitHub Copilot.

With these agents, tech debt items no longer need to go into the backlog to die. While you’re focusing on the new features and architectural changes that you need to bring to your evolving codebase, you can assign GitHub Copilot to complete tech debt tasks at the same time. 

Here are some examples of what the coding agent can do:

  • Improve code test coverage: Have limited code testing coverage but know you’ll never get the buy-in to spend time writing more tests? Assign issues to GitHub Copilot to increase test coverage. The agent will take care of it and ping you when the tests are ready to review.
  • Swap out dependencies: Need to swap out a mocking library for a different one, but know it will be a long process? Assign the issue to swap out the library to GitHub Copilot. It can work through that swap while you’re focusing your attention elsewhere.
  • Standardize patterns across codebases: Are there multiple ways to return and log errors in your codebase, making it hard to investigate issues when they occur and leading to confusion during development? Assign an issue to GitHub Copilot to standardize a single way of returning and logging errors.
  • Optimize frontend loading patterns: Is there an area where you are making more API calls than your application really needs? Ask GitHub Copilot to change the application to only make those API calls when the data is requested, instead of on every page load.
  • Identify and eliminate dead code: Is there anywhere in your project where you may have unused functions, outdated endpoints, or stale config hanging out? Ask GitHub Copilot to look for these and suggest ways to safely remove them.

If those examples sound very specific, it’s because they are. These are all real changes that my team has tackled using GitHub Copilot coding agent—and these changes probably wouldn’t have occurred without it. The ability for us to tackle tech debt continuously while delivering features has grown exponentially, and working AI agents into our workflow has proven to be incredibly valuable. We’ve been able to reduce the time it takes to remove tech debt from weeks of intermittent, split focus to a few minutes of writing an issue and a few hours reviewing and iterating on a pull request.

This isn’t about replacing human engineers; it’s about amplifying what we do best. While agents handle the repetitive, time-consuming work of refactoring legacy code, updating dependencies, and standardizing patterns across codebases, we can focus on architecture decisions, feature innovation, and solving complex business problems. The result is software that stays healthier over time, teams that ship faster, and engineers who spend their time on work that actually energizes them.

When AI is your copilot, you still have to do the work

The more I learn about AI, the more I realize just how critical humans are in the entire process. AI agents excel at well-defined, repetitive tasks, the kind of tech debt work that’s important but tedious. But when it comes to larger architectural decisions or complex business logic changes, human judgment is still irreplaceable.

Since we are engineers, we know the careful planning and tradeoff considerations that come with our craft. One wrong semicolon, and the whole thing can come crashing down. This is why every prompt requires careful consideration and each change to your codebase requires thorough review.

Think of it as working with a brilliant partner that can write clean code all day but needs guidance on what actually matters for your application. The AI agent brings speed and consistency; it never gets tired, never cuts corners because it’s Friday afternoon, and can maintain focus across hundreds of changes. But you bring the strategic thinking: knowing which tech debt to tackle first, understanding the business impact of different approaches, and recognizing when a “quick fix” might create bigger problems down the line.

The magic happens in the interaction between human judgment and AI execution. You define the problem, set the constraints, and validate the solution. The agent handles the tedious implementation details that would otherwise consume hours of your time. This partnership lets you operate at a higher level while still maintaining quality and control.

Tips to make the most of the coding agent in GitHub Copilot

Here’s what I’ve learned from using the coding agent in GitHub Copilot for the past few months:

  1. Write Copilot Instructions for your repository. This results in a much better experience. You can even ask your agent to write the instructions for you to get started, which is how I did it! Include things like the scripts that you need to run during development to format and lint (looking at you, go fmt). A sketch of what such a file might contain follows this list.
  2. Work in digestible chunks. This isn’t necessarily because the agent needs to work in small chunks. I learned the hard way that it will make some pretty ambitious, sweeping changes if you don’t explicitly state which areas of your codebase you want changed. But reviewing a pull request that touches 100+ files is not my idea of a good time, so working in digestible chunks generally makes for a better experience for me as the reviewer. In practice, instead of writing one issue that says “Improve test coverage for this application,” I create multiple issues for GitHub Copilot, each scoped to “improve test coverage for file X” or “improve test coverage for folder Y,” to better contain the changes I need to review.
  3. Master the art of effective prompting. The quality of what you get from AI agents depends heavily on how well you communicate your requirements. Be specific about the context, constraints, and coding standards you want the agent to follow.
  4. Always review the code thoroughly. While AI agents can handle repetitive tasks well, they don’t understand business logic like you do. Making code review a central part of your workflow ensures quality while still benefiting from the automation. This is one of the reasons why I love the GitHub Copilot coding agent. It uses the same code review tools that I use every day to review code from my colleagues, making it incredibly easy to fit into my workflow.
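
On the first tip: repository instructions live in .github/copilot-instructions.md. As a minimal sketch, assuming a Go service like ours (the contents below are illustrative, not my team’s actual file):

```markdown
# Copilot instructions

- This service is written in Go. Run `go fmt ./...` and `go vet ./...`
  before committing.
- Run tests with `go test ./...`; new code should include unit tests.
- Return and log errors through the shared logging package rather than
  panicking.
```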

The future belongs to software engineers who embrace AI tools

We’re at a pivotal moment in software engineering. For too long, tech debt has been the silent productivity killer. It’s the thing we all know needs attention but rarely gets prioritized until it becomes a crisis. AI coding agents are giving us the opportunity to change that equation entirely.

The engineers who learn to effectively collaborate with AI agents—the ones who master the art of clear prompting, thoughtful code review, and strategic task delegation—will have a massive advantage. They’ll be able to maintain codebases that their peers struggle with, tackle tech debt that others avoid, and potentially eliminate the need for those expensive, time-consuming rewrites that have plagued our industry for decades.

But this transformation requires intentional effort. You need to experiment with these tools, learn their strengths and limitations, and integrate them into your workflow. The technology is ready; the question is whether you’ll take advantage of it.

If you haven’t started exploring how AI agents can help with your tech debt, now is the perfect time to begin. Your future self, who is more productive, less frustrated, and focused on the creative aspects of engineering, will thank you. More importantly, so will your users, who’ll benefit from a more stable, well-maintained application that continues to evolve instead of eventually requiring significant downtime for a complete rebuild.

Assign your tech debt to GitHub Copilot coding agent in your repositories today!

The post How the GitHub billing team uses the coding agent in GitHub Copilot to continuously burn down technical debt appeared first on The GitHub Blog.

June 11, 2025  23:24:42

In May, we experienced three incidents that resulted in degraded performance across GitHub services.

May 1 22:09 UTC (lasting 1 hour and 4 minutes)

On May 1, 2025, from 22:09 UTC to 23:13 UTC, the Issues service was degraded and users weren’t able to upload attachments. The root cause was a new feature that added a custom header to all client-side HTTP requests, causing CORS errors when uploading attachments to our provider. We estimate that ~130k users were impacted by the incident for ~45 minutes.
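
For context on the failure mode: adding a non-safelisted custom header turns an otherwise “simple” cross-origin request into one that requires a CORS preflight, which fails unless the receiving server explicitly allows that header. A minimal sketch of the difference, with a hypothetical header name and upload URL:

```typescript
const file = new Blob(["attachment bytes"]); // stand-in for a real upload

// With only safelisted headers, this POST is a "simple" CORS request
// and needs no preflight.
await fetch("https://uploads.example.com/attachments", {
  method: "POST",
  body: file,
});

// A custom header forces the browser to send an OPTIONS preflight first.
// If the provider's Access-Control-Allow-Headers response doesn't include
// "x-request-context" (hypothetical name), the preflight fails and the
// upload never happens.
await fetch("https://uploads.example.com/attachments", {
  method: "POST",
  headers: { "x-request-context": "issues" },
  body: file,
});
```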

We mitigated the incident by rolling back the feature flag that added the new header at 22:56 UTC. To prevent this from happening again, we are adding new metrics to monitor and ensure the safe rollout of changes to client-side requests. We have since deployed an updated version of the feature, based on learnings from this incident, and it is performing well in production.

May 28 09:45 UTC (lasting 5 hours)

On May 28, 2025, from approximately 09:45 UTC to 14:45 UTC, GitHub Actions experienced delayed job starts for workflows in public repos using Ubuntu-24 standard hosted runners. This was caused by a misconfiguration in backend caching behavior after a failover, which led to duplicate job assignments reducing overall capacity in the impacted hosted runner pools. Approximately 19.7% of Ubuntu-24 hosted runner jobs on public repos were delayed. Other hosted runners, self-hosted runners, and private repo workflows were unaffected.

By 12:45 UTC, the configuration issue was fixed through updates to the backend cache. The pools were also scaled up to more quickly work through the backlog of queued jobs until queuing impact was fully mitigated at 14:45 UTC. We are improving failover resiliency and validation to reduce the likelihood of similar issues in the future.

May 30 08:10 UTC (lasting 7 hours and 50 minutes)

On May 30, 2025, between 08:10 UTC and 16:00 UTC, the Microsoft Teams GitHub integration service experienced a complete service outage.

During this period, the integration was unable to process user requests or deliver notifications, resulting in a 100% error rate across all functionality, with the exception of link previews. This outage was caused by an authentication issue with our downstream authentication provider.

While the appropriate monitoring was in place, the alerting thresholds were not sufficiently sensitive to trigger a timely response, resulting in a delay in incident detection and engagement. Once engaged, our team worked closely with the downstream provider to diagnose and resolve the authentication failure. However, longer-than-expected response times from the provider contributed to the extended duration of the outage.

We mitigated the incident by working with our provider to restore service functionality and are working to migrate to more durable authentication methods to reduce the risk of similar issues in the future.


Please follow our status page for real-time updates on status changes and post-incident recaps. To learn more about what we’re working on, check out the GitHub Engineering Blog.

The post GitHub Availability Report: May 2025 appeared first on The GitHub Blog.

June 10, 2025  16:00:00

In my spare time I enjoy building Gundam models, which are model kits for building iconic mechas from the Gundam universe. You might be wondering what this has to do with software engineering. Product engineers can be seen as the engineers who take these kits and build the Gundam itself. They use all the pieces to build a working product that is fun to collect or even play with!

Platform engineers, on the other hand, supply the tools needed to build these kits (like clippers and files) and maybe even build a cool display so everyone can see the final product. They ensure that whoever is constructing it has all the necessary tools, even if they don’t physically build the Gundam themselves.

A photograph of several Gundam models on a shelf.

About a year ago, my team at GitHub moved to the infrastructure organization, inheriting new roles and Areas of Responsibility (AoRs). Previously, the team had tackled external customer problems, such as building the new deployment views across environments. This involved interacting with users who depend on GitHub to address challenges within their respective industries. Our new customers as a platform engineering team are internal, which makes our responsibilities different from the product-focused engineering work we were doing before.

Going back to my Gundam example, rather than constructing kits, we’re now responsible for building the components of the kits. Adapting to this change meant I had to rethink my approach to code testing and problem solving.

Whether you’re working on product engineering or on the platform side, here are a few best practices to tackle platform problems.

Understanding your domain

One of the most critical steps before tackling problems is understanding the domain. A “domain” is the business and technical subject area in which a team and platform organization operate. This requires gaining an understanding of technical terms and how these systems interact to provide fast and reliable solutions. Here’s how to get up to speed: 

  • Talk to your neighbors: Arrange a handover meeting with a team that has more knowledge and experience with the subject matter. This meeting provides an opportunity to ask questions about terminology and gain a deeper understanding of the problems the team will be addressing. 
  • Investigate old issues: If there is a backlog of issues that are either stale or still persistent, they may give you a better understanding of the system’s current limitations and potential areas for improvement.
  • Read the docs: Documentation is a goldmine of knowledge that can help you understand how the system works. 

Bridging concepts to platform-specific skills

While the preceding advice offers general guidance applicable to both product and platform teams, platform teams, as the foundational layer, need a more in-depth understanding of the following areas:

  • Networks: Understanding network fundamentals is crucial for all engineers, even those not directly involved in network operations. This includes concepts like TCP, UDP, and L4 load balancing, as well as debugging tools such as dig. A solid grasp of these areas is essential to comprehend how network traffic impacts your platform.
  • Operating systems and hardware: Selecting appropriate virtual machines (VMs) or physical hardware is vital for both scalability and cost management. Making well-informed choices for particular applications requires a strong grasp of both. This is closely linked to choosing the right operating system for your machines, which is important to avoid systems with vulnerabilities or those nearing end of life.
  • Infrastructure as Code (IaC): Automation tools like Terraform, Ansible, and Consul are becoming increasingly essential. Proficiency in these tools is becoming a necessity as they significantly decrease human error during infrastructure provisioning and modifications. 
  • Distributed systems: Dealing with platform issues, particularly in distributed systems, requires accepting that failures are inevitable. Proactive measures like failover and recovery mechanisms are therefore crucial for preserving system reliability and preventing poor user experiences. The optimal approach depends entirely on the specific problem and the desired system behavior.

Knowledge sharing

By sharing lessons and ideas, engineers can introduce new perspectives that lead to breakthroughs and innovations. Taking the time to understand why a project or solution did or didn’t work, and sharing those findings, gives the whole team insight it can use going forward.

Here are three reasons why knowledge sharing is so important: 

  • Teamwork makes the dream work: Collaboration often results in quicker problem resolution and fosters new solution innovation, as engineers have the opportunity to learn from each other and expand upon existing ideas.
  • Prevent lost knowledge: If we don’t share our lessons learned, that information never spreads across the team or organization. This becomes a problem when an engineer leaves the company or is simply unavailable.
  • Improve our customer success: As engineers, our solutions should effectively serve our customers. By sharing our knowledge and lessons learned, we can help the team build reliable, scalable, and secure platforms, which will enable us to create better products that meet customer needs and expectations!

But big differences start to appear between product engineering and infrastructure engineering when it comes to the impact radius and the testing process.

Impact radius

With platforms being the fundamental building blocks of a system, any change (small or large) can affect a wide range of products. Our team is responsible for DNS, a foundational service that impacts numerous products. Even a minor alteration to this service can have extensive repercussions, potentially disrupting access to content across our site and affecting products ranging from GitHub Pages to GitHub Copilot. 

  • Understand the radius: In other words, know your downstream dependencies. Direct communication with teams that depend on our service provides valuable insight into how proposed changes may affect other services.
  • Postmortems: By looking at past incidents related to our platform and asking “What is the impact of this incident?”, we can form more context around what change or failure was introduced, how our platform played a role in it, and how it was fixed.
  • Monitoring and telemetry: Condense important monitoring and logging into a small and quickly digestible medium to give you the general health of the system. This could be a Single Availability Metric (SAM), for example. The ability to quickly glance at a single dashboard allows engineers to rapidly pinpoint the source of an issue and streamlines the debugging and incident mitigation process, as compared to searching through and interpreting detailed monitors or log messages.

Testing changes

Testing changes in a distributed environment can be challenging, especially for services like DNS. A crucial first step is using a test site as a “real” machine where you can implement and assess all of your changes.

  • Infrastructure as Code (IaC): When using tools like Terraform or Ansible, it’s crucial to test fundamental operations like provisioning and deprovisioning machines. There are circumstances where a machine will need to be re-provisioned. In these cases, we want to ensure the machine is not accidentally deleted and that we retain the ability to create a new one if needed.
  • End-to-End (E2E): Begin directing some network traffic to these test servers. The team can then observe host behavior by interacting with them directly, or evaluate functionality by diverting a small portion of traffic.
  • Self-healing: We want to test the platform’s ability to recover from unexpected loads and identify bottlenecks before they impact our users. Early identification of bottlenecks or bugs is crucial for maintaining the health of our platform.

Ideally, changes will be rolled out on a host-by-host basis once testing is complete. This approach allows individual machines to be rolled back and prevents changes from being applied to unaffected hosts.

What to remember

Platform engineering can be difficult. The systems GitHub operates with are complex and there are a lot of services and moving parts. However, there’s nothing like seeing everything come together. All the hard work our engineering teams do behind the scenes really pays off when the platform is running smoothly and teams are able to ship faster and more reliably — which allows GitHub to be the home to all developers.

Want to dive deeper? Check out our infrastructure related blog posts.

The post How GitHub engineers tackle platform problems appeared first on The GitHub Blog.

June 9, 2025  13:00:00

Welcome to the next episode in our GitHub for Beginners series, where we’re diving into the world of GitHub Copilot. This is our eighth and final episode, and it’s been quite a journey. We’ve covered a lot of different topics showcasing the power of GitHub Copilot, and you can check out all our previous episodes on our blog or as videos.

Today we’re covering that important step of code review—getting a second pair of eyes on your code. This can help catch bugs, improve code quality, and ensure consistency. We’ll also talk about refactoring code—restructuring existing code without changing its functionality. This can make things more efficient or more readable for those who need to understand it later (even if that’s yourself).

In any development project, maintaining a clean and efficient codebase is crucial to make future work easier. But in reality, things can quickly become messy as you’re focused on making it work. That’s where Copilot can come in handy. It doesn’t just assist you in writing code, it also makes the review and refactoring process smoother and more efficient.

Refactoring code

Suppose you have a function that is long and difficult to understand. Refactoring can make it easier to follow and keep each piece of it from becoming unwieldy.

To use GitHub Copilot to help you with this refactoring task, open up Copilot Chat and do the following:

  1. Highlight the function you want to refactor in your code editor.
  2. In Copilot Chat, send the prompt: please provide refactoring suggestions.
  3. Review the changes that Copilot suggests. It might break the code up into smaller pieces or optimize the logic for better performance. It might even update variable names to align with your naming conventions. (A hypothetical example of this kind of split follows this list.)
  4. Once you’re comfortable with the suggested changes, click the Apply in editor button to apply the changes and have Copilot automatically update the file.
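
To give a feel for the kind of suggestion you might see in step 3, here’s a hypothetical before-and-after, not taken from the video: one long function split into smaller, single-purpose helpers.

```typescript
type Item = { price: number; qty: number };

// Before: one function validates and totals in a single pass.
function processOrder(items: Item[]): number {
  for (const item of items) {
    if (item.price < 0 || item.qty < 1) throw new Error("invalid item");
  }
  let total = 0;
  for (const item of items) {
    total += item.price * item.qty;
  }
  return total;
}

// After: the kind of split Copilot might suggest, with one job per function.
function validateItems(items: Item[]): void {
  for (const item of items) {
    if (item.price < 0 || item.qty < 1) throw new Error("invalid item");
  }
}

function orderTotal(items: Item[]): number {
  return items.reduce((sum, item) => sum + item.price * item.qty, 0);
}

function processOrderRefactored(items: Item[]): number {
  validateItems(items);
  return orderTotal(items);
}
```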

Highlighting a selection like this works well when you want to focus Copilot’s attention on a specific area of your code, but there’s no reason to stop there. You can also have it look across entire files or your whole project. For example, take a look at this dashboard component. Let’s say you want to improve it.

To do so, open up the component in your editor and send Copilot Chat the following prompt:

How can I improve this code?

Copilot will then give several suggestions on ways the code can be improved. You can review these suggestions and even ask Copilot to explain each step in greater detail. When you’re finished, click the Apply in editor button to have Copilot make the necessary changes.

To see this in action, check out the video version of this episode. Just remember that since Copilot is a generative AI tool, the suggestions you see might not match those in the video exactly.

You can take this a step further by asking specific and direct questions. For example, you might want to make the data fetching logic reusable across components by creating a custom hook and centralizing the logic. To do this, create a new chat conversation and ask it the following:

How can I extract the data fetching logic into a custom hook?

Copilot generates refactored code that extracts the logic out of the Dashboard component into a new hook directory, so you can use it in multiple components across the app. This makes the code much more reusable! (A sketch of what such a hook might look like appears after the steps below.) To follow through on this:

  1. Save the changes in a new file by selecting Insert into New File.
  2. Import the hook into the dashboard file.
  3. Remove the old code.
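
The exact code Copilot generates will vary, but a custom data-fetching hook along these lines is representative (the hook name, state shape, and endpoint are illustrative, not Copilot’s literal output):

```typescript
import { useEffect, useState } from "react";

// A minimal reusable data-fetching hook, illustrative of what Copilot
// might extract from the Dashboard component.
export function useFetch<T>(url: string) {
  const [data, setData] = useState<T | null>(null);
  const [error, setError] = useState<Error | null>(null);
  const [loading, setLoading] = useState(true);

  useEffect(() => {
    let cancelled = false; // guards against state updates after unmount
    setLoading(true);
    fetch(url)
      .then((res) => {
        if (!res.ok) throw new Error(`Request failed: ${res.status}`);
        return res.json() as Promise<T>;
      })
      .then((json) => {
        if (!cancelled) setData(json);
      })
      .catch((err) => {
        if (!cancelled) setError(err as Error);
      })
      .finally(() => {
        if (!cancelled) setLoading(false);
      });
    return () => {
      cancelled = true;
    };
  }, [url]);

  return { data, error, loading };
}
```

Any component, the dashboard included, can then call something like useFetch<Stats>("/api/stats") (a hypothetical endpoint and type) instead of carrying its own fetch logic.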

Now what if you wanted Copilot to take a look and make sure you didn’t have a bunch of redundant code in your file? Just ask it.

Is there any redundant code in this file?

Copilot scans your code and identifies any redundancies that can be corrected. After reviewing the suggestions, go ahead and apply them to tighten up your code and make it a bit cleaner.

A slide explaining that Copilot can help with performance improvement suggestions, how to make functions more modular, adding comments for readability, upgrading syntax, and much more!

Reviewing and refactoring your code with GitHub Copilot is a great way to do an initial overview of the work you’ve done. You can also ask Copilot for performance improvement suggestions, how to make functions more modular, have it add comments, or upgrade syntax to be more modern. If you can think of a question, ask Copilot and see what it can do.

Code reviews on github.com

If you have the proper access, you can also get GitHub Copilot code reviews directly on github.com to make the process even more seamless. First, open up a pull request. Under the “Reviewers” section in the top-right corner, you’ll notice Copilot listed as a possible reviewer. Click Request to have Copilot review your code.

A screenshot showing where to request a review from Copilot, under the “Reviewers” list on the right.

Once Copilot finishes the review, scroll down on the pull request to see any suggestions that it makes. It’s important to note that Copilot always leaves a Comment review, and never an Approve or Request changes review. This means that Copilot’s reviews will never be required nor block merges.

To accept any of Copilot’s suggestions, click Commit suggestion at the bottom of the specific suggestion you’d like to integrate. This pulls up a context menu. Click Commit changes and GitHub will update your pull request with that change.

A screenshot showing the 'Commit changes' button in the drop down menu titled 'Commit suggestion.'

You can also batch several suggested changes by clicking the Add to batch button under individual suggestions so they’re applied together in a single commit.

After you’ve integrated any suggestions and made any changes, you can request another review from Copilot by clicking the circular arrows in the “Reviewers” box next to Copilot’s name.

With Copilot code review, you can have Copilot perform a preliminary review of your code before asking your team for that final code review. 

Key components and limitations

The key components of using Copilot for code review and refactoring can be broken down into five areas:

  • Automated suggestions: Copilot suggests improvements and optimizations as you review your code.
  • Consistency checks: Copilot helps maintain coding standards by suggesting consistent naming conventions and structures for your functions.
  • Refactoring assistance: Copilot provides actionable refactoring suggestions, whether it’s simplifying complex functions or reorganizing your codebase.
  • Error detection: Copilot can spot potential bugs or inefficiencies that you might have missed while building.
  • Comment support: Copilot helps generate clear comments in your code, making it easier to understand for others.

While GitHub Copilot can do a lot, it’s important to keep in mind that you are the pilot, and we call it Copilot for a reason. It’s a powerful tool, but it does have some limitations. First and foremost, it relies on the context you provide, so unclear or poorly documented code might lead to less effective suggestions.

A slide listing items Copilot can do: Assists with code review and refactoring; helps maintain clean, efficient, and consistent code; saves you time and reduces errors; and allows you to focus more on building.

In addition, while Copilot can catch many issues, it’s not a substitute for a thorough human review. Always double check the suggestions it provides to ensure they align with your project’s goals and standards, as well as your organizational policies.

Your next steps

GitHub Copilot is an invaluable assistant for code review and refactoring. It helps you maintain clean, efficient, and consistent code, saving you time and reducing errors. By integrating Copilot into your workflow, you can focus more on building great features and less on the nitty-gritty aspects of code maintenance.

If you’d like to dive a little deeper into using Copilot to help with code reviews and refactoring, here are some links to get you started:

Don’t forget that you can use GitHub Copilot for free! If you have any questions, pop them in the GitHub Community thread, and we’ll be sure to respond. Thanks so much for joining us for this season of GitHub for Beginners! Don’t forget to check out our previous episodes if you haven’t already.

Happy coding!

Need some help getting through a preliminary code review? Give GitHub Copilot a try!

The post GitHub for Beginners: Code review and refactoring with GitHub Copilot appeared first on The GitHub Blog.

June 6, 2025  16:00:00

You’ve used GitHub Copilot to help you write code in your IDE. Now, imagine assigning Copilot an issue, just like you would a teammate—and getting a fully tested pull request in return. 

That’s the power of the new coding agent in GitHub Copilot. Built directly into GitHub, this agent starts working as soon as you assign it a GitHub Issue or prompt it in VS Code. Keeping you firmly in the pilot’s seat, the coding agent builds pull requests based on the issues you assign it.

This isn’t just autocomplete. It’s a new class of software engineering agents that work asynchronously to help you move faster, clean up tech debt, and focus on the work that really matters. Let’s explore how this coding agent works and how it can help you find new ways of working faster. ✨

Oh, and if you’re a visual learner we have you covered. 👇

Coding agent in GitHub Copilot 101

This new coding agent, which is our first asynchronous software engineering agent, is built on GitHub Actions and works like a teammate. You assign it an issue, let it do the work, and then review its outputs before changing or accepting them. It also incorporates context from related issues or PR discussions and can follow custom repository instructions that your team has already set.

You assign Copilot an issue and it plans the work, opens a pull request, writes the code, runs the tests, and then asks for your review. If you leave feedback, it’ll revise the PR and keep going until you approve. 

The process isn’t instant—it takes a little time to compute and run. But it’s already helping developers work faster and more efficiently. 

According to Brittany Ellich, Senior Software Engineer at GitHub, traditional advice for devs has been to do one thing at a time, and do it well. But with the new coding agent, GitHub can now help you do more things well, like:

  • Offloading repetitive, boilerplate tasks like adding and extending unit tests
  • Maintaining better issue hygiene and documentation with quick typo fixes and small refactors
  • Improving user experience by fixing bugs, updating user interface features, and bolstering accessibility

By assigning these low- to medium-complexity tasks to the coding agent, you may finally have the bandwidth to focus on higher-level problem solving and design, tackle that tech debt that’s been piling up, learn new skills, and more.

Even though Copilot is doing the work, you’re in control the entire time: You decide what to assign, what to approve, and what should be changed.

How to get the coding agent to complete an issue

Step one: Write and assign the issue to Copilot

This is where you’ll be most involved—and this step is crucial for success. Think of writing the issue like briefing a team member: The more context you give, the better the results (like any other prompt). 

Make sure to include the following (a sample issue sketch follows this list):

  • Relevant background info: Why this task matters, what it touches, and any important history or context. 
  • Expected outcome: What “done” looks like.
  • Technical details: File names, functions, or components involved.
  • Formatting or linting rules: These are especially important if you use custom scripts or auto-generated files. You can add these instructions for Copilot so they’re automatically reflected in every issue. 
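
Putting those elements together, a well-scoped issue might read something like this (a hypothetical example; the file paths and coverage threshold are made up for illustration):

```markdown
## Add unit tests for internal/billing/invoice.go

**Background:** This file generates customer invoices and currently has no
unit tests, which makes refactoring risky.

**Expected outcome:** Table-driven tests covering the happy path plus the
zero-quantity and negative-price edge cases, with file coverage above 80%.

**Technical details:** Put tests in internal/billing/invoice_test.go and
follow the patterns in internal/billing/customer_test.go.

**Formatting:** Run `go fmt ./...` before committing.
```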

Once you’ve written the issue, it’s time to assign it to Copilot—just like you would a teammate. You can do this via github.com, the GitHub Mobile app, or through the GitHub CLI. 

Copilot works best with well-scoped tasks, but it can handle larger ones. It just might take a little bit longer. You don’t have to assign only one issue; you can batch-assign multiple issues, which is great for tasks like increasing test coverage or updating documentation.

Here are a few tips and tricks that we’ve found helpful:

  • You can use issue templates with fields like “description” and “acceptance criteria” to make writing issues easier and more consistent across your team. 
  • If your repo includes custom instructions (such as which files are auto-generated or how to run formatters), Copilot will use these to improve its output.
  • The agent can actually see images included in its assigned issues on GitHub, so you can easily share mockups of what you want a new feature to look like, and the agent can run with them. 

Step two: Copilot plans the code 

Once you assign Copilot an issue, it will add an 👀 emoji reaction. Then it will kick off an agent session using GitHub Actions, which powers the integrated, secure, and fully customizable environment the coding agent is built on. 

This environment is where Copilot can explore and analyze your codebase, run tests, and make changes. The coding agent will simultaneously open both a branch and a pull request, which will evolve as Copilot works. 

Copilot will read your issue and break it down into a checklist of tasks, then update the pull request with this checklist. As it completes each task, Copilot checks it off and pushes commits to the branch. You can watch the session live, view the session logs later, or refresh the PR to see how Copilot is reasoning through the task. These are updated regularly for increased visibility, so you can easily spot problems if they arise.

Step three: Copilot writes the code

This is where the magic happens. Once you see the “Copilot started work” event in the pull request timeline, you’ll know the wheels are turning. Here’s what happens next:

  • Copilot modifies your codebase based on the issue.
  • It runs automated tests and linters if they’re present in your repo and updates or generates tests as needed.
  • Copilot will also push commits iteratively as it completes tasks.

You can see the work happening in real time, and if you notice that something looks off, you can step in at any point to make sure things are going in the right direction before Copilot passes it back to you.

Step four: Review and merge the pull request

This is another stage where you’ll need to be involved. Once Copilot finishes the work, it will tag you for review. You can either:

  • Approve the pull request
  • Leave comments
  • Ask for changes

Copilot will automatically request reviewers based on the rules you’ve set in your repo. And if needed, you can go through multiple review cycles until you get your desired outcome—just like with a human teammate. 

Once the pull request is approved:

  • The change can now follow your repo’s merge and deploy process.
  • The agent session will end.
  • If needed, a human can take over from the branch at any time. 

🚨One important thing to note: The person who created the issue can’t be the final approver. You’ll need a peer, manager, or designated reviewer to give the green light. This promotes collaboration and ensures unreviewed or unsafe code doesn’t get merged.

And you’re done! ✅ 

Like any other tool (or teammate), Copilot’s coding agent might need a little prodding to deliver exactly the output you want. Remember, the biggest factor to success starts with how you write the issue (Copilot can also help you write those faster). 

Here are a few tips on how to get the most out of Copilot: 

  • Write comprehensive issues: Clear, scoped, and well-documented issues lead to better results.
  • Start small: Try using the agent for tests, docs, or simple refactors.
  • Troubleshooting: If Copilot gets stuck, tag it in a comment and add more context. Iterating and refining the issue requirements can also help.

Take this with you 

AI and LLMs are improving at a rapid pace. “The models we’re using today are the worst ones we’ll ever use—because they’re only getting better,” says Ellich. And coding agents are already proving useful in real workflows. 

Try using the coding agent on a sample repo. See what it can do. And start building your own agentic workflows. Happy coding!

Visit the Docs to get started with the coding agent in GitHub Copilot.

The post Assigning and completing issues with coding agent in GitHub Copilot appeared first on The GitHub Blog.