Check out the dev/core collection at thegithubshop.com.
What does it mean to be a developer? That question was at the heart of our thinking behind the new GitHub Shop collection: dev/core. The collection celebrates the developer’s layered experience—from the code, through the world of creation, to the unique identity of you, the developer, the builder, the person at the core of it all.
Ok, that sounds poetic, we hear you say. But how does that translate into merch? Our dev/core collection captures what it is to be a developer but also brings an exciting update to our core basics. Made by developers, for developers. Let’s dive into it.
A developer from head to toe
The <header> cap and <footer> socks are for those who know their way around a codebase—and an outfit. The cap kicks things off, a nod to the top of every great project. Down below, the socks wrap things up with comfort. Together, they bookend your look the way you bookend your code.
Getting back to the basics

Inspired by the all-time favorite black Invertocat hoodie, these two new builds level up your dev uniform. One features our iconic Octocat mascot reimagined in ASCII. The other reps GitHub Copilot, your favorite AI pair programmer. One nods to our roots as developers. The other looks to what’s next.
For when your brain hits Ctrl+Alt+Vibes

Throw it back to your first build—when the code was janky, the caffeine was flowing, and the dream was big. This tie-dye tee channels that raw, colorful chaos that got you into being a dev in the first place. It’s got startup energy. Garage band energy. “I learned CSS on a forum in 2004” energy.
The graph you obsess over (now in tote form)

There’s something deeply satisfying about watching your contribution graph fill up day by day, square by square, with every commit and small (or large) breakthrough. This tote celebrates that love with a contribution graph in the shape of our Invertocat, worn proudly on your side.
Write code, wear code

The ASCII tee is a tribute to the early days of building—when text was all you had and all you needed. It’s a direct nod to the roots of development, where every line of code is a building block.
Look familiar? You might recognize it from the thegithubshop.com homepage, where we’ve created your very own interactive version. You can spin it, shake it, fidget with it—perfect for when your stand-up is getting a little dull.
Made for developers, by developers
Developers are at the heart of what we do, because they’re the core of who we are. Our shop isn’t just a shop. It’s also chock-full of fun developer finds, and we’re not just talking about the swag. We’ve even added a hidden CLI: type git [space] into the search bar. Have fun!
In our dev/core collection, you can mix and match to create new patterns on our images by tapping on the dev/core pill. This unlocks a tool palette to customize the ASCII pattern, size, and speed.
The dev/core collection is more than merch—it’s a wearable nod to the builders, the dreamers, and the committers who shape the internet every day. From the clean lines of ASCII art to the playful and colorful additions, each piece is carefully designed for you. So whether you’re pushing code, sipping coffee, or staring into the abyss of your terminal, suit up in something that gets it. This is your core.
🤫 Psst… use the code “GITHUBBLOG15” at checkout to get free shipping from today until June 1. Your laptop’s looking a bit bare, btw. We’ve dropped a few new stickers in the mix too—just saying.
The post Code. Create. Commit. Welcome to dev/core appeared first on The GitHub Blog.
Editor’s note: This piece was originally published in our LinkedIn newsletter, Branching Out_. Sign up now for more career-focused content >
Pop quiz: What do healthcare, self-driving cars, and your next job all have in common?
If you guessed AI, you were right. And with 80% of developers expected to need at least a fundamental AI skill set by 2027, there’s never been a better time to dive into this field.
This blog will walk you through what you need to know, learn, and build to jump into the world of AI—using the tools and resources you already use on GitHub.
Let’s dive in.
1. Learn essential programming languages and frameworks 💬
Mastering the right programming languages and tools is foundational for anyone looking to excel in AI and machine learning development. Here’s a breakdown of the core programming languages to zero in on:
- Python: Known for its simplicity and extensive library support, Python is the cornerstone of AI and machine learning. Its versatility makes it the preferred language for everything from data preprocessing to deploying AI models. (Fun fact: Python overtook JavaScript as the number one programming language in 2024!)
- Java: With its scalability and cross-platform capabilities, Java is popular for enterprise-level applications and large-scale AI systems.
- C++: As one of the fastest programming languages, C++ is often used in performance-critical applications like gaming AI, real-time simulations, and robotics.
Beyond programming, these frameworks give you the tools to design, train, and deploy intelligent systems across real-world applications:
- TensorFlow: Developed by Google, TensorFlow is a comprehensive framework that simplifies the process of building, training, and deploying AI models.
- Keras: Built on top of TensorFlow, Keras is user-friendly and enables quick prototyping.
- PyTorch: Favored by researchers for its flexibility, PyTorch provides dynamic computation graphs and intuitive debugging tools.
- Scikit-learn: Ideal for traditional machine learning algorithms, Scikit-learn offers efficient tools for data analysis and modeling.
Spoiler alert: Did you know you can learn programming languages and AI frameworks right on GitHub? Resources like GitHub Learning Lab, The Algorithms, TensorFlow Tutorials, and PyTorch Examples provide hands-on opportunities to build your skills. Plus, tools like GitHub Copilot provide real-time coding assistance that can help you navigate new languages and frameworks easily while you get up to speed.
2. Master machine learning 🤖
Machine learning (ML) is the driving force behind modern AI, enabling systems to learn from data and improve their performance over time. It bridges the gap between raw data and actionable insights, making ML expertise a must-have if you’re looking for a job in tech. Here are some key subfields to explore:
- Deep learning: A subset of ML, deep learning uses multi-layered neural networks to analyze complex patterns in large datasets. While neural networks are used across ML, deep learning focuses on deeper architectures and powers advancements like speech recognition, autonomous vehicles, and generative AI models.
- Natural language processing (NLP): NLP enables machines to understand, interpret, and respond to human language. Applications include chatbots, sentiment analysis, and language translation tools like Google Translate.
- Computer vision: This field focuses on enabling machines to process and interpret visual information from the world, such as recognizing objects, analyzing images, and even driving cars.
Luckily, you can explore ML right on GitHub. Start with open source repositories like Awesome Machine Learning for curated tools and tutorials, Keras for deep learning projects, NLTK for natural language processing, and OpenCV for computer vision. Additionally, try real-world challenges by searching for Kaggle competition solutions on GitHub or contribute to open source AI projects tagged with “good first issue” to gain hands-on experience.
3. Build a GitHub portfolio to showcase your skills 💼
A strong GitHub portfolio highlights your skills and AI projects, setting you apart in the developer community. Here’s how to optimize yours:
- Organize your repositories: Use clear names, detailed README files, and instructions for others to replicate your work.
- Feature your best work: Showcase projects in areas like NLP or computer vision, and use tags to improve discoverability.
- Create a profile README: Introduce yourself with a professional README that includes your interests, skills, and standout projects.
- Use GitHub Pages: Build a personal site to host your projects, case studies, or interactive demos.
- Contribute to open source: Highlight your open source contributions to show your collaboration and technical expertise.
For detailed guidance, check out the guides on Building Your Stunning GitHub Portfolio and How to Create a GitHub Portfolio.
4. Get certified in GitHub Copilot 🏅
Earning a certification in GitHub Copilot showcases your expertise in leveraging AI-powered tools to enhance development workflows. It’s a valuable credential that demonstrates your skills to employers, collaborators, and the broader developer community. Here’s how to get started:
- Understand GitHub Copilot: GitHub Copilot is an AI agent designed to help you write code faster and more efficiently. Familiarize yourself with its features, such as real-time code suggestions, agent mode in Visual Studio Code, model context protocol (MCP), and generating boilerplate code across multiple programming languages.
- Explore certification options: GitHub offers certification programs through its certification portal. These programs validate your ability to use GitHub tools effectively, including GitHub Copilot. They also cover key topics like AI-powered development, workflow automation, and integration with CI/CD pipelines.
- Prepare for the exam: Certification exams typically include theoretical and practical components. Prepare by exploring GitHub Copilot’s official documentation, completing hands-on exercises, and working on real-world projects where you utilize GitHub Copilot to solve coding challenges.
- Earn the badge: Once you complete the exam successfully, you’ll receive a digital badge that you can showcase on LinkedIn, your GitHub profile, or your personal portfolio. This certification will enhance your resume and signal to employers that you’re equipped with cutting-edge AI development tools.
Check out this LinkedIn guide for tips on becoming a certified code champion with GitHub Copilot.

The post Vibe coding: Your roadmap to becoming an AI developer appeared first on The GitHub Blog.
GitHub is honored to take the Global Accessibility Awareness Day (GAAD) Pledge, reaffirming our commitment to improving accessibility in open source software. Through our work, our aim is to empower people with disabilities to contribute to open source, increase the availability of open source Assistive Technologies, and enhance the accessibility of mainstream open source projects.
Joe Devon initially proposed the idea for GAAD in a 2011 blog post because he was frustrated by the lack of information about accessibility for developers. With the help of accessibility advocate Jennison Asuncion, Joe’s proposal led to the first GAAD in May 2012, which has since evolved into an annual global event that reaches millions of people. In 2020, the GAAD Foundation launched the GAAD Pledge to incorporate accessibility into the core of open source projects, and now, GitHub is proud to join this important initiative.
Our pledge
Our pledge will focus on the following interdependent goals:
- Empower people with disabilities to contribute to open source
- Increase the availability and adoption of open source assistive technologies
- Increase the accessibility of mainstream open source projects
Read on for how we plan on executing these goals:
Empower people with disabilities to contribute to open source
Given that technology is a ubiquitous and essential part of modern life, and approximately 16% of the human population, or 1.3 billion people, have a disability, it is critical that people with disabilities are able to contribute to the development of the technology that is used by all of humanity. When people with disabilities contribute, we increase the probability that the resulting technologies will be usable by everyone.
For example, consider the story of Becky Tyler. Becky is a bright, engaging, and tenacious young woman with quadriplegic cerebral palsy who interacts with her computer exclusively by using her eyes. Becky started off simply wanting to play Minecraft, but accessibility barriers led her down a path beyond mining ore and into the world of open source software where she began learning to code. She now attends the University of Dundee, where she studies Applied Computing.
We need to build more inclusive open source communities that better represent the rich diversity of humanity. In order for everyone to contribute, we need to remove barriers that block people with disabilities from development platforms and tools. Those barriers include a lack of keyboard operability, insufficient color contrast, and incompatibility with assistive technologies such as screen readers. Removing accessibility barriers will enable every developer to keep exercising their craft.
Increase the availability and adoption of free and open source Assistive Technologies
Many people with disabilities require Assistive Technology to access a computer or perform the basic functions of daily living. According to the Assistive Technology Industry Association, assistive technology is “any item, piece of equipment, software program, or product system that is used to increase, maintain, or improve the functional capabilities of persons with disabilities.”
The challenge is that proprietary Assistive Technology products can be very expensive. That challenge is exacerbated by the fact that people with disabilities are less likely to be employed and less likely to have earned an advanced degree, according to the U.S. Bureau of Labor Statistics.
On GitHub, anyone can create assistive technologies that improve access for people with disabilities. There are no financial or bureaucratic hurdles. In addition, open source licenses allow anyone to use those assistive technologies. They also enable like-minded individuals to form communities that support and improve the assistive technologies.
For example, Jamie Teh and Michael “Mick” Curran created the NVDA screen reader, a free, high-quality screen reader for the Microsoft Windows operating system. Over the past 15 years, they have built a community that includes hundreds of blind developers and contributors, as well as more than 250,000 users.
We need more free and open source assistive technologies like NVDA. We also need to increase awareness of those alternatives within the global community of people with disabilities and among the people who support them, such as caregivers, occupational therapists, speech therapists, and other assistive technology and rehabilitation professionals.
Increase the accessibility of mainstream open source projects
The world runs on open source software. For example, 90% of companies use open source [1], 97% of codebases contain open source [2], 70–90% of the code within commercial tools comes from open source [3], and the value of OSS globally is estimated to be $8.8 trillion [4].
It is absolutely essential that computing infrastructure, frameworks, and libraries are designed with accessibility in mind so downstream consumers of those projects can also access them. Investment in upstream open source projects not only makes it possible for consuming applications to be accessible; it can also have an outsized impact on the accessibility of downstream projects. On the other hand, if accessibility is not embedded in upstream projects, it can be impossible or very expensive for downstream projects to support accessibility.
There are additional benefits beyond the accessibility of the projects themselves. Popular open source projects set trends for the entire software industry. When those communities include accessibility as a core requirement, they raise expectations and educate developers across the industry.
We need to increase the accessibility of mainstream open source projects from three perspectives:
- End users: so end users with disabilities can use applications and content that is built with open source software.
- Consumers: so developers and Information Technology (IT) professionals with disabilities can access the documentation, videos, and enablement materials that are required to consume open source projects and build on them.
- Contributors: so developers and other types of contributors with disabilities can join open source communities, contribute, and enjoy the benefits of learning and sharing with a like-minded group of creators.
Our strategy
We recognize that these goals are never “done.” By definition, technology evolves constantly and, as a result, we consider accessibility to be an ongoing practice rather than a task that can be completed once. We will use the following strategies to steward progress towards the goals going forward:
- Improve our platform
- Build partnerships
- Empower open source communities
Improve our platform
Over the past three years, GitHub has invested heavily in the accessibility of our platform. We’ve integrated accessibility into our product development life cycle (PDLC) and removed many barriers that may have prevented people with disabilities from building on GitHub. For example, we have resolved more than 4,400 accessibility issues within our platform since January 1, 2022.
Going forward, we will continue to remove barriers from our platform and shift accessibility left in our PDLC, so we can increase our ability to ship new features and products that are accessible by default when they become generally available. We will also identify opportunities to improve our platform so default options are accessible and gentle nudges guide developers towards more accessible outcomes.
Build partnerships
Knowing that GitHub is just one part of the ecosystem of developer platforms and tools, this pledge is also a call to action for the entire technology industry. We invite individuals and organizations to join us as we continue to work toward a more equitable world.
We are particularly eager to partner with other organizations through their Open Source Program Offices (OSPOs) across the private sector, public sector, academia, and NGOs. We have seen global accessibility regulations become stronger over the past few years. Recent examples include the European Accessibility Act and the new rule for Title II of the Americans with Disabilities Act. We expect that trend to continue in the future. As a result, organizations in every sector will benefit from improvements in open source accessibility, and we believe OSPOs are a conduit for building partnerships that can help accelerate those improvements.
We are also eager to collaborate with the global community of approximately 1.3 billion people with disabilities and the advocacy organizations that represent them. Accessibility is not something we should do “for” people with disabilities. Rather, it is something we should do “with” people with disabilities. Or, better yet, people with disabilities should have the opportunity to lead accessibility. We will invite people with disabilities to join the open source movement and contribute.
Empower open source communities
We are eager to help open source maintainers build diverse and inclusive communities with cultures that value accessibility. We recognize that open source communities themselves are diverse in size, from communities of one contributor to communities of thousands. We will continue to engage maintainers to co-create opportunities to meet open source communities where they are and help move them to the next stage of their accessibility journey.
What’s next?
Incorporating accessibility as a core principle of open source software is a journey that is already well underway, and GitHub is humbled and honored to be a part of this movement. Looking ahead, we’re excited to share that we are organizing an Open Source Accessibility Summit, which will be a space where members of the disability, accessibility, and open source communities can come together to explore our shared goals and define next steps. Stay tuned to the GitHub Blog for more details in the near future.
Are you a developer with a disability? Learn more about GitHub accessibility.
References:
1. GitHub. 2022. “Octoverse 2022: The state of open source software.” https://octoverse.github.com/2022/. & OpenUK. 2021. “State of Open: The UK in 2021.” https://openuk.uk/wp-content/uploads/2021/10/openuk-state-of-open_final-version.pdf.
2. Blackduck. 2025. “Six takeaways from the 2025 “Open Source Security and Risk Analysis” report.” https://www.blackduck.com/blog/open-source-trends-ossra-report.html.
3. The Linux Foundation. 2022. “A Summary of Census II: Open Source Software Application Libraries the World Depends On.” https://www.linuxfoundation.org/blog/blog/a-summary-of-census-ii-open-source-software-application-libraries-the-world-depends-on. & Intel. 2025. “The Careful Consumption of Open Source Software.” https://www.intel.com/content/www/us/en/developer/articles/guide/the-careful-consumption-of-open-source-software.htm.
4. Harvard Business School. 2024. “The Value of Open Source Software.” https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4693148.
The post Our pledge to help improve the accessibility of open source software at scale appeared first on The GitHub Blog.
In April, we experienced three incidents that resulted in degraded performance across GitHub services.
April 11 03:05 UTC (lasting 39 minutes)
On April 11, 2025, from 03:05 UTC to 03:44 UTC, approximately 75% of Codespaces users faced create and start failures. These were caused by manual configuration changes to an internal dependency that escaped our test coverage. Our monitors and detection mechanism triggered, which helped us triage, revert the changes, and restore service health.
We are working on building additional gates and safer mechanisms for testing and rolling out such configuration changes. We expect no further disruptions.
April 23 07:00 UTC (lasting 20 minutes)
On April 23, 2025, between 07:00 UTC and 07:20 UTC, multiple GitHub services experienced degradation caused by resource contention on database hosts. The resulting error rates, which ranged from 2–5% of total requests, led to intermittent service disruption for users. The issue was triggered by an interaction between query load and an ongoing schema change that led to connection saturation. The incident recovered after the schema migration was completed.
Our prior investments in monitoring and improved playbooks helped us effectively organize our first responder teams, leading to faster triaging of the incident. We have also identified a regression in our schema change tooling that led to increased resource utilization during the schema migration, and we have reverted to a previous stable version.
To prevent similar issues in the future, we are reviewing the capacity of the database, improving monitoring and alerting systems, and implementing safeguards to reduce time to detection and mitigation.
April 23 19:13 UTC (lasting 42 minutes)
On April 23, 2025, between 19:13:50 UTC and 22:11:00 UTC, GitHub’s Migration service experienced elevated failures caused by a configuration change that removed access for repository migration workers. During this time, 837 migrations across 57 organizations were affected. Impacted migrations required a retry after the log message “Git source migration failed. Error message: An error occurred. Please contact support for further assistance.” was displayed. Once access was restored, normal operations resumed without further interruption.
As a result of this incident, we have implemented enhanced test coverage and refined monitoring thresholds to help prevent similar disruptions in the future.
Please follow our status page for real-time updates on status changes and post-incident recaps. To learn more about what we’re working on, check out the GitHub Engineering Blog.
The post GitHub Availability Report: April 2025 appeared first on The GitHub Blog.
With all the work involved in creating and maintaining a project, sometimes writing documentation can slip through the cracks. However, good docs are a huge asset to any project. Consider the benefits:
- Better collaboration: Clear, consistent documentation ensures everyone’s on the same page, from your immediate team to outside stakeholders. Additionally, docs promote independent problem solving, saving core contributors the time and effort of answering every question.
- Smoother onboarding: By providing ways to get started, explaining core concepts, and including tutorial-style content, good documentation allows new team members to ramp up quickly.
- Increased adoption: The easier it is to understand, set up, and run your project, the more likely someone will use it.
With these benefits in mind, let’s take a look at some important principles of documentation, then dive into how you can quickly create effective docs for your project.
Key tenets of documentation
There are three key principles you should follow as you document your project.
Keep it clear
Use plain language that’s easy to understand. The goal is to make your documentation as accessible as possible. A good guideline is to ask yourself if there are any acronyms or technical terms in your documentation that some folks in your target audience won’t understand. If that’s the case, either swap them for simpler language, or make sure they’re defined in your document.
Keep it concise
Document only necessary information. Trying to cover every possible edge case will overwhelm your readers. Instead, write docs that help the vast majority of readers get started, understand core concepts, and use your project.
Additionally, keep each document focused on a particular topic or task. If you find yourself including information that isn’t strictly necessary, move it into separate, smaller documents and link to them when it’s helpful.
Keep it structured
Consider the structure of each document as you write it to make sure it is easy to scan and understand:
- Put the most important information first to help readers quickly understand if a document is relevant to them.
- Use headings and a table of contents to tell your readers where to find specific information. We suggest using documentation templates with common headings to quickly and consistently create structured content.
- Use text highlighting like boldface and formatting elements like bulleted lists to help readers scan content. Aim for 10% or less text highlighting to make sure emphasized text stands out.
- Be consistent with your styling. For example, if you put important terminology in bold in one document, do the same in your other content.
Organizing your documentation
Just as there are principles to follow when writing individual documents, you should also follow a framework for organizing documents in your repo.
There are many approaches to organizing documentation in your repo, but one that we’ve used for several projects and recommend is the Diátaxis framework. This is a systematic approach to organizing all the documents relevant to your project.
Applying a systematic approach to documenting your repositories can make it easier for users to know where to go to find the information that they need. This reduces frustration and gets folks contributing to your project faster.
Diátaxis divides documents based on their purpose into four categories:
- Tutorials: Learning-oriented documents
- How-to guides: Goal-oriented instructions for specific tasks
- Explanation: Discussions providing understanding of the project
- Reference: Technical specifications and information
Each document in your repository should fit into one of these categories. This helps users quickly find the appropriate resource for their current situation, whether they need to learn a new concept, solve a specific problem, understand underlying principles, or look up technical details.
This can also be a helpful guide to identify which documentation your repository is missing. Is there a tool your repository uses that doesn’t have a reference document? Are there enough tutorials for contributors to get started with your repository? Are there how-to guides to explain some of the common tasks that need to be accomplished within your repository?
Organizing your documentation according to this framework helps ensure you’re taking a holistic approach to building and maintaining key content for your project.
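For example, a docs folder organized along Diátaxis lines might look something like this (an illustrative layout, not a prescription):
docs/
  tutorials/      learning-oriented walkthroughs, e.g. “Your first deployment”
  how-to/         goal-oriented guides, e.g. “Rotate an API key”
  explanation/    understanding-oriented discussions, e.g. “Why we chose this architecture”
  reference/      technical specifications, e.g. CLI flags and configuration options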
The post Documentation done right: A developer’s guide appeared first on The GitHub Blog.
Originally, Issues search was limited to a simple, flat query structure. But with advanced search syntax, you can now construct searches using logical AND/OR operators and nested parentheses, pinpointing the exact set of issues you care about.
Building this feature presented significant challenges: ensuring backward compatibility with existing searches, maintaining performance under high query volume, and crafting a user-friendly experience for nested searches. We’re excited to take you behind the scenes to share how we took this long-requested feature from idea to production.
Here’s what you can do with the new syntax and how it works behind the scenes
Issues search now supports building queries with logical AND/OR operators across all fields, with the ability to nest query terms. For example, is:issue state:open author:rileybroughten (type:Bug OR type:Epic) finds all issues that are open AND were authored by rileybroughten AND are either of type Bug or Epic.

How did we get here?
Previously, as mentioned, Issues search only supported a flat list of query fields and terms, which were implicitly joined by a logical AND. For example, the query assignee:@me label:support new-project translated to “give me all issues that are assigned to me AND have the label support AND contain the text new-project.”
But the developer community has been asking for more flexibility in issue search, repeatedly, for nearly a decade now. They wanted to be able to find all issues that had either the label support or the label question, using the query label:support OR label:question. So, we shipped an enhancement towards this request in 2021, when we enabled an OR-style search using a comma-separated list of values.
However, they still wanted the flexibility to search this way across all issue fields, and not just the labels field. So we got to work.
Technical architecture and implementation

From an architectural perspective, we swapped out the existing search module for Issues (IssuesQuery) with a new search module (ConditionalIssuesQuery) that was capable of handling nested queries while continuing to support existing query formats.
This involved rewriting IssuesQuery, the search module that parsed query strings and mapped them into Elasticsearch queries.

To build a new search module, we first needed to understand the existing search module, and how a single search query flowed through the system. At a high level, when a user performs a search, there are three stages in its execution:
- Parse: Breaking the user input string into a structure that is easier to process (like a list or a tree)
- Query: Transforming the parsed structure into an Elasticsearch query document, and making a query against Elasticsearch.
- Normalize: Mapping the results obtained from Elasticsearch (JSON) into Ruby objects for easy access and pruning the results to remove records that had since been removed from the database.
Each stage presented its own challenges, which we’ll explore in more detail below. The Normalize step remained unchanged during the rewrite, so we won’t dive into that one.
Parse stage
The user input string (the search phrase) is first parsed into an intermediate structure. The search phrase could include:
- Query terms: The relevant words the user is trying to find more information about (ex: “models”)
- Search filters: These restrict the set of returned search documents based on some criteria (ex: “assignee:Deborah-Digges”)
Example search phrases:
- Find all issues assigned to me that contain the word “codespaces”: is:issue assignee:@me codespaces
- Find all issues with the label documentation that are assigned to me: assignee:@me label:documentation
The old parsing method: flat list
When only flat, simple queries were supported, it was sufficient to parse the user’s search string into a list of search terms and filters, which would then be passed along to the next stage of the search process.
The new parsing method: abstract syntax tree
As nested queries may be recursive, parsing the search string into a list was no longer sufficient. We changed this component to parse the user’s search string into an Abstract Syntax Tree (AST) using the parsing library parslet.
We defined a grammar (a PEG or Parsing Expression Grammar) to represent the structure of a search string. The grammar supports both the existing query syntax and the new nested query syntax, to allow for backward compatibility.
A simplified PEG for a boolean expression, written for the parslet parser, is shown below:
class Parser < Parslet::Parser
  rule(:space)  { match[" "].repeat(1) }
  rule(:space?) { space.maybe }

  rule(:lparen) { str("(") >> space? }
  rule(:rparen) { str(")") >> space? }

  rule(:and_operator) { str("and") >> space? }
  rule(:or_operator)  { str("or") >> space? }

  rule(:var) { str("var") >> match["0-9"].repeat(1).as(:var) >> space? }

  # The primary rule deals with parentheses.
  rule(:primary) { lparen >> or_operation >> rparen | var }

  # Note that following rules are both right-recursive.
  rule(:and_operation) {
    (primary.as(:left) >> and_operator >>
      and_operation.as(:right)).as(:and) |
    primary
  }

  rule(:or_operation) {
    (and_operation.as(:left) >> or_operator >>
      or_operation.as(:right)).as(:or) |
    and_operation
  }

  # We start at the lowest precedence rule.
  root(:or_operation)
end
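For instance, running this parser over the string “var1 and (var2 or var3)” yields a nested hash along these lines (parslet’s slice position annotations are omitted here for readability):
Parser.new.parse("var1 and (var2 or var3)")
# => { and: { left:  { var: "1" },
#             right: { or: { left:  { var: "2" },
#                            right: { var: "3" } } } } }
The real grammar works the same way, just with filter terms and free text instead of numbered variables.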
For example, this user search string: is:issue AND (author:deborah-digges OR author:monalisa) would be parsed into the following AST:
{
  "root": {
    "and": {
      "left": {
        "filter_term": {
          "attribute": "is",
          "value": [
            {
              "filter_value": "issue"
            }
          ]
        }
      },
      "right": {
        "or": {
          "left": {
            "filter_term": {
              "attribute": "author",
              "value": [
                {
                  "filter_value": "deborah-digges"
                }
              ]
            }
          },
          "right": {
            "filter_term": {
              "attribute": "author",
              "value": [
                {
                  "filter_value": "monalisa"
                }
              ]
            }
          }
        }
      }
    }
  }
}
Query
Once the query is parsed into an intermediate structure, the next steps are to:
- Transform this intermediate structure into a query document that Elasticsearch understands
- Execute the query against Elasticsearch to obtain results
Executing the query in step 2 remained the same between the old and new systems, so let’s only go over the differences in building the query document below.
The old query generation: linear mapping of filter terms using filter classes
Each filter term (ex: label:documentation) has a class that knows how to convert it into a snippet of an Elasticsearch query document. During query document generation, the correct class for each filter term is invoked to construct the overall query document.
The new query generation: recursive AST traversal to generate Elasticsearch bool query
We recursively traversed the AST generated during parsing to build an equivalent Elasticsearch query document. The nested structure and boolean operators map nicely to Elasticsearch’s boolean query, with the AND, OR, and NOT operators mapping to the must, should, and must_not clauses.
We re-used the building blocks for the smaller pieces of query generation to recursively construct a nested query document during the tree traversal.
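As a rough illustration of that traversal (a minimal sketch, not the production module: the method name, symbol keys, and the leaf mapping to a terms filter are simplifications, and the real code reuses the per-filter classes described above), the recursion looks something like this:
def build_es_query(node)
  operator, body = node.first
  case operator
  when :root
    build_es_query(body)
  when :and
    # AND nodes become a bool query whose children must all match.
    { bool: { must: [build_es_query(body[:left]), build_es_query(body[:right])] } }
  when :or
    # OR nodes become a bool query where at least one child should match.
    { bool: { should: [build_es_query(body[:left]), build_es_query(body[:right])] } }
  when :filter_term
    # Leaves turn into term-level snippets for their attribute.
    { terms: { body[:attribute] => body[:value].map { |v| v[:filter_value] } } }
  end
end
Because each branch returns a complete query fragment, nesting to any depth falls out of the recursion for free.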
Continuing from the example in the parsing stage, the AST would be transformed into a query document that looked like this:
{
  "query": {
    "bool": {
      "must": [
        {
          "bool": {
            "must": [
              {
                "bool": {
                  "must": {
                    "prefix": {
                      "_index": "issues"
                    }
                  }
                }
              },
              {
                "bool": {
                  "should": {
                    "terms": {
                      "author_id": [
                        "<DEBORAH_DIGGES_AUTHOR_ID>",
                        "<MONALISA_AUTHOR_ID>"
                      ]
                    }
                  }
                }
              }
            ]
          }
        }
      ]
    }
    // SOME TERMS OMITTED FOR BREVITY
  }
}
With this new query document, we execute a search against Elasticsearch. This search now supports logical AND/OR operators and parentheses to search for issues in a more fine-grained manner.
Considerations
Issues is one of the oldest and most heavily used features on GitHub. Changing core functionality like Issues search, a feature with an average of nearly 2,000 queries per second (QPS)—that’s almost 160M queries a day!—presented a number of challenges to overcome.
Ensuring backward compatibility
Issue searches are often bookmarked, shared among users, and linked in documents, making them important artifacts for developers and teams. Therefore, we wanted to introduce this new capability for nested search queries without breaking existing queries for users.
We validated the new search system before it even reached users by:
- Testing extensively: We ran our new search module against all unit and integration tests for the existing search module. To ensure that the GraphQL and REST API contracts remained unchanged, we ran the tests for the search endpoint both with the feature flag for the new search system enabled and disabled.
- Validating correctness in production with dark-shipping: For 1% of issue searches, we ran the user’s search against both the existing and new search systems in a background job, and logged differences in responses. By analyzing these differences we were able to fix bugs and missed edge cases before they reached our users.
- We weren’t sure at the outset how to define “differences,” but we settled on “number of results” for the first iteration. In general, if the same search returned a different number of results from the two systems when run within a second or less of each other, that was a strong signal the new behavior would surprise the user.
Preventing performance degradation
We expected more complex nested queries to use more resources on the backend than simpler queries, so we needed to establish a realistic baseline for nested queries, while ensuring no regression in the performance of existing, simpler ones.
For 1% of Issue searches, we ran equivalent queries against both the existing and the new search systems. We used scientist, GitHub’s open source Ruby library for carefully refactoring critical paths, to compare the performance of equivalent queries and ensure that there was no regression.
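The pattern looks roughly like this (a sketch only: the wrapper class and the .perform calls are assumptions for illustration, while the science/use/try structure follows the gem’s documented API):
require "scientist"

class IssuesSearchExperiment
  include Scientist

  def search(query_string)
    science "nested-issues-search" do |experiment|
      # Control: the existing flat-query module; its result is what users receive.
      experiment.use { IssuesQuery.perform(query_string) }
      # Candidate: the new nested-query module; run and measured, but never returned.
      experiment.try { ConditionalIssuesQuery.perform(query_string) }
    end
  end
end
Scientist measures both code paths and reports timing and result mismatches to whatever publishing hook you configure, which is what makes this kind of comparison safe to run against production traffic.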
Preserving user experience
We didn’t want users to have a worse experience than before just because more complex searches were possible.
We collaborated closely with product and design teams to ensure usability didn’t decrease as we added this feature by:
- Limiting the number of nested levels in a query to five. From customer interviews, we found this to be a sweet spot for both utility and usability.
- Providing helpful UI/UX cues: We highlight the AND/OR keywords in search queries, and provide users with the same auto-complete feature for filter terms in the UI that they were accustomed to for simple flat queries.
Minimizing risk to existing users
For a feature that is used by millions of users a day, we needed to be intentional about rolling it out in a way that minimized risk to users.
We built confidence in our system by:
- Limiting blast radius: To gradually build confidence, we only integrated the new system in the GraphQL API and the Issues tab for a repository in the UI to start. This gave us time to collect, respond to, and incorporate feedback without risking a degraded experience for all consumers. Once we were happy with its performance, we rolled it out to the Issues dashboard and the REST API.
- Testing internally and with trusted partners: As with every feature we build at GitHub, we tested this feature internally for the entire period of its development by shipping it to our own team during the early days, and then gradually rolling it out to all GitHub employees. We then shipped it to trusted partners to gather initial user feedback.
And there you have it, that’s how we built, validated, and shipped the new and improved Issues search!
Feedback
Want to try out this exciting new functionality? Head to our docs to learn about how to use boolean operators and parentheses to search for the issues you care about!
If you have any feedback for this feature, please drop us a note on our community discussions.
Acknowledgements
Special thanks to AJ Schuster, Riley Broughten, Stephanie Goldstein, Eric Jorgensen, Mike Melanson, and Laura Lindeman for the feedback on several iterations of this blog post!
The post GitHub Issues search now supports nested queries and boolean operators: Here’s how we (re)built it appeared first on The GitHub Blog.
Welcome to the next episode in our GitHub for Beginners series, where we’re diving into the world of GitHub Copilot. This is our sixth episode, and we’ve covered quite a lot of ground. You can check out all our previous episodes on our blog or as videos.
Today we’re going to use GitHub Copilot to help us build a frontend project using React. In the previous episode, we created a backend API for the travel itinerary builder, Planventure. We’ll continue that work and create a React client that leverages our API to interact with Planventure. To see a full description of what we’ll be building, go to this repository and switch to the client-start branch to get started.
What you’ll need
Before we get started, here’s what you’ll need:
- A code editor like VS Code
- The latest version of Node.js
- A package manager like npm
- Access to GitHub Copilot — sign up for free!
Alternatively, you can use a GitHub Codespace to build in the cloud—you’ll still need Copilot access if you’re using a codespace. You can click the Open in GitHub Codespaces button in the repo.

What we’re building
We’re creating a frontend app that connects to the backend API we created in the previous episode. We’ll be using the React library, so it’s recommended that you’re familiar with using React to build client side applications. More specifically, we’ll be using:
- React with Vite for the client.
- Material UI as the component library.
Our goal is to build a working frontend app that has the following features:
- Authenticate users
- Add protected routes
- Add trips and itinerary information
- Edit existing itineraries
Let’s get started!
Step 1: Initial setup
Before we get started, we need to create the appropriate working environment.
- Clone the Planventure repository by opening your terminal in your code editor and running the following command.
git clone https://github.com/github-samples/planventure
- Navigate to the planventure-client directory and switch to the client-start branch.
cd planventure-client
git switch client-start
- Install necessary dependencies.
npm install
- Start the server.
npm run dev
- Open a browser to http://localhost:5173 to verify the app is running.

- Become familiar with the code by examining the existing files. Note that some basic components have already been installed and configured. Open Copilot Chat and send it the following prompt to get a basic summary of the existing code:
@workspace Tell me about the configuration setup in the react app.
Now that the initial setup is complete, take a look at the GitHub issue to read detailed information about what we need to build.
Step 2: Create login and registration forms
The first thing we need to add is authentication. We’ll do this by building login and registration forms. But first, we need to create an AuthLayout component to use for all authenticated routes.
- Open Copilot Chat and use the model selector to select the Claude 3.5 Sonnet model.
- Send Copilot the following prompt.
@workspace Create AuthLayout component with navigation and centered content.
- Hover over the proposed solution, click …, and then select Insert into New File.
- Review the added code and save the file. You should always review the code provided by Copilot so that you understand what it is doing and make sure it addresses your prompt.
Now that we’ve created the AuthLayout component, it’s time to build a login form to implement it.
- Send the following prompt to Copilot Chat.
@workspace Build a LoginForm component with email/password fields and validation.
- Create a new folder named auth under the src/components folder.
- Navigate back to Copilot Chat, hover over the proposed solution, click …, and then select Insert into New File.
- Review the added code and save the file in the auth folder.

- Return to Copilot Chat and choose Edits from the dropdown.
- Use the Add Files button to add the following files if they are not already listed in the working set:
- AuthLayout.jsx
- LoginForm.jsx
- Routes.jsx
- Send Copilot Edits the following prompt.
Create a new loginPage. Update route and authLayout as needed.

- Review and accept all the changes from Copilot Edits. Don’t forget to save your files.
- Send the following prompt to Copilot Edits to update the Home component.
Update the navbar to use the new loginpage and add a get started button to the home page that routes to the login page.
- Review the code and make any necessary changes. Then save all of the updated files.
- Navigate back to the browser page and refresh it to see the latest changes.

- Commit your changes to the repository. You can use Copilot to automatically generate a commit message by clicking the sparkle button in the top-right corner of the commit message box.

Now we have a Get started button on the UI and the login page. Next we need to create a sign up page so that new users can register.
- Open up Copilot Chat and send it the following prompt.
@workspace Create SignupForm component matching the login form style and a new SignUpPage. Be sure to update routing.
- Hover over the proposed solution to add a SignupForm component in the auth folder, click …, and then select Insert into New File.
- Hover over the proposed solution to add a SignupPage component in the pages folder, click …, and then select Insert into New File.
- For each file that has proposed changes:
- Open the relevant file in your editor.
- Hover over the proposed changes to the file, and click the Apply in editor button.
- Review and accept the changes.
- Save the file.
- Refresh your browser and test the new pages.
- Commit your changes.
For the final piece to this part of the puzzle, we need to update the AuthContext.jsx file.
- Open the AuthContext.jsx file in your editor. It’s located in the context folder.
- Open Copilot Chat and send it the following prompt.
Setup authentication context and token management #file:ProtectedRoute.jsx
- For each file that has proposed changes:
- Open the relevant file in your editor.
- Hover over the proposed changes to the file, and click the Apply in editor button.
- Review and accept the changes.
- Save the file.
- Commit your changes.
Step 3: Use the API for authentication
Now that we have the necessary forms, it’s time to make requests to the backend server so users can register and log in. To do this, we’re going to need to edit three files at once. Luckily, Copilot Chat supports editing multiple files at the same time!
- Send Copilot Chat the following prompt.
@workspace Setup api service functions for login and register routes. Update login and signup forms to use api service. #file:SignupForm.jsx
- For each file that has proposed changes:
- Open the relevant file in your editor.
- Hover over the proposed changes to the file, and click the Apply in editor button.
- Review and accept the changes.
- Save the file.
- Create a new terminal window and cd into the planventure-api directory, then start the server by running the following command:
flask run --debug
- Refresh your browser and try logging into the app. If you run into an error, attempt to debug it with GitHub Copilot. Review the video version of this episode to see some examples of possible things you might need to debug.
- Try signing up a new user and debug any errors that come up.
Now that we’ve verified that we can log in and create new users, we want to add a logout option to the navigation bar.
- Send Copilot Chat the following prompt.
@workspace update navbar to include the logout function
- For each file that has proposed changes:
- Open the relevant file in your editor.
- Hover over the proposed changes to the file, and click the Apply in editor button.
- Review and accept the changes.
- Save the file.
- Head back to the browser, refresh it, and test the Logout button. Note that you need to be logged in to see the Logout button.
- Commit all of these changes to your repository.
Congratulations! You’ve now successfully added user authentication!
Step 4: Add a dashboard and sidebar navigation
To ensure that users have easy access to common functionality, we want to create a dashboard with sidebar navigation.
- Click the Copilot button at the top of the window and select Open Copilot Edits.
- Use the Add Files button to add the following files if they are not already listed in the working set:
- App.jsx
- AuthContext.jsx
- AuthLayout.jsx
- LoginForm.jsx
- Navbar.jsx
- SignupForm.jsx
- Send Copilot Edits the following prompt.
Build a dashboard layout component with sidebar navigation.
- Review and accept the suggested code changes, then save the files.
- Refresh your browser to see the new dashboard.
Now that we have a layout, we need to create a TripCard and TripList component to showcase the user’s trips.
- Send the following prompt to Copilot Edits to create the necessary components.
Create TripCard and TripList components for displaying trips with loading states.
- Review and accept the suggested code changes.
- Send the following prompt to add a dashboard component displaying the user’s trips.
Create a dashboard component that displays the trips that users are routed to on login.
- Review and accept the suggested code changes.
- Users should receive a welcome message if they don’t have any trips. They should also get a message if there are any errors when they log in. To add these elements, send the following prompt:
Update the dashboard component to show a welcome message to users if they have no trips. If there is an error receiving trips, display the image with a message that's quirky and apologetic.
- Review and accept the suggested code changes.
- Now let’s fetch our user trips from the API. Send the following prompt to Copilot Edits:
Update to fetch trips from the flask api.
- Review and accept the suggested code changes, then save the files.
- Return to your browser and refresh the page.
- Test the new functionality by logging in. You should see your trips, a welcome message, or a fun and quirky error message. Try to test all three possibilities.
- Once you’ve verified these changes are working (after debugging any errors that crop up), commit your changes to the repository.
Once we have this piece working, we’re almost there!
Step 5: Enable trip management
Now we need to add the ability for users to manage their trips. In order to do this, we’ll need to start by creating a new form that allows users to add their trips and save them to the dashboard. Then we’ll need to make a POST request to the /trips/new route.
- Open up your terminal and install the necessary dependencies.
npm install dayjs @mui/x-date-pickers
- Open up Copilot Edits and send it the following prompt.
Create NewTripForm with destination and date inputs. Use the dayjs library.
- Review and accept the suggested code changes, then save the files.
- Navigate back to your browser and refresh the page.
- Click the ADD NEW TRIP button and test the form.
- Commit your changes.
Next up, we want to add the ability for users to manage their itineraries.
- Send Copilot Edits the following prompt.
Create ItineraryDay and TimeSlot components for managing daily activities with editing capabilities.
- Review and accept the suggested code changes, then save the files.
- Send the following prompt to Copilot to encourage users to provide itineraries for their trips.
If the user doesn't have an itinerary for their trip, prompt them to add an itinerary. Also include a default itinerary template.
- Review and accept the suggested code changes, then save the files.
- Return to your browser and refresh the page.
- Click VIEW DETAILS for a trip, then click the ITINERARY tab. You should see a prompt encouraging you to create an itinerary for this trip. Try adding an itinerary, and verify that the app provides a default template.
- Navigate back to Copilot Edits and send the following prompt to give users the ability to edit their trips.
Allow users to edit their trip details.
- Review and accept the suggested code changes, then save the files.
- Return to your browser and refresh the page.
- Try editing a trip and verify that you can change the details.
- Commit your changes.
As a finishing touch, let’s add the ability for users to add additional information about where they’ll be staying and how they’ll be traveling.
- Send the following prompt to Copilot Edits.
Allow users to add their accommodations and transportation details to the Overview page.
- Review and accept the suggested code changes, then save the files.
- Return to your browser, refresh the page, and test this new functionality.

- Don’t forget to commit your changes!
Your next steps
Impressive! You’ve now built a fully functional MVP of the application in a short time, thanks to GitHub Copilot. This is the power of an AI tool that just works.
Don’t forget that you can use GitHub Copilot for free! If you have any questions, pop them in the GitHub Community thread, and we’ll be sure to respond. Join us for the next part in this series, where we’ll build a full app using this API we created.
Happy coding!
The post GitHub for Beginners: Building a React App with GitHub Copilot appeared first on The GitHub Blog.
Claude 3.7 Sonnet, Gemini 2.5 Pro, GPT-4… developer choice is key to GitHub Copilot, and that’s especially true when it comes to picking your frontier model of choice.
But with so many frontier generative AI models now available to use with GitHub Copilot (and more coming seemingly every day), how do you pick the right one for the job—especially with the growing capabilities of Copilot Chat, edit, ask, and agent modes?
In a recent video, I worked with GitHub’s Developer Advocate Kedasha Kerr (aka @ladykerr) to answer this exact question. Our goal? To build the same travel‑reservation app three different ways with Copilot ask, edit, and agent modes while swapping between Copilot’s growing roster of foundation models to compare each AI model in real-world development workflows.
We set out to build a very simple travel‑reservation web app (think “browse hotel rooms, pick dates, book a room”). To keep the demo snappy, we chose a lightweight stack:
- Backend: Flask REST API
- Frontend: Vue.js, styled with Tailwind
- Data: a local data.json file instead of a real database
That gave us just enough surface area to compare models while scaffolding the app, wiring up endpoints, and adding tests, docs, and security tweaks along the way.
Here are a few key takeaways from our video (which you should watch).
But first, let’s talk about Copilot’s three modes
GitHub Copilot gives you three distinct “modes”: ask, edit, and agent. Ask is there to answer questions, edit is a precise code‑rewriting scalpel, and agent mode can drive an entire task from your prompt to the finished commit. Think of it this way: ask answers, edit assists, agent executes.
Tip 1: No matter what model you use, context matters more than you think
The model you use is far from the only variable, and the context you offer your model of choice is often one of the most important elements.
That means the way you shape your prompt—and the context you provide Copilot with your prompt and additional files—makes a big difference in output quality. By toggling between capabilities, such as Copilot agent or edit mode, and switching models mid-session, we explored how Copilot responds when fed just the right amount of detail—or when asked to think a few steps ahead.
Our demo underscores that different modes impact results, and thoughtful prompting can dramatically change a model’s behavior (especially in complex or ambiguous coding tasks).
The takeaway: If you’re not shaping your prompts and context deliberately, you’re probably leaving performance on the table.
For a deeper dive into model choice, the guide “Which AI model should I use with GitHub Copilot?” offers a comprehensive breakdown.
Tip 2: Copilot agent mode is a powerful tool
Agent mode, which is still relatively new and evolving fast, allows Copilot to operate more autonomously by navigating files, making changes, and performing repository-wide tasks with minimal hand holding.
This mode opens up new workflow possibilities (especially for repetitive or large-scale changes). But it also demands a different kind of trust and supervision. Seeing it in action helps demystify where it fits in your workflows.
Here are two ways we used agent mode in our demo:
- One‑click project scaffolding: Kedasha highlighted the project README and simply told Copilot “implement this.” Agent mode (running Gemini 2.5 Pro) created the entire Flask and Vue repository with directories, boiler‑plate code, unit tests, and even seeded data.
- End‑to‑end technical docs: I started using agent mode with Claude 3.5 and prompted: “Make documentation for this app … include workflow diagrams in Mermaid.” Copilot generated a polished README, API reference, and two Mermaid sequence/flow diagrams, then opened a preview so I could render the charts before committing.
Tip 3: Use custom instructions to set your ground rules
Another insight from the session is just how much mileage you can get from customizing Copilot’s behavior with custom instructions.
If you're not familiar with them, custom instructions let you lay down the rules before Copilot suggests anything (like how APIs need to be called, naming conventions, and style standards).
Kedasha in particular underscored how custom instructions can tailor tone, code style, and task focus to fit your workflow—or your team’s.
One example? Using custom instructions to give every model the same ground rules, so swaps between each model produced consistent, secure code without re‑explaining standards each time.
Whether you’re nudging Copilot to avoid over-explaining, stick to a certain stack, or adopt a consistent commenting voice, the customization options are more powerful than most people realize. If you haven’t personalized Copilot yet, try custom instructions (and check out our Docs on them to get started).
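To show what that can look like in practice, here's a small, purely illustrative repository instructions file; the rules below are our own example, not guidance from the Copilot docs. Repository-wide custom instructions typically live in a .github/copilot-instructions.md file:

```markdown
<!-- .github/copilot-instructions.md: an illustrative example, not official guidance -->

- Use Flask for all backend endpoints and keep new routes in app.py.
- Style frontend components with Tailwind utility classes; avoid inline styles.
- Never hardcode secrets or API keys; read them from environment variables.
- Add a unit test for every new endpoint.
- Keep comments concise and avoid restating what the code already says.
```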
Tip 4: The balance between speed and output quality
No matter what model you use, there are always tradeoffs between responsiveness, completeness, and confidence. A larger model may not provide quick suggestions when you’re working through an edit, for instance—but a smaller model may not offer the best refactoring suggestions, even if it’s faster in practice.
TL;DR: It’s not about chasing the “best” model—it’s about knowing when to switch, and why. Your default model might work 80% of the time—but having others on deck lets you handle edge cases faster and better.
Take this with you
This video demo isn’t a scripted feature demo. It’s two devs using Copilot the way you would—navigating unknowns, poking at what’s possible, and figuring out how to get better results by working smarter instead of harder.
If you’ve been sticking with the defaults or haven’t explored multi-model workflows, this is your invitation to take things further.
👉 Watch the full video to see how we put Copilot to work—and got more out of every mode, prompt, and model.
Now—what will you build? Try GitHub Copilot to get started (we have a free tier that’s pretty great, too).
Additional resources:
- Explore the demo repository: Try forking our demo repository from the video to test our different models with GitHub Copilot.
- Which AI model should I use with GitHub Copilot? A look into each model currently offered with GitHub Copilot from Cassidy Williams.
- A guide to deciding what model to use in GitHub Copilot: A framework for figuring out which model to use when new models are appearing what feels like every day (or maybe every week).
The post Real‑world video demo: Using different AI models in GitHub Copilot appeared first on The GitHub Blog.
In part one of our design system annotation series, we discussed the ways in which accessibility can get left out of design system components from one instance to another. Our solution? Using a set of “Preset annotations” for each component with Primer. This allows designers to include specific pre-set details that aren’t already built into the component and visually communicated in the design itself.
That said, Preset annotations are unique to each design system. While ours may be a helpful reference for how to build them, they're not something other organizations can use unless they're also using the Primer design system.
Luckily, you can build your own. Here’s how.
How to make Preset annotations for your design system
Start by assessing your components to understand which ones need Preset annotations—not all of them will. Prioritize the components that would benefit most from having a Preset annotation. Then, for each one, determine which properties to include: only key information that isn't conveyed visually, isn't in the component properties, and isn't already baked into the coded component.

Prioritizing components
When a design system has 60+ components, knowing where to start can be a challenge. Which components need these annotations the most? Which ones would have the highest impact for both design teams and our users?
When we set out to create a new set of Preset annotations based on our proof of concept, we decided to use ten Primer components that would benefit the most. To help pick them, we used an internal tool called Primer Query that tracks all component implementations across the GitHub codebase as well as any audit issues connected to them. Here is a video breakdown of how it works, if you’re curious.
We then prioritized new Preset annotations based on the following criteria:
- Components that align with organization priorities (i.e., high-value products and/or those that receive a lot of traffic).
- Components that appear frequently in accessibility audit issues.
- Components with React implementations (as our preferred development framework).
- Most frequently implemented components.
Mapping out the properties
For each component, we cross-referenced multiple sources to figure out what component properties and attributes would need to be added in each Preset annotation. The things we were looking for may only exist in one or two of those places, and thus are less likely to be accounted for all the way through the design and development lifecycle. The sources include:
Component documentation on Primer.style
Design system docs should contain usage guidance for designers and developers, and accessibility requirements should be a part of this guidance as well. Some of the guidance and requirements get built into the component’s Figma asset, while some only end up in the coded component.
Look for any accessibility requirements that are not built into either Figma or code. If it’s built in, putting the same info in the Preset annotation may be redundant or irrelevant.
Coded demos in Storybook
Our component sandbox helped us see how each component is built in React or Rails, as well as what the HTML output is. We looked for any code structure or accessibility attributes that are not included in the component documentation or the Figma asset itself—especially when they may vary from one implementation to another.
Component properties in the Figma asset library
Library assets provide a lot of flexibility through text layers, image fills, variants, and elaborate sets of component properties. We paid close attention to these options to understand what designers can and can't change. Accessibility attributes, requirements, and usage guidance that appear in other sources but aren't built into the Figma component are worthwhile additions to a Preset annotation.
Other potential sources
- Experiences from team members: The designers, developers, and accessibility specialists you work with may have insight into things that the docs and design tools may have missed. If your team and design system have been around for a while, their insights may be more valuable than those you’ll find in the docs, component demos, or asset libraries. Take some time to ask which components have had challenging bugs and which get intentionally broken when implemented.
- Findings from recent audits: Design system components themselves may have unresolved audit issues and remediation recommendations. If that’s the case, those issues are likely present in Storybook demos and may be unaccounted for in the component documentation. Design system audit issues may have details that both help create a Preset annotation and offer insights about what should not be carried over from existing resources.
What we learned from creating Preset annotations
Preset annotations may not be for every team or organization. However, they are especially well suited for younger design systems and those that aren’t well adopted.
Mature design systems like Primer have frequent updates. This means that without close monitoring, the design system components themselves may fall out of sync with how a Preset annotation is built. This can end up causing confusion and rework after development starts, so it may be wise to make sure there’s some capacity to maintain these annotations after they’ve been created.
For newer teams at GitHub, new members of existing teams, and team members who were less familiar with the design system, the built-in guidance and links to documentation and component demos proved very useful. Those who are more experienced are also able to fine-tune the Presets and how they’re used.
If you don’t already have extensive experience with the design system components (or peers to help build them), it can take a lot of time to assess and map out the properties needed to build a Preset. It can also be challenging to name a component property succinctly enough that it doesn’t get truncated in Figma’s properties panel. If the context is not self-evident, some training or additional documentation may help.
It’s not always clear that you need a Preset annotation
There may be enough overlap between the Preset annotation for a component and types of annotations that aren’t specific to the design system.
For example, the GitHub Annotation Toolkit has components to annotate basic <textarea> form elements in addition to a Preset annotation for our <TextArea> Primer component:

In many instances, this flexibility may be confusing because you could use either annotation. For example, the Primer <TextArea> Preset has built-in links to specific Primer docs, and while the non-Preset version doesn't, you could always add the links manually. While there's some overlap between the two, using either one is better than none.
One way around this confusion is to add Primer-specific properties to the default set of annotations. This would allow you to do things like toggle a boolean property on a normal Button annotation and have it show links and properties specific to your design system’s button component.
Our Preset creation process may unlock automation
A number of existing Figma plugins advertise the ability to scan a design file to help with annotations. That said, the results are often mixed, with an unmanageable amount of noise and false positives. One of the reasons these issues happen is that these public plugins are design system agnostic.
Without bespoke programming or thorough training of AI models, current automated annotation tools can't tell that design system components are being used at all. For plugins like this to label design elements accurately, they first need to understand how to identify the components on the canvas, the variants used, and the set properties.

With that in mind, perhaps the most exciting insight is that the process of mapping out component properties for a Preset annotation—the things that don’t get conveyed in the visual design or in the code—is also something that would need to be done in any attempt to automate more usable annotations.
In other words, if a team uses a design system and wants to automate adding annotations, the tool they use would need to understand their components. In order for it to understand their components well enough to automate accurately, these hidden component properties would need to be mapped out. The task of creating a set of Preset annotations may be a vital stepping stone to something even more streamlined.
A promising new method: Figma’s Code Connect
While building our new set of Preset annotations, we experimented with other ways to enhance Primer with annotations. Though not all of those experiments worked out, one of them did: adding accessibility attributes through Code Connect.
Primer was one of the early adopters of Figma’s new Code Connect feature in Dev Mode. Says Lukas Oppermann, our staff systems designer, “With Code Connect, we can actually move the design and the code a little bit further apart again. We can concentrate on creating the best UX for the designers working in Figma with design libraries and, on the code side, we can have the best developer experience.”
To that end, Code Connect allows us to bypass much of the need for our Preset annotations, as well as the downsides of some of our other experiments. It does this by adding key accessibility details directly into the code that developers can export from Figma.
GitHub’s Octicons are used in many of our Primer components. They are decorative by default, but they sometimes need alt text or aria-label attributes depending on how they’re used. The IconButton component, for example, uses an Octicon and needs an accessible name to describe its function.
When using a basic annotation kit, this may mean adding stamps for a Button and a Decorative Image, as well as a note in the margins that specifies what the aria-label should be. When using Preset annotations, there are fewer things to add to the canvas and the annotation process takes less time.
With Code Connect set up, Lukas added a hidden layer in the IconButton Figma component. It has a text property for aria-label, which lets designers add the value directly from the component properties panel. No annotations needed. The hidden layer doesn’t disrupt any of the visuals, and the aria-label property gets exported directly with the rest of the component’s code.
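To give a sense of what that wiring can look like on the code side, here's a rough sketch of a Code Connect mapping. The file URL, node ID, and property names are placeholders of ours, not Primer's actual configuration:

```tsx
// IconButton.figma.tsx: illustrative only; see Figma's Code Connect docs for the real setup.
import figma from "@figma/code-connect";
import { IconButton } from "@primer/react";
import { XIcon } from "@primer/octicons-react";

figma.connect(IconButton, "https://www.figma.com/design/<file>?node-id=<node>", {
  props: {
    // Reads the text property on the hidden layer, so the accessible name a
    // designer types in Figma travels with the exported code.
    ariaLabel: figma.string("aria-label"),
  },
  example: ({ ariaLabel }) => <IconButton aria-label={ariaLabel} icon={XIcon} />,
});
```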

It takes time to set up Code Connect with each of your design system components. Here are a few tips to help:
- Consistency is key. Make sure that the properties you create and the way you place hidden layers are consistent across components. This helps set clear expectations so your teams can understand how these hidden layers and properties function.
- Use a branch of your design system library to experiment. Hiding attributes like aria-label is quite simple compared to other complex information that Preset annotations are capable of handling.
- Use visual regression testing (VRT). Adding complexity directly to a component comes with increased risk of things breaking in the future, especially for those with many variants. Figma’s merge conflict UI is helpful, but may not catch everything.
As we continue to innovate with annotations and make our components more accessible, we are aiming to release our GitHub Annotation Toolkit in the near future. Stay tuned!
Further reading
Accessibility annotation kits are a great resource, provided they’re used responsibly. Eric Bailey, one of the contributors to our forthcoming GitHub Annotation Toolkit, has written extensively about how annotations can highlight and amplify deeply structural issues when you’re building digital products.
The post Design system annotations, part 2: Advanced methods of annotating components appeared first on The GitHub Blog.
When it comes to design systems, every organization tends to be at a different place in their accessibility journey. Some have put a great deal of work into making their design system accessible while others have a long way to go before getting there. To help on this journey, many organizations rely on accessibility annotations to make sure there are no access barriers when a design is ready to be built.
However, it’s a common misconception (especially for organizations with mature design systems) that accessible components will result in accessible designs. While design systems are fantastic for scaling standards and consistency, they can’t prevent every issue with our designs or how we build them. Access barriers can still slip through the cracks and make it into production.
This is the root of the problem our Accessibility Design team set out to solve.
In this two-part series, we’ll show you exactly how accessible design system components can produce inaccessible designs. Then we’ll demonstrate our solution: integrating annotations with our Primer components. This allows us to spend less time annotating, increases design system adoption, and reaches teams who may not have accessibility support. And in our next post, we’ll walk you through how you can do the same for your own components.
Let’s dig in.
What are annotations and their benefits?
Annotations are notes included in design projects that help make the unseen explicit by conveying design intent that isn’t shown visually. They improve the usability of digital experiences by providing a holistic picture for developers of how an experience should function. Integrating annotations into our design process helps our teams work better together by closing communication gaps and preventing quality issues, accessibility audit issues, and expensive re-work.
Some of the questions annotations help us answer include:
- How is assistive technology meant to navigate a page from one element to another?
- What’s the alternative text for informative images and buttons without labels?
- How does content shift depending on viewport size, screen orientation, or zoom level?
- Which virtual keyboard should be used for a form input on mobile?
- How should focus be managed for complex interactions?
Our answers to questions like this—or the lack thereof—can make or break the experience of the web for a lot of people, especially users with disabilities. Some annotation tools are built specifically to help with this by guiding designers to include key details about web standards, platform functionality, and accessibility (a11y).
Most public annotation kits are well suited for teams who are creating new design system components, teams who aren’t already using a design system, or teams who don’t have specialized accessibility knowledge. They usually help annotate things like:
- Controls such as buttons and links
- Structural elements such as headings and landmarks
- Decorative images and informative descriptions
- Forms and other elements that require labels and semantic roles
- Focus order for assistive technology and keyboard navigation
GitHub’s annotation toolkit
One of our top priorities is to meet our colleagues where they’re at. We wanted all our designers to be able to use annotations out of the box because we believe they shouldn’t need to be a certified accessibility specialist in order to get things built in an accessible way.

To this end, last year we began creating an internal Figma library—the GitHub Annotation Toolkit (which we aim to release to the public soon). Our toolkit builds on the legacy of the former Inclusive Design team at CVS Health. Their two open source annotation kits help make documentation that’s easy to create and consume, and are among the most widely used annotation libraries in the Figma Community.
While they add clarity, annotations can also add overhead. If teams are only relying on specialists to interpret designs and technical specifications for developers, the hand-off process can take longer than it needs to. To create our annotation toolkit, we rebuilt its predecessor from the ground up to avoid that overhead, making extensive improvements and adding inline documentation to make it more intuitive and helpful for all of our designers—not just accessibility specialists.
Design systems can also help reduce that overhead. When you audit your design systems for accessibility, there’s less need for specialist attention on every product feature, since you’re using annotations to add technical semantics and specialist knowledge into every component. This means that designers and developers only need to adhere to the usage guidelines consistently, right?
The problems with annotations and design system components
Unfortunately, it’s not that simple.
Accessibility is not binary
While design systems can help drive more accessible design at scale, they are constantly evolving and the work on them is never done. The accessibility of any component isn’t binary. Some may have a few severe issues that create access barriers, such as being inoperable with a keyboard or missing alt text. Others may have a few trivial issues, such as generic control labels.
Most of the time, it will be a misnomer to claim that your design system is “fully accessible.” There’s always more work to do—it’s just a question of how much. The Web Content Accessibility Guidelines (WCAG) are a great starting point, but their Success Criteria aren’t tailored to the unique context of your website, product, or audience.
While the WCAG should be used as a foundation to build from, it’s important to understand that it can’t capture every nuance of disabled users’ needs because your users’ needs are not every user’s needs. It would be very easy to believe that your design system is “fully accessible” if you never look past WCAG to talk to your users. If Primer has accessible components, it’s because we feel that direct participation and input from daily assistive technology users is the most important aspect of our work. Testing plans with real users—with and without disabilities—is where you really find what matters most.
Accessible components do not guarantee accessible designs
Arranging a series of accessible components on a page does not automatically create an accurate and informative heading hierarchy. There’s a good chance that without additional documentation, the heading structure won’t make sense visually—nor as a medium for navigating with assistive technology.

It’s great when accessible components are flexible and responsive, but what about when they’re placed in a layout that the component guidance doesn’t account for? Do they adapt to different zoom levels, viewport sizes, and screen orientations? Do they lose any functionality or context when any of those things change?
Component usage is contextual. You can add an image or icon to your design, but the design system docs can’t write descriptive text for you. You can use the same image in multiple places, but the image description may need to change depending on context.
Similarly, forms built using the same input components may do different things and require different error validation messages. It’s no wonder that adopting design system components doesn’t get rid of all audit issues.
Design system components in Figma don’t include all the details
Annotation kits don’t include components for specific design systems because almost every organization is using their own. When annotation kits are adopted, teams often add ways to label their design system components.
This labeling lets developers know they can use something that’s already been built, and that they don’t need to build something from scratch. It also helps identify any design system components that get ‘detached’ in Figma. And it reduces the number of things that need to be annotated.
Let’s look at an example:

If we’re using this Primer Button component from the Primer Web Figma library, there are a few important things that we won’t know just by looking at the design or the component properties:
- Functional differences when components are implemented. Is this a link that just looks visually like a button? If so, a developer would use the <LinkButton> React component instead of <Button>.
- Accessible labels for folks using assistive technology. The icon may need alt text. In some cases, the button text might need some visually-hidden text to differentiate it from similar buttons. How would we know what that text is? Without annotations, the Figma component doesn’t have a place to display this.
- Whether user data is submitted. When a design doesn’t include an obvious form with input fields, how do we convey that the button needs specific attributes to submit data?
It’s risky to leave questions like this unanswered, hoping someone notices and guesses the correct answer.
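For illustration only (a sketch of our own, not guidance lifted from the Primer docs), here's how those three hidden details might change what actually gets written in React:

```tsx
import { Button, LinkButton } from "@primer/react";

// 1. Looks like a button, behaves like a link: use LinkButton, not Button.
export const ViewPlans = () => <LinkButton href="/plans">View plans</LinkButton>;

// 2. The visible text alone is ambiguous, so assistive technology needs a
//    more specific accessible name than "Remove".
export const RemoveItem = ({ name }: { name: string }) => (
  <Button aria-label={`Remove ${name}`}>Remove</Button>
);

// 3. Visually identical to any other button, but it submits user data, so the
//    type attribute matters.
export const SaveSettings = () => <Button type="submit">Save</Button>;
```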
A solution that streamlines the annotation process while minimizing risk
When creating new components, a set of detailed annotations can be a huge factor in how robust and accessible they are. Once the component is built, design teams can start to add instances of that component in their designs. When those designs are ready to be annotated, those new components shouldn’t need to be annotated again. In most cases, it would be redundant and unnecessary—but not in every case.
There are some important details in many Primer components that may change from one instance to another. If we use the CVS Health annotation kit out of the box, we should be able to capture those variations, but we wouldn’t be able to avoid those redundant and unnecessary annotations. As we built our own annotation toolkit, we built a set of annotations for each Primer component to do both of those things at once.

This accordion component has been thoroughly annotated so that an engineer has everything they need to build it the first time. These annotations include heading levels, semantics for <details> and <summary> elements, landmarks, and decorative icons. All of this is built into the component, so we don’t need to annotate most of it when adding the accordion to our new designs.
However, there are two important things we need to annotate, as they can change from one instance to another:
- The optional title at the top.
- The heading level of each item within the accordion.
If we don’t specify these things, we’re leaving it to chance that the page’s heading structure will break or that the experience will be confusing for people to understand and navigate the page. The risks may be low for a single button or basic accordion, but they grow with pattern complexity, component nesting, interaction states, duplicated instances, and so on.
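To see why those two details have to travel with each instance, here's a simplified sketch of our own (not Primer's actual accordion implementation) in which the heading level is a per-instance prop:

```tsx
import type { ReactNode } from "react";

// Simplified for illustration: not the Primer component.
type AccordionItemProps = {
  title: string;
  headingLevel?: "h2" | "h3" | "h4"; // changes per instance, so it must be annotated
  children: ReactNode;
};

export function AccordionItem({ title, headingLevel = "h3", children }: AccordionItemProps) {
  const Heading = headingLevel; // rendered as <h2>, <h3>, or <h4>
  return (
    <details>
      <summary>
        <Heading>{title}</Heading>
      </summary>
      {children}
    </details>
  );
}
```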

Instead of annotating what’s already built into the component or leaving these details to chance, we can add two quick annotations. One Stamp to point to the component, and one Details annotation where we fill in some blanks to make the heading levels clear.
Because the prompts for specific component details are pre-set in the annotation, we call them Preset annotations.

Introducing our Primer A11y Preset annotations
With this proof of concept, we selected ten frequently used Primer components for the same treatment and built a new set of Preset annotations to document these easily missed accessibility details—our Primer A11y Presets.
Those Primer components tend to contribute to more accessibility audit issues when key details are missing on implementation. Issues for these components relate to things like missing labels, missing error validation messages, and missing HTML or ARIA attributes.

Each of our Preset annotations is linked to component docs and Storybook demos. This will hopefully help developers get straight to the technical info they need without designers having to find and add links manually. We also included guidance for how to fill out each Preset, as well as how to use the component in an accessible way. This helps designers get support inline without leaving their Figma canvas.
Want to create your own? Check out Design system annotations, part 2
Button components in Google’s Material Design, Shopify’s Polaris, IBM’s Carbon, and our Primer design system are all very different from one another. Because Preset annotations are based on specific components, they only work if you’re also using the design system they’re made for.
In part 2 of this series, we’ll walk you through how you can build your own set of Preset annotations for your design system, as well as some different ways to document important accessibility details before development starts.
You may also like:
If you’re more of a visual learner, you can watch Alexis Lucio explore Preset annotations during GitHub’s Dev Community Event to kick off Figma’s Config 2024.
The post Design system annotations, part 1: How accessibility gets left out of components appeared first on The GitHub Blog.