Open Source Has a Bot Problem
The Glama team published a piece in March titled "Open Source Has a Bot Problem", and it landed with me. Not because it said anything I didn't already suspect, but because it said it clearly and with receipts.
The short version: AI agents are flooding open source repositories with automated pull requests, issues, and contributions. Volume is up. Signal-to-noise is down. Maintainers are drowning. The people who keep the infrastructure of the internet running, mostly unpaid and mostly invisible, are now also having to perform triage on a wave of synthetic contributions generated by tools that don’t have to care about their time.
This is not a neutral development.
What Is Actually Happening
The mechanism is straightforward. AI coding assistants — GitHub Copilot, Cursor, Devin, and a growing range of agentic systems — are being set loose on open source projects. Some of this is intentional: developers using AI to help them write a fix and open a PR. Some of it is fully automated: agents trawling issue trackers, identifying fixable bugs, and submitting PRs without any human in the loop. Some of it is well-intentioned but low quality. Some of it is outright spam.
The effect on maintainers is cumulative. Every PR needs to be evaluated. Even a PR that gets rejected in ten seconds still had to be opened, read, and actioned. At scale, the maintenance cost of evaluating synthetic contributions can exceed the cost of the original contributions. You end up in a situation where open source maintainers are spending more time reviewing AI work than humans are spending creating it.
That is not a good trade.
The Signal Problem
The harder problem is not volume. It is signal.
Human contributions are varied in quality. But they tend to carry a particular kind of signal: the person opening the PR has a stake in the outcome. They hit the bug. They needed the feature. They care whether it gets merged because they need to use the result. That stake is not always visible in the diff, but it shapes the contribution — the issue description, the test cases added, the discussion in the comments when reviewers push back.
AI contributions don’t have that stake. They can write perfectly formatted commit messages, comprehensive test suites, and polite PR descriptions. They can be indistinguishable from human contributions on the surface. But the contribution is optimising for merge, not for correctness, long-term maintainability, or alignment with the project’s direction. Those are different objectives.
When maintainers can no longer distinguish high-stake human contributions from zero-stake automated ones, they lose the ability to triage by quality signal. Everything looks the same. Everything has to be evaluated to the same depth. That is an enormous hidden tax on unpaid work.
What This Means for Contributor Guidelines
Most CONTRIBUTING.md files were written for a world where contributors were humans. They assume the person reading the file wants to contribute something useful, understands the codebase at some level, and will be embarrassed if they break things. They set social expectations. They work as a filter because humans respond to social expectations.
Bots don’t respond to social expectations.
The Glama post raises the question of what new signals are needed, and I think the right response is practical: contributor guidelines need to evolve to explicitly address AI-assisted and automated contributions. Not to ban them — banning them doesn’t work and misses the genuine value AI can add when used well. But to set expectations that apply regardless of how code was generated:
- Disclose AI assistance. If a contribution was substantially generated by an AI tool, say so. Not as a disclaimer but as context. It helps reviewers calibrate expectations and ask better questions.
- Human review is not optional. A contribution that was generated by an AI and submitted without human review is not a contribution — it is a task that has been transferred to the maintainer. The submitter is responsible for what they submit.
- Quality standards apply equally. AI-generated code is not exempt from the same standards as human-generated code. Tests need to pass. The code needs to be readable. The change needs to solve the stated problem and not introduce new ones. “The AI wrote it” is not an explanation for a gap.
- Interaction is a responsibility. If you open a PR, you are responsible for responding to review comments. An automated agent that opens a PR and cannot engage in the subsequent conversation is offloading that cost to the maintainer.
These are not radical demands. They are just the same things you would ask of any contributor, applied explicitly to the new context.
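As a concrete illustration, a PR description that meets these expectations might look like the sketch below. The issue number, tool name, and section headings are all hypothetical; adapt them to the project's conventions.

```markdown
Fixes #123 <!-- hypothetical issue number -->

## What changed
Short description of the fix and why this approach was chosen.

## AI assistance
Initial patch generated with Cursor; I reviewed the diff, ran the
test suite locally, and added a regression test by hand.

## Tradeoffs
Anything a reviewer should weigh: performance, API surface,
follow-up work deferred to a later PR.
```

The disclosure section costs the submitter two sentences and saves the reviewer the guessing game of calibrating how much scrutiny the diff needs.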
What I Am Doing About It
I maintain a GitHub template repository that I use as the base for new projects. It includes standard tooling, pre-commit hooks, and configuration. It does not currently have a CONTRIBUTORS.md. That is an omission I have been meaning to address for a while, and this felt like the right moment to do it properly.
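For context, the pre-commit tooling in a template like this typically looks something like the following. This is a minimal, hypothetical `.pre-commit-config.yaml` sketch, not the template's actual configuration; hook versions are illustrative.

```yaml
# Hypothetical minimal pre-commit configuration for a template repo.
# Pin `rev` to a real release tag before use.
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.6.0
    hooks:
      - id: trailing-whitespace   # strip stray whitespace
      - id: end-of-file-fixer     # ensure files end with a newline
      - id: check-yaml            # validate YAML syntax
      - id: check-merge-conflict  # catch leftover conflict markers
```

Hooks like these are exactly the automated checks the proposed guidelines expect every contribution, AI-assisted or not, to pass before review is requested.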
I have prepared a CONTRIBUTORS.md for the template — a pull request to add it to abuxton/github-template is coming. The file is meant to be a starting template — something you can copy into a new project and adapt — rather than a comprehensive policy document. The proposed content is at the end of this post.
The key addition, compared to most generic CONTRIBUTING.md files, is an explicit section on AI-assisted contributions. It is not hostile to AI tooling. I use AI tooling every day and find it genuinely useful. But it asks that contributions meet the same standards regardless of how they were produced, and that contributors take responsibility for what they submit rather than treating the act of submission as a cost-free operation.
The Maintainer Problem Is a Shadow Engineering Problem
The Glama post frames this primarily as a technical problem — detection, rate limiting, signals. I think that framing misses something.
The deeper problem is that open source maintainership is already one of the most undervalued and under-resourced roles in the software industry. Maintainers do invisible work that makes everything else possible. They are the shadow engineers of the software commons. When we add a new source of uncompensated toil to their lives, we are not making a neutral technical choice. We are making a decision about whose time is worth protecting.
If you use AI tooling to contribute to open source projects — and I think you should, done well — the question to ask yourself is not “can the AI write a valid PR?” It is “am I making this maintainer’s life easier or harder?” A contribution that requires significant reviewer effort to evaluate, that does not come with context or a stake in the outcome, that offloads the cost of verification onto someone who is already working for free: that is not a contribution. That is extraction.
The bot problem in open source is a maintainer problem. And the maintainer problem is a question of who bears the cost of automation, and whether we are comfortable with the answer.
What Good Looks Like
I want to be clear that AI-assisted open source contribution can be genuinely excellent. The engineers doing it well are using AI as a force multiplier for the things that are hard — reading a large codebase quickly, understanding a complex bug, writing test coverage — and then bringing their own judgment, their own stake, and their own responsibility to the contribution.
They read the issue carefully. They understand the codebase before they try to change it. They test their change, not just run the CI pipeline. They write a PR description that explains the problem, the approach, and what tradeoffs they made. They respond to review comments thoughtfully. The AI helped with the code. The human is responsible for the contribution.
That is exactly what contributor guidelines should be asking for. It is exactly what the CONTRIBUTORS.md I have prepared for the template asks for. It is not a high bar. But it is a real one.
The CONTRIBUTORS.md proposed for abuxton/github-template is designed to be adapted rather than adopted wholesale. Full content below.
Proposed CONTRIBUTORS.md
# Contributing to this project

Thank you for taking the time to contribute. This document sets out the expectations for contributions to this repository — what we ask of contributors, and what contributors can expect in return.

## Before You Contribute

- Read the `README.md` to understand what this project does and who it is for.
- Check the open issues and pull requests to avoid duplicating work in progress.
- For non-trivial changes, open an issue first to discuss the proposed change before investing time in an implementation.

## How to Contribute

1. Fork the repository and create your branch from `main`.
2. Make your changes, following the code style and conventions in the project.
3. Ensure that all tests pass and no new warnings are introduced.
4. Write a clear commit message following [Conventional Commits](https://www.conventionalcommits.org/) format.
5. Open a pull request with a description that explains what changed, why, and what tradeoffs were made.

## Pull Request Standards

Every pull request should:

- **Solve a stated problem.** Reference the issue it addresses (`Fixes #123` or `Relates to #123`).
- **Be appropriately sized.** Prefer small, focused PRs over large ones that change many things at once.
- **Include tests.** New behaviour should be covered. Bug fixes should include a test that would have caught the bug.
- **Pass CI.** All pre-commit hooks and automated checks must pass before requesting review.
- **Be ready for discussion.** The submitter is expected to engage with review comments. A PR is the start of a conversation, not a handoff.

## AI-Assisted Contributions

AI coding tools are in wide use, and contributions that are partially or substantially generated by AI are welcome, subject to the following expectations.

### Disclose AI assistance

If a contribution was substantially generated by an AI tool, say so in the PR description. A note like "Generated with GitHub Copilot / Cursor / [tool name], reviewed and tested by [author]" is sufficient. This is context, not a disclaimer — it helps reviewers understand the contribution and ask better questions.

### Human review is not optional

Submitting a contribution means taking responsibility for it. If an AI tool generated the code, the submitter is responsible for reviewing it, testing it, and being confident it is correct before opening a PR. An AI-generated contribution that has not been reviewed by the submitter is not a contribution — it is work transferred to the maintainer.

### Quality standards apply equally

AI-generated code is held to the same standards as human-generated code. Tests must pass. The code must be readable. The change must solve the stated problem and must not introduce new ones. "The AI wrote it" is not an explanation for a gap in quality.

### Engage with the review

The submitter is responsible for responding to review comments, including on AI-generated code. Automated agents that open PRs and cannot engage in the subsequent conversation are not able to contribute to this project.

### What good AI-assisted contribution looks like

- The submitter understands the problem being solved, not just the code submitted.
- The AI was used to accelerate work (reading a complex codebase, writing test coverage, exploring approaches) — not to replace the submitter's judgment.
- The PR description reflects the submitter's understanding of the change, not an AI-generated summary they have not read.
- The submitter can defend the tradeoffs made in the implementation.

## Code of Conduct

Be respectful. Disagreement about technical choices is fine; personal criticism is not. Maintainers reserve the right to close contributions that do not meet these standards without extensive explanation.

## Maintainer Note

This project is maintained in spare time. Response times will vary. If a contribution sits without review for more than two weeks, a polite follow-up comment on the PR is welcome.
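Since the guidelines reference Conventional Commits, here is the shape a conforming commit message takes. The scope, description, and issue number are hypothetical; the `type(scope): summary` structure is what the specification defines.

```text
fix(parser): handle empty input without raising

The tokenizer assumed at least one token was present. Guard the
empty-input case and add a regression test covering it.

Fixes #123
```

Common types include `feat`, `fix`, `docs`, `refactor`, `test`, and `chore`; a `!` after the type (or a `BREAKING CHANGE:` footer) marks a breaking change.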