This tool checks public repositories for signs of AI-led development: it scans commit history for co-authorship by AI coding agents, and the file tree for configuration files belonging to tools like Copilot, Cursor, Claude Code, and others.
It does not attempt to detect AI-generated code by analyzing style or patterns. It only looks at the metadata these tools leave behind; interpreting that metadata is still up to you.
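For concreteness, here is a minimal sketch of that kind of metadata scan. It is not this tool's actual implementation: the marker lists are illustrative (real ones would be longer), and it assumes you have a local clone of the repository.

```python
import re
import subprocess
from pathlib import Path

# Illustrative markers only; the tool's real lists likely differ.
AI_COAUTHOR = re.compile(
    r"^Co-authored-by:.*(copilot|cursor|claude|devin)",
    re.IGNORECASE | re.MULTILINE,
)
AGENT_CONFIG_FILES = [
    ".cursorrules",                     # Cursor
    "CLAUDE.md",                        # Claude Code
    ".github/copilot-instructions.md",  # GitHub Copilot
    "AGENTS.md",                        # shared agent instructions
]

def scan_repo(repo: str) -> dict:
    """Collect AI metadata signals from a local clone."""
    # Co-authored-by trailers are a standard git commit convention,
    # and most coding agents add one when they author a commit.
    bodies = subprocess.run(
        ["git", "-C", repo, "log", "--format=%B"],
        capture_output=True, text=True, check=True,
    ).stdout
    return {
        "ai_coauthored_commits": AI_COAUTHOR.findall(bodies),
        "agent_config_files": [
            f for f in AGENT_CONFIG_FILES if (Path(repo) / f).exists()
        ],
    }

if __name__ == "__main__":
    import sys
    print(scan_repo(sys.argv[1] if len(sys.argv) > 1 else "."))
```

Note that both signals are purely presence-based: a hit tells you a tool was configured or credited, not how much of the code it wrote or how carefully that code was reviewed.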
There is nothing inherently wrong with using tools that help you write code. Many people use these daily, and that is fine. This project does not exist to shame anyone. Please be respectful.
What has changed, observably, is code quality. There has been a visible and accelerating decline in the quality of packages and pull requests, as well as in the software and websites I use daily. Things just don't work anymore. There is so much code that looks roughly correct but does not quite do what it's supposed to.
AI agents can produce extremely large volumes of code quickly. The failure mode is not that this code is always wrong; it's that it's often close enough to look correct without being correct, and there is so much of it that it becomes impossible to review. When an agent is allowed to commit directly, that suggests the output has not been reviewed with the scrutiny such a commit demands.
Every dependency in a project is a trust relationship. When a package's development is primarily AI-driven, the effective cost of depending on it goes up: not only because the tools often produce suboptimal code, but because the surface area to vet grows while the confidence that someone has actually read the code goes down.
A repository with zero AI signals may still contain AI-written code: not every tool leaves traces, and not every developer commits their agent files. Conversely, a repository with many AI commits may be perfectly well maintained by someone who reviews every line. Some maintainers have even started adding agent files purely to deter others from using AI to contribute to their project.