Anand Naidu is a seasoned development expert with a deep mastery of both frontend and backend engineering. Having navigated the evolution of various coding languages and the intricacies of large-scale systems, he brings a pragmatic perspective to the intersection of open-source governance and modern automation. His insights bridge the gap between technical execution and the legal frameworks that keep the global software ecosystem running.
Some distributions have banned AI-generated code due to legal concerns, while others now require an “Assisted-by” tag alongside the traditional sign-off. How does this shift impact a maintainer’s daily workflow, and what specific steps do maintainers take to distinguish helpful automation from low-quality code?
The shift toward the “Assisted-by” tag is a necessary evolution because it forces transparency right at the front door of the repository. For a maintainer, this changes the workflow from a standard review to a heightened state of scrutiny where we have to look for “AI slop” or hallucinated logic that might look correct on the surface but fails under stress. In projects like NetBSD or Gentoo, the outright bans were born from a fear of “tainted” code, but the Linux kernel’s approach is more about managing the flow. We distinguish helpful automation by looking for the human touch in the logic; if a patch feels like a generic template without context, it’s a red flag. Ultimately, the “Assisted-by” tag serves as a signal for us to dig deeper into the performance metrics rather than just checking if the code compiles.
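In practice, that transparency lives in the commit message itself. Below is a minimal sketch of what such a trailer could look like next to the usual DCO sign-off; the tool name, author, and wording are hypothetical placeholders, not any project’s mandated format:

```shell
# Hypothetical example: recording AI assistance as a commit trailer
# alongside the standard DCO sign-off. Tool and author names are invented.
set -eu
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=Dev -c user.email=dev@example.com commit -q --allow-empty \
  -m "net: fix refcount leak in example_close()" \
  -m "Assisted-by: ExampleLLM (suggested the fix; logic human-verified)" \
  -m "Signed-off-by: Dev <dev@example.com>"
git log -1 --format=%B   # the trailers travel with the commit
```

Because the tag is part of the commit message, it travels with the commit through rebases and cherry-picks, which is what makes it usable as an audit trail rather than a one-time disclosure.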
When developers certify the provenance of code through a Developer Certificate of Origin, the use of language models creates unique legal hurdles. Since humans now bear sole responsibility for AI-generated bugs, what specific vetting processes should teams implement, and how does this change the liability landscape for corporate contributors?
The legal landscape has become considerably higher-stakes because the Developer Certificate of Origin requires a human to guarantee they have the right to submit that code. Since LLMs are trained on massive datasets that include restrictive licenses like the GPL, a human can never be 100% sure of the code’s provenance, which is why the Linux policy now places all liability on the submitter. To manage this, teams must implement rigorous internal audits and sandboxed testing to ensure that what the AI produces doesn’t inadvertently violate third-party IP. For corporate contributors, this means their legal departments are now just as involved in the PR process as the engineers themselves. If an AI-generated bug or security flaw makes it into the kernel, the individual who signed off is the one who answers to the community, creating a powerful deterrent against lazy submissions.
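A lightweight way to enforce that a human name is attached to every change is a CI gate that rejects any commit missing the DCO trailer. This is a minimal sketch under assumed conventions (the trailer spelling and the commit-range argument), not the kernel’s actual tooling:

```shell
# Sketch of a DCO provenance gate: fail if any commit in the given
# range lacks a Signed-off-by trailer. The range syntax is an assumption
# about how a CI job would invoke this.
check_dco() {
  range="$1"
  missing=0
  for sha in $(git rev-list "$range"); do
    if ! git show -s --format=%B "$sha" | grep -q '^Signed-off-by:'; then
      echo "missing Signed-off-by: $sha"
      missing=1
    fi
  done
  return "$missing"
}
```

Wired into CI as something like `check_dco origin/main..HEAD`, a gate of this kind makes the sign-off, and the liability that comes with it, non-optional.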
Undisclosed AI patches have previously led to project forks and significant community backlash. Beyond technical metrics, how can project leaders maintain contributor trust when these tools are used, and what are the long-term consequences for a project’s culture when transparency regarding AI assistance is ignored?
Trust in open-source is built on the unspoken agreement that we are all doing the heavy lifting together, so when a maintainer like Sasha Levin or a developer like Graf Zahl submits undisclosed AI code, it feels like a betrayal of that craft. The birth of the UZDoom fork after the GZDoom controversy shows that the community will literally walk away from twenty years of history if they feel the leadership is being dishonest. To maintain trust, leaders must be radically transparent about their use of tools, admitting that while a machine might have suggested a line, a human verified its intent. If transparency is ignored, the project’s culture becomes one of suspicion where every contribution is viewed through a lens of “is this real or a machine hallucination?” This cultural erosion is far more dangerous to a project’s longevity than any technical bug could ever be.
Large-scale, AI-generated patches can exceed 10,000 lines, often overwhelming reviewers or triggering the closure of bug bounty programs. What strategies can prevent maintainer burnout in this high-volume environment, and how do you determine if a patch is high-quality or simply a hallucination that introduces regressions?
The sheer volume is staggering; we’ve seen projects like Node.js and OCaml get hit with massive 10,000-line patches that no human can reasonably vet in a single sitting. To prevent burnout, many maintainers are adopting “defensive” workflows, such as the auto-closing of external PRs seen in tools like tldraw, or even shutting down bug bounties when they become flooded with AI hallucinations. Determining quality requires looking for subtle regressions that AI often misses, like the performance dip found in the kernel 6.15 patch despite it being “functional.” We have to prioritize small, incremental updates over these massive “slop” dumps because a large patch that isn’t properly labeled is almost impossible to review accurately. If a developer isn’t willing to stand by every single line of a 10,000-line file, then as a maintainer, I have no choice but to reject it to protect the integrity of the project.
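Those defensive workflows can be partly automated. The sketch below flags patch files that are both enormous and undisclosed; the 10,000-line threshold and the trailer name are assumptions drawn from the discussion above, not any real project’s policy:

```shell
# Hypothetical pre-review filter: reject a patch file that adds more
# than 10,000 lines without declaring AI assistance via a trailer.
# Threshold and trailer name are assumptions for illustration.
check_patch() {
  patch="$1"
  # Count added lines ('+' but not the '+++' file header).
  added=$(grep -c '^+[^+]' "$patch" || true)
  if [ "$added" -gt 10000 ] && ! grep -qi '^Assisted-by:' "$patch"; then
    echo "REJECT: $patch ($added added lines, no Assisted-by tag)"
    return 1
  fi
  echo "OK: $patch ($added added lines)"
}
```

Small or properly labeled patches pass straight through; a 10,001-line unlabeled dump gets bounced back to its author rather than onto a reviewer’s plate.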
What is your forecast for AI-assisted open-source development?
I believe we are heading toward a future where AI becomes as standard as a compiler, but the “human-in-the-loop” requirement will only become more rigid. We will likely see a standardized “Assisted-by” protocol adopted across all major foundations to ensure that every line of code has a clear path of accountability back to a person. While the initial panic led some to ban these tools, the pragmatism of leaders like Linus Torvalds suggests that the industry will eventually stop caring how the code was written and focus entirely on whether it works and who is responsible for it. My forecast is that the “AI slop” era will eventually subside as we develop better automated filtering tools, leaving us with a more efficient but highly scrutinized development cycle where the human developer acts more like a high-level editor than a typist. The ultimate goal remains the same: if the code is good, it stays, but if it breaks the system, there must be a human name attached to the fix.
