My Thoughts on AI-Generated PRs for My OSS Projects

What Contributors Can Do to Help

Hi,

AI-assisted coding is great, without a doubt. But when it comes to OSS contributions, the story is a bit more complicated.

The Problem

For my OSS projects, when I receive a PR, I have to look into it. At that point, I do not know how much AI assistance was involved. Let us take an extreme case: if a PR is 100% AI-generated, I end up reviewing it instead of the contributor, and I have to ask the contributor to fix it if necessary. That feels like a real waste of effort.

In that case, I should just ask the AI directly. Or, better yet, ask the AI from the start. Then I know the full context: which AI I am using, what I asked, and how I should steer it. Having a human contributor in the middle is pure overhead.

Of course, none of this applies if the original PR is perfect. But that is unlikely, because not everything is in the code base and the docs. There is knowledge in my head, however small, that is not written down anywhere. Exposing all of it is probably impossible, because I do not know what it is. It is like a preference I only discover when someone asks about it.

What Contributors Can Do

That was an extreme case, but even if the contributor reviews the AI-generated code themselves, things do not change much. Unless humans do a meaningful share of the work, the contribution does not make sense, and that is only possible for those who could do the work without AI in the first place.

So, what do I think would work? After reporting an issue, sending a PR with a failing test would help, as long as it does not include a solution. The failing test can be AI-assisted, provided that a human reviews it and the test is human-readable, representing the spec.

Final Thoughts

This idea might be controversial, but it is what I feel now. It may change as things change, which could happen pretty soon in this AI era.

Happy coding.
