Select Topic Area: Product Feedback
Feature Area: Issues
Body:
I found the following news item on the front page:
https://github.blog/changelog/2025-05-19-github-copilot-coding-agent-in-public-preview/
This tells me that GitHub will soon start allowing users to submit issues which they did not write themselves and which were machine-generated. I would consider these issues/PRs to be both a waste of my time and a violation of my projects’ code of conduct¹. Filtering out AI-generated issues/PRs will become an additional burden for me as a maintainer, wasting not only my time, but also the time of the issue submitters (who generated “AI” content I will not respond to) and the time of your servers (which had to render a submission I will close without reply).
As I am not the only person on this website with “AI”-hostile beliefs, the most straightforward way to avoid wasting a great deal of effort by literally everyone would be for GitHub to allow accounts/repositories to have a checkbox or similar setting blocking the use of built-in Copilot tools on designated repos, or on all repos on the account. If we are not granted these tools, and “AI” junk submissions become a problem, I may be forced to take drastic actions such as closing issues and PRs on my repos entirely and moving issue hosting to sites such as Codeberg, which do not have these maintainer-hostile tools built directly into the website.
Note: Because it appears that both issues and PRs written this way are posted by the “copilot” bot, a straightforward way to implement this would be to let users simply block the “copilot” bot. In my testing, it appears that you have special-cased “copilot” so that it is exempt from the block feature.
[screenshot: attempting to block the “copilot” bot]
So you could satisfy my feature request by just not doing that.
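In the meantime, the closest maintainer-side workaround I can see is to script it: poll the API and close anything the bot authored. A minimal sketch, assuming the `requests` library, a `GITHUB_TOKEN` with access to the repo, and guessed bot logins (the logins and repo names below are placeholders; check what actually appears as the author on your repo):

```python
# Sketch: auto-close open issues/PRs authored by a given bot account,
# via the GitHub REST API. The bot logins below are assumptions, not
# confirmed values; verify against the author shown on your repo.
import os
import requests

OWNER, REPO = "you", "your-repo"  # hypothetical repository
BLOCKED_LOGINS = {"copilot", "copilot-swe-agent[bot]"}  # assumed logins
API = "https://api.github.com"
HEADERS = {
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}

def close_bot_issues() -> None:
    # The issues endpoint returns pull requests as well, so this
    # catches both kinds of bot submission.
    r = requests.get(
        f"{API}/repos/{OWNER}/{REPO}/issues",
        headers=HEADERS,
        params={"state": "open", "per_page": 100},
    )
    r.raise_for_status()
    for issue in r.json():
        if issue["user"]["login"].lower() in BLOCKED_LOGINS:
            # Close without comment, marked as not planned.
            requests.patch(
                f"{API}/repos/{OWNER}/{REPO}/issues/{issue['number']}",
                headers=HEADERS,
                json={"state": "closed", "state_reason": "not_planned"},
            ).raise_for_status()

if __name__ == "__main__":
    close_bot_issues()
```

(Pagination and rate limits are left out; this is only the shape of the thing, not a substitute for a real block feature.)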
¹ I don’t at this time have codes of conduct on all my projects, but I will now be adding them for the purpose of barring “AI”-generated submissions.
I doubt they’ll get anywhere with weak action like that. “Stop forcing copilot on us or we’ll be very sad and we’ll strongly consider moving some of our hosting to another site.”
GitHub is a disaster for open source software. MS controls some insane amount of all the code created on earth, and even with self-hosted forges being more widespread and easier to access than ever, people act like their projects can’t live without Big Daddy MS’s social media for coders.
I saw someone the other day, on Lemmy and in full seriousness, proclaim that the world really needed distributed version control. To avoid censorship, like how the fediverse is decentralized.
This is what GitHub has done to a generation of programmers. For those missing the joke, git is already decentralized. You don’t need a central Hub of some kind for your code. You do for your issues, releases, and all that, but not for the code. And if we’d collectively moved to a well-designed, intentionally improved system like Fossil, all that would have been decentralized and distributed too.
But no, it’s easier and more efficient/profitable to keep using the one C library that’s compatible with Torvalds’s pile of old Perl scripts. My website can’t live without a built-in Travis CI bot and nonstop PRs from a dependency bot, but allowing every moron on earth to submit AI-generated content: at last we’ve found the step too far.
Microsoft is so deep in the AI nonsense that asking them to stop pushing Copilot is wasted time that could be better spent on something else, like moving to a different platform. I honestly don’t really see the point of remaining on GitHub. To me, at least, it doesn’t offer anything particularly important that I can’t get elsewhere.
Why not submit a million vibecoded PRs against various Microsoft repos? At some point they might get it.
The developers will hate you and the suits couldn’t care less.
If there’s no form to fill in the blanks with, then AI it is.
An LLM could be useful for explaining to people how they failed to follow simple guidelines, like including the software version or not filing multiple issues in one. However, issues written by AI can fuck right off.
I would absolutely be happy to have a feature where an LLM could read previous issues, the docs page, and the FAQ/wiki, and then you could query it regarding your issue to (a) see if it is a legitimate issue, (b) check that the issue you submit contains the info it needs, and (c) help you link in previous issues/PRs referring to relevant stuff. (A rough non-LLM sketch of checks (a) and (b) follows below.)
Never in hell do I want an LLM to be generating issues by itself.
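For what it’s worth, the non-LLM half of that is almost trivial. A toy sketch of the pre-submission checks, assuming made-up field names, thresholds, and a list of previous issue titles (nothing here is a real GitHub feature):

```python
# Toy sketch of pre-submission issue triage: check required fields and
# flag likely duplicates by naive token overlap. Purely illustrative;
# the field names, threshold, and corpus format are all invented.
from __future__ import annotations

REQUIRED_FIELDS = {"version", "steps to reproduce"}  # hypothetical template fields

def missing_fields(body: str) -> set[str]:
    """Return the required field labels absent from the issue body."""
    lowered = body.lower()
    return {f for f in REQUIRED_FIELDS if f not in lowered}

def similarity(a: str, b: str) -> float:
    """Jaccard similarity over lowercase word tokens."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def likely_duplicates(title: str, previous_titles: list[str],
                      threshold: float = 0.5) -> list[str]:
    """Previous issue titles whose overlap with `title` meets the threshold."""
    return [t for t in previous_titles if similarity(title, t) >= threshold]

if __name__ == "__main__":
    body = "Crash on startup. Steps to reproduce: run it. Version: 1.2.3"
    print("missing:", missing_fields(body))
    print("dupes:", likely_duplicates(
        "crash on startup when config missing",
        ["Crash on startup with empty config", "Feature: dark mode"],
    ))
```

An actual LLM could do the linking and explanation far better, but the point stands: the tool belongs on the submission side, helping a human write a better issue, not on the authoring side generating issues itself.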
My last job did AI-submitted PRs, and they were the most obnoxious thing. They’d make broad changes and break shit, and we were expected to merge them in because “AI”.
“Your AI-generated commit is failing my AI-generated unit test. Rejected.”
Honestly: that’s on the maintainers then, as there is no practical difference between AI-slop-generated issues and telemetry that auto-files issues without further checks.