- cross-posted to:
- technology@lemmy.zip
Researchers have found that large language models (LLMs) tend to parrot buggy code when tasked with completing flawed snippets.
That is to say, when shown a snippet of shoddy code and asked to fill in the blanks, AI models are just as likely to repeat the mistake as to fix it.
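As a rough sketch of what that setup looks like (a hypothetical illustration, not code from the study): the prompt contains a buggy helper, the model is asked to complete a similar function, and a completion that simply mirrors the prompt carries the same bug forward.

```python
# Hypothetical illustration of the "complete a flawed snippet" task described
# above. The prompt function has an off-by-one bug; a completion that parrots
# the surrounding style repeats the bug instead of fixing it.

def sum_scores(scores):
    total = 0
    for i in range(len(scores) - 1):   # bug in the prompt: last score never added
        total += scores[i]
    return total

def sum_weights(weights):
    # blank the model is asked to fill in; a "parroted" completion looks like:
    total = 0
    for i in range(len(weights) - 1):  # same off-by-one, copied from above
        total += weights[i]
    return total
    # a corrected completion would iterate over range(len(weights)) instead

print(sum_scores([1, 2, 3]))   # 3, not 6
print(sum_weights([1, 2, 3]))  # 3, not 6
```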
The easiest way I've found to get out of that loop is to get mad at the AI so it hangs up on you.