I don’t see how this could go wrong 👍
Is this really a worm if all it’s doing is sending a prompt to persuade an AI agent with authority to send emails to send emails with the prompt to other AI agents? Like this guy’s saying in the article, just assume that AI output produced as a result of user prompts is the same as unfiltered user input and should be treated that way:
“With a lot of these issues, this is something that proper secure application design and monitoring could address parts of,” says Adam Swanda, a threat researcher at AI enterprise security firm Robust Intelligence. “You typically don’t want to be trusting LLM output anywhere in your application.”
deleted by creator
Agreed. I consulted my temporal thermometer and noticed a worrying trend.
Not related to the article, but this post felt just a little scarier because at first the link thumbnail and comments weren’t loading at all.