ylai@lemmy.ml to Not The Onion@lemmy.world · English · 9 months ago
AI chatbots tend to choose violence and nuclear strikes in wargames (www.newscientist.com)
cross-posted to: futurology@futurology.today, technology@lemmy.world, artificial_intel@lemmy.ml
So do humans.

Plopp@lemmy.world · 9 months ago
Here’s a wild thought. Maybe that’s why the chat bot (I assume LLM) does it too, because it’s been trained on us! 🤯

Malfeasant@lemmy.world · 9 months ago
I learned it from watching you!

fidodo@lemmy.world · 9 months ago
Where are all these nuclear strikes?

Visstix@lemmy.world · 9 months ago
Sid Meier’s Civilization games

Death_Equity@lemmy.world · 9 months ago
Gandhi has the right idea.