BlushedPotatoPlayers@sopuli.xyz to Technology@lemmy.world · English · 11 months ago
AI chatbots tend to choose violence and nuclear strikes in wargames (www.newscientist.com)
cross-posted to: futurology@futurology.today, nottheonion@lemmy.world, artificial_intel@lemmy.ml
kibiz0r@midwest.social · English · 11 months ago
For AGI, sure, those kinds of game theory explanations are plausible. But an LLM (or any other kind of statistical model) isn’t extracting concepts, forming propositions, and estimating values. It never gets beyond the realm of tokens.
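To make the "realm of tokens" point concrete, here is a minimal sketch (assuming the Hugging Face `transformers` library, PyTorch, and the public `gpt2` checkpoint, none of which the comment names) showing that the model's entire interface is token IDs in, a probability distribution over token IDs out; any "decision" is just the most likely next token.

```python
# Minimal sketch: an LLM's output is a distribution over token IDs,
# not explicit concepts, propositions, or value estimates.
# Assumes: pip install torch transformers (and the public "gpt2" checkpoint).
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The safest response to an enemy buildup is"
ids = tokenizer(prompt, return_tensors="pt").input_ids  # text -> token IDs

with torch.no_grad():
    logits = model(ids).logits[0, -1]      # scores for every token in the vocabulary
probs = torch.softmax(logits, dim=-1)      # ~50k-way distribution over next token IDs

# The five most likely continuations, purely as tokens and probabilities.
top = torch.topk(probs, 5)
for p, tok_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(tok_id))!r}: {p.item():.3f}")
```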