Gaywallet (they/it)@beehaw.org to Technology@beehaw.org · 1 year ago
A jargon-free explanation of how AI large language models work (arstechnica.com) · 16 comments
Cross-posted to: auai@programming.dev, aicompanions@lemmy.world, technology@lemmy.world
PenguinTD@lemmy.ca · 1 year ago
Because in the end it's all statistics and math. Humans are full of mistakes (intentional or not), and living languages evolve over time (even their grammar), so whatever we are building "now" is a contemporary "good enough" representation.
kosmoz@beehaw.org · 1 year ago
Also, humans tend to be notoriously bad at both statistics and math :^)
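The "it's all statistics and math" point is essentially what the linked article describes: an LLM assigns a score to every token in its vocabulary and picks the next token from the resulting probability distribution. A minimal sketch in Python, with a made-up four-word vocabulary and invented logits (nothing here comes from the article or a real model):

```python
import math
import random

# Toy next-token scores (logits). A real LLM produces one score per token
# in a vocabulary of tens of thousands; these four values are invented
# purely for illustration.
logits = {"mat": 2.1, "dog": 0.3, "moon": -1.0, "sofa": 1.4}

def softmax(scores):
    # Subtract the max score for numerical stability, exponentiate,
    # then normalize so the values sum to 1.
    m = max(scores.values())
    exps = {tok: math.exp(s - m) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

probs = softmax(logits)
print(probs)  # roughly {'mat': 0.59, 'dog': 0.10, 'moon': 0.03, 'sofa': 0.29}

# Sample the next token in proportion to its probability.
next_token = random.choices(list(probs), weights=list(probs.values()))[0]
print("next token:", next_token)
```

In that sense the "statistics and math" framing is literal: generation is repeated sampling from distributions learned over imperfect, ever-changing human text, which is why the result is only ever a "good enough" snapshot of the language.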