pavnilschanda@lemmy.world to Technology@lemmy.world · English · 1 year ago
LLMs are surprisingly great at compressing images and audio, DeepMind researchers find (venturebeat.com)
cross-posted to: aicompanions@lemmy.world, technology@lemmy.ml, tech@kbin.social
NaibofTabr@infosec.pub · 1 year ago
Do you need the dataset to do the compression? Is the trained model not effective on its own?
Tibert@compuverse.uk · 1 year ago
Well, from the article a dataset is required, but not always the heavier one. Though that doesn't solve the speed issue, where the LLM takes far more time to do the compression: gzip can compress 1 GB of text in less than a minute on a CPU, while an LLM with 3.2 million parameters requires an hour to compress the same amount.
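The gzip side of that comparison is easy to reproduce. A minimal sketch in Python (the corpus here is a small repetitive stand-in, not the article's 1 GB benchmark, so the numbers are only illustrative):

```python
import gzip
import time

# ~11 MB of repetitive English-like text as a stand-in corpus
# (hypothetical sample; the article benchmarked 1 GB of real text).
data = b"the quick brown fox jumps over the lazy dog " * 250_000

start = time.perf_counter()
compressed = gzip.compress(data, compresslevel=6)
elapsed = time.perf_counter() - start

ratio = len(compressed) / len(data)
print(f"original:   {len(data)} bytes")
print(f"compressed: {len(compressed)} bytes")
print(f"ratio:      {ratio:.4f}")
print(f"time:       {elapsed:.2f} s")  # fractions of a second on a modern CPU

# Round-trip check: decompression must restore the input exactly
assert gzip.decompress(compressed) == data
```

Scaling that throughput up to 1 GB stays well under a minute, which is the baseline the LLM-based compressor is being compared against.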
rubikcuber@programming.dev · 1 year ago
I imagine that the compression is linked to the dataset, so if you update or retrain the model, you may lose access to the compressed data.
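That model-dependence is real for any predictive compressor: the decoder needs the exact same model the encoder used. A toy illustration (this is a simple prefix code standing in for the model's predictions, not the paper's arithmetic coding):

```python
# The "model" here is just a symbol -> codeword table, standing in for
# an LLM's learned probabilities. Both tables are hypothetical examples.
model_v1 = {"a": "0", "b": "10", "c": "11"}
model_v2 = {"a": "10", "b": "0", "c": "11"}  # a "retrained" model: same symbols, different codes

def encode(text: str, model: dict) -> str:
    """Concatenate the codeword for each symbol."""
    return "".join(model[ch] for ch in text)

def decode(bits: str, model: dict) -> str:
    """Greedily match prefix-free codewords back to symbols."""
    inverse = {code: ch for ch, code in model.items()}
    out, buf = [], ""
    for bit in bits:
        buf += bit
        if buf in inverse:
            out.append(inverse[buf])
            buf = ""
    return "".join(out)

bits = encode("abcab", model_v1)
print(decode(bits, model_v1))  # same model: lossless round trip
print(decode(bits, model_v2))  # retrained model: decodes to different text
```

With the original model the round trip is exact; with the retrained one the same bitstream decodes to garbage, which is why the compressed data is only as durable as your copy of the model.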