

Edit: also, I have a very strong suspicion that someone will figure out a way to make most matrix multiplications in an LLM sparse, doing mostly the same shit in a different basis. An answer to a specific query does not intrinsically use every piece of information the LLM has memorized.
Like MoE (Mixture of Experts) models? That technique is already in use by many models: DeepSeek, Llama 4, Kimi K2, Mixtral, Qwen3 30B and 235B, and many more. I read that leaked details of GPT-4 described an MoE architecture, and Grok is confirmed to use MoE; I suspect most large, hosted, proprietary models are using MoE in some manner.
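For anyone unfamiliar, here's a minimal sketch of how top-k MoE routing makes the feed-forward compute sparse per token. Everything here (layer sizes, expert count, the `TopKMoE` class, and the simple softmax router) is a generic illustration I made up, not the routing scheme of any of the models named above.

```python
# Illustrative top-k MoE layer: each token is routed to k of n_experts FFNs,
# so most expert weights are never touched for a given token.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    def __init__(self, d_model=512, d_ff=2048, n_experts=8, k=2):
        super().__init__()
        self.k = k
        self.router = nn.Linear(d_model, n_experts)  # gating network
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):  # x: (tokens, d_model)
        gate = F.softmax(self.router(x), dim=-1)               # (tokens, n_experts)
        weights, idx = gate.topk(self.k, dim=-1)               # keep only the top-k experts per token
        weights = weights / weights.sum(dim=-1, keepdim=True)  # renormalize the kept gate weights
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            token_ids, slot = (idx == e).nonzero(as_tuple=True)  # tokens that picked expert e
            if token_ids.numel() == 0:
                continue  # this expert's weights are skipped entirely for this batch
            out[token_ids] += weights[token_ids, slot].unsqueeze(-1) * expert(x[token_ids])
        return out

moe = TopKMoE()
y = moe(torch.randn(16, 512))  # each token activates 2 of 8 experts, ~25% of the FFN weights
```

The matrices themselves stay dense; the sparsity comes from the router choosing which blocks of weights participate for each token, which is one concrete version of "not every query uses everything the model knows."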
I still wouldn’t call a car an “investment” or anything, but 100% agreed. The whole “cars lose 50% of their value when you drive off the lot” thing might have been true before the Cash for Clunkers program, but it isn’t anymore. Or maybe it’s still true if you’re trying to trade the vehicle in.
If I wanted to buy the (fairly popular) car I’ve been driving for over 6 years, with the same mileage, it’d cost me over two-thirds of what it cost new. When I bought it, new cars were less expensive than used cars (i.e., ones less than two years old with under 25k miles) thanks to how much better the interest rates on the loans were. A couple of years later, I was getting offers for more than I paid for it. And none of that is a unique experience.