Coffeezilla asks: “Is the LAM a Scam? Down the rabbit hole we go.”
Pretty much everything AI is a scam. I mean, it has its uses, but it isn't exactly as claimed yet. Pretty much every non-phone AI gadget I've seen so far is definitely a scam.
If you think that “pretty much everything AI is a scam”, then you’re either setting your expectations way too high, or you’re only looking at startups trying to get the attention of investors.
There are plenty of open-source AI models out there today that can be used for a number of purposes: generating images (Stable Diffusion), transcribing audio (Whisper), audio generation, object detection, upscaling, downscaling, etc.
Part of the problem might be with how you define AI… it's a much broader term than what I think you're trying to convey.
I think it’s becoming fair to label a lot of commercial AI “scams” at this point, considering the huge gulf between the hype and the end results.
Open source projects are different due to their lack of commercialisation.
Sure, but don't let that feed into the sentiment that AI = scams. "AI" is far too broad a term, covering a ton of different applications (that already work), to be used that way.
And there are plenty of popular commercial AI products that work too, so saying "pretty much everything that's commercial AI is a scam" is also inaccurate.
We have:
Suno's music generation
NVIDIA's upscaling (DLSS)
Midjourney's image generation
OpenAI's ChatGPT
Etc.
So instead of trying to tear down everything and anything "AI", we should probably just point out that startups leaning on buzzwords (like "AI") should be treated with a healthy dose of skepticism until they can prove their product in a live environment.
I mean, LLaMA is open source and it's made by Meta (Facebook) for profit, so there are grey areas. Imo though, any service that claims to be anything more than a fancy wrapper for OpenAI, Anthropic, etc. API calls is possibly a scam, especially if they're trying to sell you hardware or the service costs more than like $10/month; LLM API calls are obscenely cheap. I use a local frontend as an AI assistant that works by making API calls through a service called OpenRouter (basically a unified service that makes API calls to all the major cloud LLM providers for you). I put like $5 in it 3 or 4 months ago and it still hasn't run out.
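For anyone curious what that setup looks like, here's a rough sketch of calling different providers through OpenRouter's OpenAI-compatible chat endpoint. The endpoint URL and request fields follow OpenRouter's documented format, but the model name and prompt below are just placeholders, and the request-building is my own wrapper, not anyone's official client:

```python
# Minimal sketch: one prompt in, one reply out, routed through OpenRouter.
import json
import os
import urllib.request

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_chat_request(model: str, prompt: str, api_key: str):
    """Build (url, headers, body) for a single chat completion call."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return OPENROUTER_URL, headers, json.dumps(body).encode("utf-8")

def ask(model: str, prompt: str) -> str:
    """Send one prompt and return the assistant's reply text."""
    url, headers, data = build_chat_request(
        model, prompt, os.environ["OPENROUTER_API_KEY"]
    )
    req = urllib.request.Request(url, data=data, headers=headers, method="POST")
    with urllib.request.urlopen(req) as resp:
        reply = json.load(resp)
    return reply["choices"][0]["message"]["content"]

# Needs a funded OPENROUTER_API_KEY in the environment, e.g.:
# print(ask("anthropic/claude-3-haiku", "Summarize the hype cycle in one line."))
```

Because it's OpenAI-compatible, swapping providers is just changing the model string; that's the whole appeal of the unified-gateway approach.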
This is because dedicated consumer AI hardware is a dumb idea. If it's powerful enough to run a model locally, you should be able to use it for other things (like, say, as a phone or PC), and if it's sending all its API requests to the cloud, then it has no business being anything but a smartphone app or website.
I can't agree with that. ASICs can specialize to do one thing at lightning speed and fail at even the most basic version of anything else. It's like claiming your GPU is so powerful it should be able to run your PC without a CPU.
That's fair; dedicated ASICs for AI acceleration are totally a valid consumer product. I meant more along the lines of standalone devices (like the Rabbit R1 and the Humane AI Pin), not components you can add to an existing device. I should have been clearer.
It’s very much fake it till you make it.
Just go all out and gamble that in 5 years the technology will be there to actually make it all function like you dreamt it would. And by then you're the de facto name in that space and can take advantage of it.
Go to bed Musk.
No, Google and Amazon were actually well-run businesses with sensible plans to meet real needs in the market, and they did it well.
Sure, but competition is way higher now in the upstart/emergent tech market.
I use ChatGPT occasionally. It's not a scam; it's useful for what I need it to do. I'm just not fooled by the notion that these LLMs know factual data or can do much more than generate text. If you accept that, LLMs are pretty darn useful.
Investments in AI are in the billions. With that kind of money flying around, it's going to attract a lot of snake-oil salesmen. It doesn't help that, for the general public and investors alike, any sufficiently advanced technology is indistinguishable from magic, and LLMs reached that point for many.
Just keep the hype cycle in mind. It all goes downhill after the peak of inflated expectations. With AI, it always does.