I mean, it is objectively bad for life. Throwing away millions to billions of gallons of water all so you can get some dubious coding advice.
Wait till power tripper db0 sees this. Crying that their AI photos in all their comms are cringe
Whether intentional or not, this is gaslighting. “Here’s the trendy reaction those wacky lemmings are currently upvoting!”
Getting to the core issue, of course we’re sick of AI, and have a negative opinion of it! It’s being forced into every product, whether it makes sense or not. It’s literally taking developer jobs, then doing worse. It’s burning fossil fuels and VC money and then hallucinating nonsense, but still it’s being jammed down our throats when the vast majority of us see no use-case or benefit from it. But feel free to roll your eyes at those acknowledging the truth…
It’s literally making its users nuts, or exacerbating their existing mental illness. Not hyperbole, according to psychologists. And this isn’t conjecture:
https://futurism.com/openai-investor-chatgpt-mental-health
https://futurism.com/chatgpt-psychosis-antichrist-aliens
What is the gaslighting here? A trend, or the act of pointing out a trend, do not seem like gaslighting to me. At most it seems like bandwagon propaganda or the satire thereof.
For the second paragraph, I agree we (Lemmings) are all pretty against it and we can be echo-chambery about it. You know, like Linux!
But I would also DISagree that we (population of earth) are all against it.
It seems like the most immature and toxic thing to me to invoke terms like “gaslighting,” ironically “toxic,” and all the other terms you associate with these folks, defensively and for any reason, whether it aligns with what the word actually means or not. Like a magic phrase that instantly makes the person you use it against evil, manipulative, and abusive, and the person who uses it a moral saint and vulnerable victim. While indirectly muting all those who have genuine uses for the terms. Or I’m just exaggerating madly, and it’s just the typical over- and misuse of words.
Anyhow, sadly necessary disclaimer: I agree with almost all of the current criticism raised against AI, and my disagreements are purely with mischaracterizations of the underlying technology.
EDIT: I just reminded myself of when a teacher went ballistic at our class for misusing the term “antisocial,” saying we were eroding and polluting all genuine and very serious uses of the term. Hm, yeah, it’s probably just that same old thing. Not wrong for going ballistic over it, though.
Are you honestly claiming a shitpost is gaslighting?
What a world we live in.
It’s just a joke bro.
<.< The person I replied to was joking? Because it definitely doesn’t come off that way.
I was talking about how shitposting on reddit became a cesspool because people started to post actual shit takes and then claim “it’s just a joke bro” when called out on it. I’m starting to see the same thing here in Lemmy; this post being an example. I’d rather this community didn’t get overrun by chuds.
Wouldn’t the opposite of artificial intelligence be natural stupidity?
Much love
The currently hot LLM technology is very interesting and I believe it has legitimate use cases. If we develop them into tools that help assist work. (For example, I’m very intrigued by the stuff that’s happening in the accessibility field.)
I mostly have a problem with the AI business. Ludicrous use cases (shoving AI into places where it has no business). Sheer arrogance about the sociopolitics in general. Environmental impact. LLMs aren’t good enough for “real” work, but snake oil salesmen keep saying they can do it, and uncritical people keep falling for it.
And of course, the social impact was just not what we were ready for. “Move fast and break things” may be a good mantra for developing tech, but not for releasing stuff that has vast social impact.
I believe the AI business and the tech hype cycle are ultimately harming the field. Usually, AI technologies just got gradually developed and integrated into software where they served a purpose. Now, the field is marred with controversy for decades to come.
If we develop them into tools that help assist work.
Spoilers: We will not
I believe the AI business and the tech hype cycle is ultimately harming the field.
I think this is just an American way of doing business. And it’s awful, but at the end of the day people will adopt technology if it makes them greater profit (or at least screws over the correct group of people).
But where the Americanized AI seems to suffer most is in their marketing fully eclipsing their R&D. People seem to have forgotten how DeepSeek spiked the football on OpenAI less than a year ago by making some marginal optimizations to their algorithm.
The field isn’t suffering from the hype cycle nearly so much as it suffers from malinvestment. Huge efforts to make the platform marketable. Huge efforts to shoehorn clumsy chat bots into every nook and cranny of the OS interface. Vanishingly little effort to optimize material consumption or effectively process data or to segregate AI content from the human data it needs to improve.
Spoilers: We will not
Generative inpainting/fill is enormously helpful in media production.
Implicit costs refer to the opportunity costs associated with a firm’s resources, representing the income that could have been earned if those resources were employed in their next best alternative use.
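In code form, here’s a toy sketch of how implicit costs turn an accounting profit into an economic loss. Every figure below is hypothetical, chosen only to make the arithmetic visible:

```python
# Toy illustration of implicit (opportunity) costs; every figure is hypothetical.
revenue = 500_000          # what the firm earned
explicit_costs = 300_000   # cash actually spent (wages, rent, compute)
implicit_costs = 250_000   # forgone income from the next best use of the same resources

accounting_profit = revenue - explicit_costs          # ignores opportunity cost
economic_profit = accounting_profit - implicit_costs  # subtracts it

print(accounting_profit, economic_profit)
```

An accounting profit of 200,000 becomes an economic loss of 50,000 once the forgone alternative is counted.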
I don’t see the relevance here. Inpainting saves artists from time-consuming and repetitive labor for (often) no additional cost. Many generative inpainting models will run locally, but they’re also just included with an Adobe sub.
I don’t see the relevance here
Anthropic is losing $3 billion or more, net of revenue, in 2025.
OpenAI is on track to lose more than $10 billion.
xAI, makers of “Grok, the racist LLM,” is losing over $1 billion a month.
I don’t know that generative infill justifies these losses.
The different uses of AI are not inextricable. This is the point of the post. We should be able to talk about the good and the bad.
We should be able to talk about the good and the bad.
Again, I point you to “implicit costs”. Something this trivial isn’t good if it’s this expensive.
The reason most web forum posters hate AI is that AI is ruining web forums by polluting them with inauthentic garbage. Don’t treat it like it’s some sort of irrational bandwagon.
For those who know
I need to watch that video. I saw the first post but haven’t caught up yet.
it’s just slacktivism, no different than all the other Facebook profile picture campaigns.
I have no idea about what’s being called for at all.
Search for clippy and rossman
Now see, I like the idea of AI.
What I don’t like are the implications, and the current reality of AI.
I see businesses embracing AI without fully understanding the limits. They’ve stopped hiring junior developers, and often fire large numbers of seniors, because they think AI, a group of cheap post-grad vibe programmers, and a handful of seasoned seniors will equal the workforce they got rid of, when AI, while very good, is not ready to sustain this. It is destroying career progression for the industry, and even if/when they realise it was a mistake, it might already have devastated the industry by then.
I see the large tech companies tearing through the web, illegally sucking up anything they can access to pull into their ever more costly models, with zero regard for the effects on the economy, the cost to the servers they are hitting, or the environmental toll of the huge power draw that creating these models requires.
It’s a nice idea, but private business cannot be trusted to do this right, we’re seeing how to do it wrong, live before our eyes.
And the whole AI industry is holding up the stock market, while AI has historically always run through the hype cycle and crashed into an AI winter. Stock markets do crash when the billions pumped into a sector suddenly turn out to be not worth as much. Almost none of these AI companies run a profit, and they don’t have any prospect of becoming profitable. It’s when everybody starts yelling that this time it’s different that things really become dangerous.
and don’t have any prospect of becoming profitable
There’s a real twist here in regards to OpenAI.
They have a weird corporate structure where OpenAI is a non-profit that owns a for-profit arm. But the deal they have with Softbank is that they have to transition to a for-profit by the end of the year or they lose out on the $40 billion Softbank invested. If they don’t manage that, Softbank can withhold something like $20B of the $40B, which would be catastrophic for OpenAI. Transitioning to a for-profit is not something that can realistically be done by the end of the year, even if everybody agreed on that transition, and key people don’t agree on it.
The whole bubble is going to pop soon, IMO.
Yep, exactly.
They knew the housing/real estate bubble would pop, as it currently is…
… So, they made one final gambit on AI as the bubble that would magically become superintelligent and solve literally all problems.
This was never going to work, and is not working, because the underlying tech of LLMs has no actual mechanism by which it would or could develop complex, critical, logical analysis / theorization / metacognition that isn’t just a schizophrenic mania episode.
LLMs are fancy, inefficient autocomplete algos.
That’s it.
They achieve a simulation of knowledge via consensus, not analytic review.
They can never be more intelligent than an average human with access to all the data they’ve … mostly illegally stolen.
The entire bet was ‘maybe superintelligence will somehow be an emergent property; just give it more data and compute power’.
And then they did that, and it didn’t work.
I agree with everything you said, but that doesn’t mean it can’t be very useful in many fields.
I mean, I also agree with that, lol.
There absolutely are valid use cases for this kind of ‘AI’.
But it is very, very far from the universal panacea that the capital class seems to think it is.
When all the hype dies down, we will see where it’s actually useful. But I can bet you it will have uses, it’s been very helpful in making certain aspects of my life a lot easier. And I know many who say the same.
That too is the classical hype cycle. After the trough of disillusionment, and that’s going to be a deep one from the look of things, people figure out where it can be used in a profitable way in its own niches.
… Unless its mass proliferation of shitty broken code, mis/disinformation, hyper-parasocial relationships, and waste of energy and water is actually such a net negative that it fundamentally undermines infrastructure and society, raising the necessary profit margin too high for such legit use cases to be workable in a now-broken economic system.
The world revolves around the profit margin, so the current trend may even continue indefinitely… Sad.
Time will tell how much was just hype, and how much actually had merit. I think it will go the way of the .com bubble. LOTS of uses for the Internet of Things, but it’s still overhyped.
The .com bubble had nothing to do with the Internet of Things.
Fair enough.
The dot-com bubble (late 1990s–2000) was when investors massively overvalued internet-related companies just because they had “.com” in their name, even if they had no profits or solid business plans. It burst in 2000, wiping out trillions in value.
The “Internet hype” bubble popped. But the Internet still has many valid uses.
It’s a nice idea, but private business cannot be trusted to do this right, we’re seeing how to do it wrong, live before our eyes.
You’re right. It’s the business model driving technological advancement in the 21st century that’s flawed.
i see a silver lining.
I love IT but hate IT jobs; here’s hoping the techbros just fucking destroy themselves…
tbf, now I think AI is just a tool… in 3 years it will be a really impactful problem
I 100% agree with you
I have to disagree that it’s even a nice idea. The “idea” behind AI appears to be wanting a machine that thinks or works for you with (at least) the intelligence of a human being and no will or desires of its own. At its root, this is the same drive behind chattel slavery, which leads to a pretty inescapable conundrum: either AI is illusory marketing BS or it’s the rebirth of one of the worst atrocities history has ever seen. Personally, hard pass on either one.
You nailed it, IMO. However, I would like a real artificial sentience of some sort, just to add to the beautiful variety of the universe. It does seem that many of my fellow humans just want chattel slaves, though. Which is saddening.
The problem isn’t AI. The problem is Capitalism.
The problem is always Capitalism.
AI, Climate Change, rising fascism, all our problems are because of capitalism.
Wrong.
The problem is humans; the same things that happen under capitalism can (and would) happen under any other system, because humans are the ones who make these things happen or allow them to happen.

Problems would exist in any system, but not the same problems. Each system has its set of problems and challenges. Just look at history: problems change. Of course you can find analogies between problems, but their nature changes with our systems. Hunger, child mortality, pollution, having no free time, war, censorship, mass surveillance… these are not constant through history. They happen more or less depending on the social systems in place, which vary constantly.
While you aren’t wrong about human nature, I’d say you’re wrong about systems. How would the same thing happen under an anarchist system? Or under an actual communist (not Marxist-Leninist) system? Those account for human nature and aim to turn it against itself.
It will happen regardless, because we are not machines; we don’t follow theory, laws, instructions, or whatever a system tells us to do perfectly, without little changes here and there.
I think you are underestimating how adaptable humans are. We absolutely conform to the systems that govern us, and they are NOT equally likely to produce bad outcomes.
Every system eventually ends with someone corrupted by power and greed wanting more. Putin and his oligarchs, Trump and his oligarchs… Xi isn’t great, but at least I haven’t heard news about the Uyghur situation for a couple of years now. Hope things are better there nowadays and people aren’t going missing anymore just for speaking out against their government.
Things don’t end with corrupt power; those are just things that happen. Bad shit always happens; it’s the Why, How Often, and How We Fix It that are more indicative of success. Every machine breaks down eventually.
I mean you’d have to be pretty smart to make the perfect system. Things failing isn’t proof that things can’t be better.
I see, so you don’t understand. Or simply refuse to engage with what was asked.
Can, would… and did. The list of environmental disasters in the Soviet Union is long and intense.
Rather, our problem is that we live in a world where the strongest survive, and the strongest does not mean the smartest… So alas, we will always be in complete shit until we disappear.
The fittest survive. The problem is creating systems where the best fit are people who lack empathy and a moral code.
A better solution would be selecting world leaders from the population at random.
Lots of AI is technologically interesting and has tons of potential, but this kind of chatbot and image/video generation stuff we got now is just dumb.
I firmly believe we won’t get most of the interesting, “good” AI until after this current AI bubble bursts and goes down in flames. Once AI hardware is cheap, interesting people will use it to make cool things. But right now, the big players in the space are drowning out anyone who might do real AI work with potential, by throwing more and more hardware and money at LLMs and generative AI models, because they don’t understand the technology and see it as a way to get rich and powerful quickly.
AI is good and cheap now because businesses are funding it at a loss, so not sure what you mean here.
The problem is that it’s cheap, so anyone can make whatever they want, and most people make low-quality slop, hence why it’s not “good” in your eyes.
Making a cheap or efficient AI doesn’t help the end user in any way.
It appears good and cheap, but it’s actually burning money, energy, and water like crazy. I think somebody mentioned that generating a 10-second video takes the energy equivalent of riding a bike for 100 km.
It’s not sustainable. I think the thing the person above you is referring to is if we ever manage to make LLMs and such which can be run locally on a phone or laptop with good results. That would make people experiment and try out things themselves, instead of being dependent on paying monthly for some services that can change anytime.
I mean, I have a 15-amp fuse in my apartment, and a 10-second video takes like 10 minutes to make. I don’t know how much energy a 4090 draws, but anyone that has an issue with me using mine to generate a 10-second video better not play PC games.
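For scale, here’s a back-of-envelope sketch of that comparison. Every number is an assumption (typical 4090 board power, a rough cyclist energy burn), not a measurement:

```python
# Rough sanity check of the "10-second video ≈ 100 km bike ride" claim above.
# All figures are assumptions, not measurements.

GPU_WATTS = 450                 # assumed RTX 4090 power draw under full load
RENDER_MINUTES = 10             # time quoted above for a 10-second clip
gpu_kwh = GPU_WATTS * (RENDER_MINUTES / 60) / 1000

KCAL_PER_KM = 25                # assumed cyclist energy burn per km
WH_PER_KCAL = 1.163             # unit conversion: 1 kcal = 1.163 Wh
bike_kwh = 100 * KCAL_PER_KM * WH_PER_KCAL / 1000

print(f"local 4090 render: {gpu_kwh:.3f} kWh")   # ~0.075 kWh
print(f"100 km bike ride:  {bike_kwh:.2f} kWh")  # ~2.9 kWh
```

Under those assumptions a local render uses far less energy than the bike ride; the quoted claim presumably refers to datacenter-scale models, where per-request figures are much harder to pin down.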
You and OP are misunderstanding what is meant by good and cheap.
It’s not cheap from a resource perspective, like you say. However, that is irrelevant for the end user. It’s “cheap” already because it is either free or costs considerably less for the user than the cost of the resources used. OpenAI or Meta or Twitter are paying the cost. You do not need to pay for a monthly subscription to use AI.
So the quality of the content created is not limited by cost.
If the AI bubble popped, that wouldn’t improve AI quality.
I’m using “good” in almost a moral sense. The quality of output from LLMs and generative AI is already about as good as it can get from a technical standpoint, continuing to throw money and data at it will only result in minimal improvement.
What I mean by “good AI” is the potential of new types of AI models to be trained for things like diagnosing cancer, and other predictive tasks we haven’t thought of yet that actually have the potential to help humanity (and not just put artists and authors out of their jobs).
The work of training new, useful AI models is going to be done by scientists and researchers, probably on limited budgets because there won’t be a clear profit motive, and they won’t be able to afford thousands of $20,000 GPUs like those being thrown at LLMs and generative AI today. But as the current AI race crashes and burns, the used hardware of today will become more affordable and hopefully actually get used for useful AI projects.
Ok. Thanks for clarifying.
Although I am pretty sure AI is already used in the medical field for research and diagnosis. This “AI everywhere” trend you are seeing is the result of everyone trying to stick AI in every which way.
The thing about the AI boom is that lots of money is being invested into all fields. A bubble pop would result in investment money drying up everywhere, not make access to AI more affordable as you are suggesting.
I don’t know if the current AI phase is a bubble, but i agree with you that if it were a bubble and burst, it wouldn’t somehow stop or end AI, but cause a new wave of innovation instead.
I’ve seen many AI opponents imply otherwise. When the dotcom bubble burst, the internet didn’t exactly die.
I find it very funny how just a mere mention of the two letters A and I will cause some people to seethe and fume, and go on rants about how much they hate AI, like a conservative upon seeing the word “pronouns.”
One of these topics is about class consciousness; the other is about human rights.
An AI is not a person.
Someone with they/them pronouns is a person.
They have no business being compared to one another!
It’s a comparison of people, not of subjects. In becoming blind with rage upon seeing the letters A and I you act the same as a conservative person seeing the word “pronouns.”
Well if baseless bitching can keep homophobia alive and well, then it’s clear the strategy works.
It is always better to see and to write a sound argument, but barring that, perpetuating negativity is pretty effective, esp. on the internet.
I see what you’re getting at, though!
Commit to this. Let AI write all your responses from now on.
I never said I’m pro-AI?
You didn’t commit, sir.
Sorry, it didn’t occur to me that you were 13.
Failed again on multiple fronts.
Try using a double hyphen.
Distributed platform owned by no one founded by people who support individual control of data and content access
Majority of users are proponents of owning what one makes and supporting those who create art and entertainment
AI industry shits on above comments by harvesting private data and creative work without consent or compensation, along with being a money, energy, and attention tar pit
Buddy, do you know what you’re here for?
EDIT: removed bot accusation, forgot to check user history
Or are you yet another bot lost in the shuffle?
Yes, good job, anybody with opinions you don’t like is a bot.
It’s not like this was even a pro-AI post; it was just pointing out that even the most facile “ai bad, applause please” stuff will get massively upvoted
Yes, good job, anybody with opinions you don’t like is a bot.
I fucking knew it!
Yeah, I guess that was a bit too far, posted before I checked the user history or really gave it time to sit in my head.
Still, this kind of meme is usually used to imply that the comment is just a trend rather than a legitimate statement.
Maybe there’s some truth to it then. Have you considered that possibility?
deleted by creator
HaVe YoU ConSiDeReD thE PoSSiBiLiTY that I’m not pro-AI and I understand the downsides, and can still point out that people flock like lemmings (*badum tss*) to any “AI bad” post regardless of whether it’s actually good or not?
Ok, so your point is: Look! People massively agree with an idea that makes sense and it’s true.
Color me surprised…
Why would a post need to be good? It just needs a good point. This post is good enough, even if I don’t agree that we have too many facile “AI bad” posts.
Depends on the community, but for most of them pointing out ways that ai is bad is probably relevant, welcome, and typical.
Why would you lend any credence to the weakest appeal to the masses presented on the site?
It’s true. We can have a nuanced view. I’m just so fucking sick of the paid-off media hyping this shit, and normies thinking it’s the best thing ever when they know NOTHING about it. And the absolute blind trust and corpo worship make me physically ill.
I’m a lot more sick of the word ‘slop’ than I am of AI. Please, when you criticize AI, form an original thought next time.
Yes! Will people stop with their sloppy criticisms?
Not all AI is bad. But there’s enough widespread AI that’s helping cut jobs, spreading misinformation (or in some cases, actual propaganda), creating deepfakes, etc, that in many people’s eyes, it paints a bad picture of AI overall. I also don’t trust AI because it’s almost exclusively owned by far right billionaires.
Machines replacing people is not a bad thing if they can actually perform the same or better; the solution to unemployment would be Universal Basic Income.
Unfortunately, UBI is just one solution to unemployment. Another solution (and the one apparently preferred by the billionaire rulers of this planet) is letting the unemployed rot and die.
Yeah, that would be the solution, but it’s never happening.
For labor people don’t like doing, sure. I can’t imagine replacing a friend of mine with a conversation machine that performs the same or better, though.