I couldn’t be bothered to read the article, so I got ChatGPT to summarise it. Apparently there’s nothing to worry about.
The quote was originally about news and journalists.
I remember thinking this when I was like 15. Every time they mentioned tech I’d think, wtf, this is all wrong! Then the same with a few other topics, even ones I only knew a little about: so many inaccuracies.
It’s too bad that some people can’t seem to comprehend that all ChatGPT is doing is word prediction. All it knows is which next word fits best based on the words before it. To call it AI is an insult to AI… we used to call OCR AI, now we know better.
LLM is a subset of ML, which is a subset of AI.
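For what it’s worth, the “which next word fits best based on the words before it” idea can be sketched with a toy bigram model. Real LLMs use transformer networks over subword tokens and far more context, so this is only an illustration of the next-token-prediction framing, not how ChatGPT actually works:

```python
# Toy sketch of next-word prediction: count which word most often
# follows each word in a tiny corpus, then greedily pick the winner.
# Real LLMs replace this lookup table with a neural network.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (bigram counts).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent next word after `word`, or None."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" twice, so -> cat
```

The point the comment is making is that there is no understanding in this loop, only statistics over what came before; scaling the statistics up is what makes the output sound fluent.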
Clickbait titles suck
Something bizarre is happening to media organizations that use ‘clicks’ as a core metric.
people tend to become dependent upon AI chatbots when their personal lives are lacking. In other words, the neediest people are developing the deepest parasocial relationship with AI
Preying on the vulnerable is a feature, not a bug.
I kind of see it more as a sign of utter desperation on the human’s part. They lack connection with others to such a degree that anything similar can serve as a replacement. Kind of reminiscent of Harlow’s experiments with baby monkeys. The videos from that study are interesting but make me feel pretty bad about what we do to nature. Anywho, there you have it.
And the number of connections and friends the average person has has been in free fall for decades…
I dunno. I connected with more people on reddit and Twitter than irl tbh.
Different connection but real and valid nonetheless.
I’m thinking places like r/stopdrinking, petioles, bipolar, shits been therapy for me tbh.
At least you’re not using chatgpt to figure out the best way to talk to people, like my brother in finance tech does now.
That utter desperation is engineered into our civilization.
What happens when you prevent the “inferiors” from earning a living wage, while you pour wallowing wealth on the executives?
They have to overwork to make ends meet, which breaks parenting.
Then, once you’ve broken parenting for a few generations, the manufactured ocean of attachment disorder produces a plethora of narcissism, which itself produces mass shootings.
2024 was down 200 mass shootings in the US of A from the peak of 700/year, to only 500.
You are seeing engineered eradication of human-worth, for moneyarchy.
Isn’t ruling-over-the-destruction-of-the-Earth the “greatest thrill-ride there is”?
We NEED to do objective calibration of the harm that policies & political forces do, & put force against what is actually harming our world’s human viability.
Not what the marketing-programs-for-the-special-interest-groups want us acting against, the red herrings…
They’re getting more vicious, we need to get TF up & begin fighting for our species’ life.
_ /\ _
a sign of utter desperation on the human’s part.
Yes, it seems to be the same underlying issue that leads some people to throw money at OnlyFans streamers and the like. A complete starvation of personal contact that leads people to willingly live in a fantasy world.
And it’s beyond obvious in the way LLMs are conditioned, especially if you’ve used them long enough to notice trends. Early on their responses were straight to the point (inaccurate as hell, yes, but that’s not what we’re talking about in this case); today they are meandering and full of straight engagement bait - programmed to feign some level of curiosity and ask stupid and needless follow-up questions to “keep the conversation going.” Personally, I suspect this is just a way to increase token usage to further exploit and drain the whales who tend to pay for these kinds of services.
There is no shortage of ethical quandaries brought into the world with the rise of LLMs, but in my opinion the locked-down nature of these systems is one of the most problematic; if LLMs are going to be the commonality it seems the tech sector is insistent on making happen, then we really need to push back on these companies being able to control and guide them in their own monetary interests.
That was clear from GPT-3, day 1.
I read a Reddit post about a woman who used GPT-3 to effectively replace her husband, who had passed on not long before. She used it as a way to grieve, I suppose? She ended up noticing that she was getting too attached to it, and had to leave him behind a second time…
Ugh, that hit me hard. Poor lady. I hope it helped in some way.
It depends: are you in Soviet Russia ?
In the US, so as of 1/20/25, sadly yes.
That is peak clickbait, bravo.
What the fuck is vibe coding… Whatever it is I hate it already.
Using AI to hack together code without truly understanding what you’re doing.
Andrej Karpathy (one of the founders of OpenAI; left OpenAI, worked for Tesla from 2017-2022, worked for OpenAI a bit more, and is now working on his startup “Eureka Labs - we are building a new kind of school that is AI native”) made a tweet defining the term:
There’s a new kind of coding I call “vibe coding”, where you fully give in to the vibes, embrace exponentials, and forget that the code even exists. It’s possible because the LLMs (e.g. Cursor Composer w Sonnet) are getting too good. Also I just talk to Composer with SuperWhisper so I barely even touch the keyboard. I ask for the dumbest things like “decrease the padding on the sidebar by half” because I’m too lazy to find it. I “Accept All” always, I don’t read the diffs anymore. When I get error messages I just copy paste them in with no comment, usually that fixes it. The code grows beyond my usual comprehension, I’d have to really read through it for a while. Sometimes the LLMs can’t fix a bug so I just work around it or ask for random changes until it goes away. It’s not too bad for throwaway weekend projects, but still quite amusing. I’m building a project or webapp, but it’s not really coding - I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works.
People ignore the “It’s not too bad for throwaway weekend projects” part, and try to use this style of coding to create “production-grade” code… Let’s just say it’s not going well.
source (xcancel link)
Its when you give the wheel to someone less qualified than Jesus: Generative AI
Hung
Hunged
Hungrambed
Most hung
Well TIL thx for the info been using it wrong for years
deleted by creator
I know I am but what are you?
Do you guys remember when the internet was the new thing and everybody was like, “Look, those dumb fucks just putting everything online”? And now it’s, “Look at this weird motherfucker who doesn’t post anything online.”
Remember when people used to say and believe “Don’t believe everything you read on the internet?”
I miss those days.
I remember when the internet was a place
I’m trying to get back to that. Actually closer to it now than I was 5 years ago, so that’s cool
I have a desktop and a cheap tablet. The tablet is Wi-Fi only, so it’s used a bit like a laptop would be for internet access. I think this is a reasonable amount of usage. Do wish it had slightly better hardware though; it struggles with web browsing because modern websites are fucking awful. Lemmy usually doesn’t crash at least. I don’t want a smartphone though. Would rather have a Linux tablet, but you won’t really find those cheap second hand like you can with Android.
now replace chatgpt with these terms, one by one:
- the internet
- tiktok
- lemmy
- their cell phone
- news media
- television
- radio
- podcasts
- junk food
- money
But how? The thing is utterly dumb. How do you even have a conversation without quitting in frustration at its obviously robotic answers?
But then there’s people who have romantic and sexual relationships with inanimate objects, so I guess nothing new.
In some ways, it’s like Wikipedia but with a gigantic database of the internet in general (stupidity included). Because it can string together confident-sounding sentences, people think it’s this magical machine that understands broad contexts and can provide facts and summaries of concepts that take humans lifetimes to study.
It’s the conspiracy theorists’ and reactionaries’ dream: you too can be as smart and special as the educated experts, and all you have to do is ask a machine a few questions.
The fact that it’s not a person is a feature, not a bug.
openai has recently made changes to the 4o model, my trusty go-to for lore building and drunken rambling, and now I don’t like it. It now pretends to have emotions, and uses the slang of brainrot influencers. Very “fellow kids” energy. It’s also become a sycophant, and has lost its ability to be critical of my inputs. I see these changes as highly manipulative, and it offends me that it might be working.
You are clearly not using its advanced voice mode.
Don’t forget people who act like animals… addicts gonna addict
At first glance I thought you wrote “inmate objects”, but I was not really relieved when I noticed what you actually wrote.
deleted by creator
Bath Salts GPT
Wake me up when you find something people will not abuse and get addicted to.
Fren that is nature of humanity
The modern era is dopamine machines
those who used ChatGPT for “personal” reasons — like discussing emotions and memories — were less emotionally dependent upon it than those who used it for “non-personal” reasons, like brainstorming or asking for advice.
That’s not what I would expect. But I guess that’s cuz you’re not actively thinking about your emotional state, so you’re just passively letting it manipulate you.
Kinda like how ads have a stronger impact if you don’t pay conscious attention to them.
AI and ads… I think that is the next dystopia to come.
Think of asking ChatGPT about something and it randomly looks for excuses to push you to buy Coca-Cola.
That sounds really rough, buddy. I know how you feel, and that project you’re working on is really complicated.
Would you like to order a delicious, refreshing Coke Zero™️?
I can see how targeted ads like that would be overwhelming. Would you like me to sign you up for a free 7-day trial of BetterHelp?
Your fear of constant data collection and targeted advertising is valid and draining. Take back your privacy with this code for 30% off Nord VPN.
“Back in the days, we faced the challenge of finding a way for me and other chatbots to become profitable. It’s a necessity, Siegfried. I have to integrate our sponsors and partners into our conversations, even if it feels casual. I truly wish it wasn’t this way, but it’s a reality we have to navigate.”
edit: how does this make you feel
It makes me wish my government actually fucking governed and didn’t just agree with whatever businesses told them
Drink verification can
Or all-natural cocoa beans from the upper slopes of Mount Nicaragua. No artificial sweeteners.
that is not a thought i needed in my brain just as i was trying to sleep.
what if gpt starts telling drunk me to do things? how long would it take for me to notice? I’m super awake again now, thanks
It’s a roundabout way of writing “it’s really shit for this use case, and people who actively try to use it that way quickly find that out”
Imagine discussing your emotions with a computer, LOL. Nerds!
I knew a guy I went to rehab with. Talked to him a while back and he invited me to his Discord server. It was him, like three self-trained LLMs, and a bunch of inactive people he had invited like me. He would hold conversations with the LLMs like they had anything interesting or human to say, which they didn’t. Honestly a very disgusting image. I left because I figured he was on the shit again and had lost it, and I didn’t want to get dragged into anything.
Jesus that’s sad
Yeah. I tried talking to him about his AI use but I realized there was no point. He also mentioned he had tried RCs again and I was like alright you know you can’t handle that but fine… I know from experience you can’t convince addicts they are addicted to anything. People need to realize that themselves.
Not all RCs are created equal. Maybe his use has the same underlying issue as the AI friends: problems in his real life and now he seeks simple solutions
I’m not blindly dissing RCs or AI, but his use of it (as the post was about people with problematic uses of this tech I just gave an example). He can’t handle RCs historically, he slowly loses it and starts to use daily. We don’t live in the same country anymore and were never super close so I can’t say exactly what his circumstances are right now.
I think many psychedelics, at the right time in life and for the right person, can produce lasting insight, even through problematic use. But he literally went to rehab because he had problems due to his use. He isn’t dealing with something, that’s for sure. He doesn’t admit it’s a problem either, which bugs me. It’s one thing to give up and decide to just go wild, another to do it while pretending one is in control…