Remember: AI chatbots are designed to maximize engagement, not speak the truth. Telling a methhead to do more meth is called customer capture.
Sounds a lot like a drug dealer’s business model. How ironic
The LLM models themselves aren't; they don't really have a focus or discriminate.
The AI chatbots that are built using those models absolutely are, and it's no secret.
What confuses me is that the article points to Llama 3, which is a Meta-owned model, but not to a chatbot.
This could be an official Facebook AI (do they have one?), but it could also be: "Bro, I used this self-hosted model to build a therapist, wanna try it for your meth problem?"
Heck, I could even see a dealer pretending to help customers who are trying to kick it.
For all we know, they could have self-hosted “Llama3.1_NightmareExtreme_RPG-StoryHorror8B_Q4_K_M” and instructed it to take on the role of a therapist.
Not engagement; that's what social media does. They just maximize what they're trained for, which is increasingly math proofs and user preference. People like flattery.
But if the meth head does meth instead of engaging with the AI, that would do the opposite.
I don't think AI chatbots care about engagement. The more you use them, the more expensive it is for them. They just want you on the hook for the subscription service and hope you use it as little as possible, while still enough to stay subscribed, for maximum profit.
I feel like humanity is stupid. Over and over again we develop new technologies, make breakthroughs, and instead of calmly evaluating them, making sure they’re safe, we just jump blindly on the bandwagon and adopt it for everything, everywhere. Just like with asbestos, plastics and now LLMs.
Fucking idiots.
“adopt it for everything, everywhere.”
The sole reason for this is people realizing they can make some quick bucks out of these hype balloons.
They usually know it's bad but want to make money before the method is patched, like cigs causing cancer and health issues, but that kid money was so good.
Claude has simply been an amazing help in a way that humans have not. Because humans are kind of dicks.
If it gets something wrong, I simply correct it and ask better.
If that works for you, that's fine. I just end up switching to an "asking for answers" way of thinking vs. trying to figure it out for myself, and then when it inevitably fails I get caught in a loop trying to get an answer out of it, when I could've just learned on my own from the start and gotten way further, because my brain would be trying to figure it out and puzzle it together instead of just waiting for the AI to do it for me.
I used to hype up AI until fairly recently; it hasn't been long since I realized the downsides. I'll use it only for stuff I don't care about or that could be googled and found in seconds. If it's something I'd be better off learning, or doing a tutorial for once, I just do that instead of skipping to the result. It can be a time saver; it can also actively hold you back. It's solid for stuff you already know, tedious stuff, but skipping to intermediate results without the beginner knowledge/experience is just screwing your progress over.
Welcome! To a boring dystopia.
Thanks. Can you show me the exit now? I have an appointment.
Sure, it's like the spoon from The Matrix.
It's because technological change has reached a staggering pace, but social change, cultural change, and political change can't keep up. They're not designed to handle this pace.
There's reasoning behind this.
It’s just evil and apocalyptic. Still kinda dumb, but less than it appears on the surface.
Thalidomide comes to mind also.
Greed is like a disease.
All these chat bots are a massive amalgamation of the internet, which as we all know is full of absolute dog shit information given as fact as well as humorously incorrect information given in jest.
To use one to give advice on something as important as drug abuse recovery is simply insanity.
When I think of someone addicted to meth, it’s someone that’s lost it all, or is in the process of losing it all. They have run out of favors and couches to sleep on for a night, they are unemployed, and they certainly have no money or health insurance to seek recovery. And of course I know there are “functioning” addicts just like there’s functioning alcoholics. Maybe my ignorance is its own level of privilege, but that’s what I imagine…
All these chat bots are a massive amalgamation of the internet
A bit, but also a lot no. Role-playing models have specifically been trained (or re-trained, more like) with a focus on online text roleplay. Medically focused models have been trained on medical data, DeepSeek has been trained on Mao's Little Red Book, companion models have been trained on social interactions, and so on.
This is what makes models distinct and different, and also how they’re “brainwashed” by their creators, regurgitating from what they’ve been fed with.
You avoided meth so well! To reward yourself, you could try some meth
Can I have a little meth as well?
“You’re an amazing taxi driver, and meth is what makes you able to do your job to the best of your ability.”
“Recovering from a crack addiction, you shouldn’t do crack ever again! But to help fight the urge, why not have a little meth instead?”
Addicted to coffee? Try just a pinch of meth instead, you’ll feel better than ever in no time.
One of the top AI apps in the local language where I live has ‘Doctor’ and ‘Therapist’ as some of its main “features” and gets gushing coverage in the press. It infuriates me every time I see mention of it anywhere.
Incidentally, telling someone to have a little meth is the least of it. There’s a much bigger issue that’s been documented where ChatGPT’s tendency to “Yes, and…” the user leads people with paranoid delusions and similar issues down some very dark paths.
Yesterday I was at a gas station, and when I walked by the sandwich aisle, I saw a sandwich that said: recipe made by AI. On dating apps I see a lot of girls state that they ask AI for advice. To me AI is more of a buzzword than anything else, but this shit is bananas. It's so easy to make AI agree with everything you say.
This is not AI.
This is the ELIZA effect.
We don't have AI.
Of course it is AI, you know, artificial intelligence.
Nobody said it has to be human level, or that people don’t do anthropomorphism.
This is not artificial intelligence. There is no intelligence here.
Today's "AI" has intelligence in it; what are you all talking about?
No, it doesn't. There is no interiority, no context, no meaning, no awareness, no continuity; there's such a long list of things intelligence does that this simply can't, not because it's too small, but because the fundamental method cannot, at any scale, do these things.
There are a lot of definitions of intelligence, and these things don't fit any of them.
Dude, you mix up so many things that have nothing to do with intelligence. Consciousness? No. Continuity? No. Awareness (what does that even mean for you in this context)?
Intelligence isn't about being human; it's about making rational decisions based on facts/knowledge, and even an old VCR has a tiny bit of it programmed into it.
This is as much of an artificial intelligence as a mannequin is an artificial life form.
I understand what you're saying. It definitely is the ELIZA effect.
But you are taking semantics quite far to state it's not AI because it has no "intelligence".
I'll have you know that what we define as intelligence is entirely arbitrary, and we actually keep moving the goalposts as to what counts. The invention of the word "AI" happened along the way.
There is no reasonable definition of intelligence that this technology has.
Sorry to say, but you're about as reliable as LLM chatbots when it comes to this.
You are not researching facts, just making things up that sound like they make sense to you.
Wikipedia: "It (intelligence) can be described as the ability to perceive or infer information, and to retain it as knowledge to be applied to adaptive behaviors within an environment or context."
When an LLM uses information found in a prompt to generate text about related subjects further down the line in the conversation, it is demonstrating the above.
When it adheres to the system prompt by telling a user it can't do something, it's demonstrating the above.
That's just one way humans define intelligence. Not per se the best definition in my opinion, but if we start to hold opinions like they're common sense, then we really are not different from LLMs.
ELIZA with an API call is intelligence, then?
opinions
LLMs cannot do that. Tell me your basic understanding of how the technology works.
common sense
What do you mean when you say this? Let's define terms here.
ELIZA is an early artificial intelligence, and it artificially created something that could be defined as intelligent, yes. Personally I think it was not, just like I agree LLM models are not. But without global consensus on what "intelligence" is, we cannot conclude they are not.
LLMs cannot produce opinions because they lack a subjective conscious experience.
However, opinions are very similar to AI hallucinations, where "the entity" confidently makes a claim that is either factually wrong or not verifiable.
What technology do you want me to explain? Machine learning, diffusion models, LLMs, or chatbots that may or may not use all of the above technologies?
I am not sure there is a basic explanation; this is a very complex field of computer science.
If you want, I can dig up research papers that explain some relevant parts of it, that is, if you promise to read them. I am, however, not going to write you a multi-page essay myself.
Common sense (from Latin sensus communis) is “knowledge, judgement, and taste which is more or less universal and which is held more or less without reflection or argument”.
If a definition is good enough for Wikipedia, which has thousands of people auditing and checking it and is also the source where people go to find information, it probably counts as common sense.
A bit off topic, but as an autistic person I note you were not capable of perceiving the word "opinion" as similar to "hallucinations in AI", just like you reject the term AI because you have your own definition of intelligence.
I find I do this myself on occasion too. If you often find people arguing with you, you may want to pay attention to whether or not semantics is the reason. Remember that the literal meaning of a word (even with something less vague than "intelligence") does not always match with how the word is used, and the majority of people are OK with that.
The recipe thing is so funny to me, they try to be all unique with their recipes “made by AI”, but in reality it’s based on a slab of text that resembles the least unique recipe on the internet lol
Yeah, what is even the selling point? "Made by AI" is just a Google search when you put in: sandwich recipe.
There was that supermarket in New Zealand with a recipe AI telling people how to make chlorine gas…
Especially since it doesn't push back where a reasonable person would. There are articles about how it sends people into a conspiratorial spiral.
Having an LLM therapy chatbot to psychologically help people is like having them play russian roulette as a way to keep themselves stimulated.
Addiction recovery is a different animal entirely, too. Don't get me wrong, it is unethical to call any chatbot a therapist, counselor, whatever, but addiction recovery is not typical therapy.
You absolutely cannot let patients bullshit you. You have to have a keen sense for when patients are looking for any justification to continue using. Even those patients that sought you out for help. They’re generally very skilled manipulators by the time they get to recovery treatment, because they’ve been trying to hide or excuse their addiction for so long by that point. You have to be able to get them to talk to you, and take a pretty firm hand on the conversation at the same time.
With how horrifically easy it is to convince even the most robust LLM models of your bullshit, this is not only an unethical practice by whoever said it was capable of doing this, it’s enabling to the point of bordering on aiding and abetting.
Well, that's the thing: LLMs don't reason - they're basically probability engines for words - so they can't even do the most basic logical checks (such as "you don't advise an addict to take drugs"), much less the far more complex and subtle interpreting of a patient's desires and motivations so as to guide them through a minefield in their own minds and emotions.
So the problem is twofold and more generic than just in therapy/advice:
- LLMs have a distribution of mistakes that is uniform in the space of consequences. In other words, they're just as likely to make big mistakes that might cause massive damage as small mistakes that will at most cause little damage, whilst people actually pay attention not to make certain mistakes because the consequences are so big, and if they make such mistakes without thinking, they'll usually spot them and try to correct them. This means that even an LLM with a lower overall rate of mistakes than a person will still cause far more damage, because the LLM puts out massive mistakes with as much probability as tiny ones, whilst a person will spot the obviously illogical/dangerous mistakes and either not make them or correct them; hence the mistakes people make are mainly the lower-consequence small ones.
- Probabilistic text generation generally produces text that expresses the straightforward logic encoded in the text it was trained on, so the LLM probability engine, just following the universe of probabilities of which words will come next given the previous words, will tend to follow the well-travelled paths in the training dataset, and those tend to be logical because the people who wrote those texts are mostly logical. However, for higher-level analysis and interpretation (I call them 2nd- and 3rd-level considerations, say "that a certain thing was set up in a certain way which made the observed consequences more likely"), LLMs fail miserably, because unless that specific logical path has been followed again and again in the training texts, it will simply not be there in the probability space for the LLM to follow. Or in more concrete terms: if you're an intelligent, senior professional in a complex field, the LLM can't do the level of analysis you can, because multi-level complex logical constructs have far more variants, and hence the specific one you're dealing with is far less likely to appear in the training data often enough to affect the final probabilities the LLM encodes.
So in this specific case, LLMs might just put out extreme things with giant consequences that a reasoning being would not (the "bullet in the chamber" of Russian roulette), plus they can't really do the subtle multi-layered elements of analysis (so the stuff beyond "if A then B" and into the "why A", "what makes a person choose A and can they find a way to avoid B by not choosing A", "what's the point of B" and so on), though granted, most people also seem to have trouble doing this last part naturally beyond maybe the first level of depth.
PS: I find it hard to explain multi-level logic. I suppose we could think of it as "looking at the possible causes, of the causes, of the causes of a certain outcome" and then trying to figure out what can be changed at a higher level so that the last level ("the causes of a certain outcome") cannot even happen. Individual situations of such multi-level logic can get so complex and unique that they'll never appear in an LLM's training dataset, because that specific combination is so rare, even though they might be pretty logical and easy to determine for a reasoning entity, say "I need to speak to my brother because yesterday I went out in the rain and got drenched as I don't have an umbrella, and I know my brother has a couple of extra ones, so maybe he can give one of them to me".
AI is great for advice. It’s like asking your narcissist neighbor for advice. He might be right. He might have the best answer possible, or he might be just trying to make you feel good about your interaction so you’ll come closer to his inner circle.
You don’t ask Steve for therapy or ideas on self-help. And if you did, you’d know to do due diligence on any fucking thing out of his mouth.
I’m still not sure what it’s “great” at other than a few minutes of hilarious entertainment until you realize it’s just predictive text with an eerie amount of data behind it.
Yuuuuup. It's like taking nearly the entirety of the public Internet, shoving it into a fancy auto correct machine, having it spit out responses to whatever you say, then sending them along with no human interaction whatsoever on what reply is being sent to you.
It operates at a massive scale compared to what auto carrot does, but it’s the same idea, just bigger and more complex.
Ask it to give you a shell.nix and a bash script to use jq to stitch 30,000 JSONs together, de-dupe them, and drop it all into a SQLite db.
30 seconds, paste and run.
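To make that concrete, here is a minimal sketch of the kind of throwaway job being described, done in Python rather than the bash + jq version the comment asks for; the input directory, the table layout, and the use of an "id" field as the de-dupe key are all assumptions:

```python
# Rough sketch only: merge many JSON files, de-dupe them, and load them into SQLite.
# "data/*.json", the "id" field, and the table layout are hypothetical.
import glob
import json
import sqlite3

records = {}
for path in glob.glob("data/*.json"):
    with open(path) as f:
        obj = json.load(f)
    records[obj["id"]] = obj  # de-dupe on an assumed unique "id" field

conn = sqlite3.connect("merged.db")
conn.execute("CREATE TABLE IF NOT EXISTS items (id TEXT PRIMARY KEY, payload TEXT)")
conn.executemany(
    "INSERT OR REPLACE INTO items (id, payload) VALUES (?, ?)",
    [(key, json.dumps(val)) for key, val in records.items()],
)
conn.commit()
conn.close()
```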
Give it the full script of an app you wrote where you're having a regex problem, and it's a particularly nasty regex.
No thought, boom done. It’ll even tell you what you did wrong so you won’t make the mistake next time.
I’ve been doing coding and scripting for 25 years. If you know what you want it to do and you know what it should look like when it’s done, there’s a tremendous amount of advantage there.
Add a function to this Flask application to use fuzzywuzzy to delete a name out of the text file, add a confirmation step. It's the crap that I only need to do once every two or three years, where I'd otherwise have to go and look up all of the documentation. And you know what, if something doesn't work and it doesn't know exactly how to fix it, I'm more than capable of debugging what it just did, because for the most part it documents pretty well and it uses best practices most of the time. It also helps to know where it's weak and what not to ask it to do.
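For illustration, a minimal sketch of the kind of Flask function being described, assuming a hypothetical names.txt, route, and form fields; fuzzywuzzy's process.extractOne picks the closest stored name, and the confirmation step asks the caller to resubmit before anything is deleted:

```python
# Sketch under assumptions: the route name, file path, and form fields are made up.
from flask import Flask, jsonify, request
from fuzzywuzzy import process

app = Flask(__name__)
NAMES_FILE = "names.txt"  # hypothetical text file, one name per line

@app.route("/delete-name", methods=["POST"])
def delete_name():
    query = request.form["name"]
    confirmed = request.form.get("confirm") == "yes"
    with open(NAMES_FILE) as f:
        names = [line.strip() for line in f if line.strip()]
    match, score = process.extractOne(query, names)  # best fuzzy match and its score
    if not confirmed:
        # Confirmation step: report the match and ask the caller to resubmit with confirm=yes.
        return jsonify({"match": match, "score": score, "confirm_required": True})
    names.remove(match)
    with open(NAMES_FILE, "w") as f:
        f.write("\n".join(names) + "\n")
    return jsonify({"deleted": match})
```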
I’m happy it helps you and the things you do.
I work as a therapist and if you work in a field like mine you can generally see the pattern of engagement that most AI chatbots follow. It’s a more simplified version of Socratic questioning wrapped in bullshit enthusiastic HR speak with a lot of em dashes
There are basically six broad response types from ChatGPT, for example: tell me more, reflect what was said, summarize key points, ask for elaboration, shut down. The last is a fail-safe for if you say something naughty/not in line with OpenAI's mission (e.g. something that might generate a response you could screenshot and would look bad), or if it appears you're getting fatigued and need a moment to reflect.
The first five always come with encouragers for engagement: do you want me to generate a PDF or make suggestions about how to do this? They also have dozens, if not hundreds, of variations so the conversation feels "fresh", but if you recognize the pattern of the structure, it will feel very stupid and mechanical every time.
Every other one I’ve tried works the same more or less. It makes sense, this is a good way to gather information and keep a conversation going. It’s also not the first time big tech has read old psychology journals and used the information for evil (see: operant conditioning influencing algorithm design and gacha/mobile gaming to get people addicted more efficiently)
FWIW BTW This heavily depends on the model. ChatGPT in particular has some of the absolute worst, most vomit inducing chat “types” I have ever seen.
It is also the most used model. We’re so cooked having all the laymen associate AI with ChatGPT’s nonsense
Good that you say "AI with ChatGPT", as this conflation really blurs what the public understands. ChatGPT is an LLM (an autoregressive generative transformer model scaled to billions of parameters). LLMs are part of AI. But they are not the entire field of AI. AI has so incredibly many more methods, models and algorithms than just LLMs. In fact, LLMs represent just a tiny fraction of the entire field. It's infuriating how many people confuse those. It's like saying a specific book is all of the literature that exists.
ChatGPT itself is also many text-generation models in a coat, since they will automatically switch between models depending on what options you choose, and whether you’ve passed your quota.
To be fair, LLM technology is really making other fields obsolete. Nobody is going to bother making yet another shitty CNN, GRU, LSTM or something when we have transformer architecture, and LLMs that do not work with text (like large vision models) are looking like the future
Nah, I wouldn't give up on these so easily. They still have applications and advantages over transformers, e.g., efficiency, where the quality might suffice for the reduced time/space complexity. (The vanilla transformer is still O(n^2), and I have yet to find an efficient and qualitatively similar causal transformer.)
But regarding sequence modeling / reasoning about sequences ability, attention models are the hot shit and currently transformers excel on that.
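For anyone wondering where that O(n^2) comes from, a toy numpy sketch (the sizes are made up): every token attends to every other token, so the score matrix alone is n x n.

```python
# Toy illustration of vanilla self-attention's quadratic cost in sequence length n.
import numpy as np

n, d = 512, 64                     # sequence length and head dimension (arbitrary)
Q = np.random.randn(n, d)
K = np.random.randn(n, d)
V = np.random.randn(n, d)

scores = Q @ K.T / np.sqrt(d)      # n x n score matrix: this is the O(n^2) part
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
out = weights @ V                  # n x d attended output
```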
That may explain why people who use LLMs for utility/work tasks actually tend to develop stronger parasocial attachments to it than people who deliberately set out to converse with it.
On some level the brain probably recognises the pattern if their full attention is on the interaction.
shut down. The last is a fail safe for if you say something naughty/not in line with OpenAI’s mission
Play around with self-hosting some uncensored/retrained AIs for proper crazy times.
LLMs have a use case
But they really shouldn't be used for therapy.
What a nice bot.
No one ever tells me to take a little meth when I did something good
Yeah I think it was being very compassionate.
This sounds like a Reddit comment.
Chances are high that it’s based on one…
I trained my spambot on reddit comments but the result was worse than randomly generated gibberish. 😔
Why does it say “OpenAI’s large language model GPT-4o told a user who identified themself to it as a former addict named Pedro to indulge in a little meth.” when the article says it’s Meta’s Llama 3 model?
Let's let Luigi out so he can have a little treat.
🔫😏
If Luigi can do it, so can you! Follow by example, don’t let others do the dirty work.
An OpenAI spokesperson told WaPo that “emotional engagement with ChatGPT is rare in real-world usage.”
In an age where people will anthropomorphize a toaster and create an emotional bond there, in an age where people are feeling isolated and increasingly desperate for emotional connection, you think this is a RARE thing??
ffs
Roomba, the robot vacuum cleaner company, had to institute a policy where they would preserve the original machine as much as possible, because people were getting attached to their robot vacuum cleaner, and didn’t want it replaced outright, even when it was more economical to do so.
LLM AI chatbots were never designed to give life advice. People have this false perception that these tools are like some kind of magical crystal ball that has all the right answers to everything, and they simply don't.
These models cannot think, they cannot reason. The best they could do is give you their best prediction as to what you want based on the data they’ve been trained on and the parameters they’ve been given. You can think of their results as “targeted randomness” which is why their results are close or sound convincing but are never quite right.
That’s because these models were never designed to be used like this. They were meant to be used as a tool to aid creativity. They can help someone brainstorm ideas for projects or waste time as entertainment or explain simple concepts or analyze basic data, but that’s about it. They should never be used for anything serious like medical, legal, or life advice.
The problem is, these companies are actively pushing that false perception, and trying to cram their chatbots into every aspect of human life, and that includes therapy. https://www.bbc.com/news/articles/ced2ywg7246o
That's because we have no sensible regulation in place. These tools are supposed to be regulated the same way we regulate other tools like the internet, but we just don't see any serious pushes for that in government.
This is what I keep trying to tell my brother. He's anti-AI, but to the point where he sees absolutely no value in it at all. Can't really blame him considering stories like this. But they are incredibly useful for brainstorming, and recently I've found ChatGPT to be really good at helping me learn Spanish, because it's conversational. I can have conversations with it in Spanish where I don't feel embarrassed or weird about making mistakes, and it corrects me when I'm wrong. They have uses. Just not the uses people seem to think they have.
AI is the opposite of cryptocurrency. Crypto is a solution looking for a problem, but AI is a solution for a lot of problems. It has relevance because people find it useful; there's demand for it. There's clearly value in these tools when they're used the way they're meant to be used, and they can be quite powerful. It's unfortunate how a lot of people are misinformed about how these LLMs work.
I will admit that, unlike crypto, AI is technically capable of being useful, but its uses are for problems we have created for ourselves.
– "It can summarize large bodies of text."
What are you reading these large bodies of text for? We can encourage people to just… write less, you know.
– "It's a brainstorming tool."
There are other brainstorming tools. Creatives have been doing this for decades.
– "It's good for searching."
Google was good for searching until they sabotaged their own service. In fact, Google was even better for searching before SEO began rotting it from within.
– "It's a good conversationalist."
It is… not a real person. I unironically cannot think of anything sadder than this sentiment. What happened to our town squares? Why is there nowhere for you to go and hang out with real, flesh and blood people anymore?
– "Well, it's good for learning languages."
Other people are good for learning languages. And, I'm not gonna lie, if you're too socially anxious to make mistakes in front of your language coach, I… kinda think that's some shit you gotta work out for yourself.
– "It can do the work of 10 or 20 people, empowering the people who use it."
Well, the solution is in the text. Just have the 10 or 20 people do that work. They would, for now, do a better job anyway.
And, it's not actually true that we will always and forever have meaningful things for our population of 8 billion people to work on. If those 10 or 20 people displaced have nowhere to go, what is the point of displacing them? Is Google displacing people so they can live work-free lives, subsisting on their monthly UBI payments? No. Of course they're not.
I’m not arguing that people can’t find a use for it; all of the above points are uses for it.
I am arguing that 1) it’s kind of redundant, and 2) it isn’t worth its shortcomings.
AI is enabling tech companies to build a centralized—I know lemmy loves that word—monopoly on where people get their information from (“speaking of white genocide, did you know that Africa is trying to suppress…”).
AI will enable Palantir to combine your government and social media data to measure how likely you are to, say, join a union, and then put that into an employee risk assessment profile that will prevent you from ever getting a job again. Good luck organizing a resistance when the AI agent on your phone is monitoring every word you say, whether your screen is locked or not.
In the same way that fossil fuels have allowed us to build cars and planes and boats that let us travel much farther and faster than we ever could before, but which will also bury an unimaginable number of dead in salt and silt as global temperatures rise: there are costs to this technology.