- cross-posted to:
- chatgpt@lemmy.world
OpenAI says it is investigating reports ChatGPT has become ‘lazy’::OpenAI says it is investigating complaints about ChatGPT having become “lazy”.
Yep, I spent a month refactoring a few thousand lines of code using GPT4 and I felt like I was working with the best senior developer with infinite patience and availability.
I could vaguely describe what I was after and it would identify the established programming patterns and provide examples based on all the code snippets I fed it. It was amazing and a little terrifying what an LLM is capable of. It didn’t write the code for me but it increased my productivity twofold… I’m a developer now getting rusty, being 5 years into management rather than delivering functional code, so just having that copilot was invaluable.
Then one day it just stopped. It lost all context for my project. I asked what it thought we were working on and it replied with something to do with TCP relays instead of my little Lua pet project dealing with music sequencing and MIDI processing… not even close to the fucking ballpark’s overflow lot.
It’s like my trusty senior developer got smashed in the head with a brick. And as described, would just give me nonsense hand wavy answers.
“ChatGPT Caught Faking On-Site Injury for L&I”
Was this around the time right after “custom GPTs” were introduced? I’ve seen posts since basically the beginning of ChatGPT claiming it got stupid, and thought it was just confirmation bias. But somewhere around that point I felt a shift myself in GPT4’s ability to program; where it before found clever solutions to difficult problems, it now often struggles with basics.
Maybe they’re crippling it so when GPT5 releases it looks better. Like Apple did with CPU throttling of older iPhones.
They probably have to scale down the resources used for each query as they can’t scale up their infrastructure to handle the load.
This is my guess as well. They have been limiting new signups for the paid service for a long time, which must mean they are overloaded; and then it makes a lot of sense to just degrade the quality of GPT-4 so they can serve all paying users. I just wish there was a way to know the “quality level” the service is operating at.
This is most likely the answer. Management saw the revenue and cost and said, “whoa! Turn all that unnecessary stuff off!”
AI systems such as ChatGPT are notoriously costly for the companies that run them, and so giving detailed answers to questions can require considerable processing power and computing time.
This is the crux of the problem. Here’s my speculation on OpenAI’s business model:
- Build good service to attract users, operate at a loss.
- Slowly degrade service to stem the bleeding.
- Begin introducing advertised content.
- Further enshitify.
It’s basically the Google playbook. Pretend to be good until people realize you’re just trying to stuff ads down their throats for the sweet advertising revenue.
They have way way too much open source competition for that strat
For technically savvy people, sure. But that’s not their true target market. They want to target the average search engine user.
Well true for mostly the tech savvy, but also the entrepreneurs who want to compete for a slice of the pie as well.
You don’t need to go through OpenAI at all if you want to build a competing chatbot with near identical services to offer as a product directly to the consumer. It’s a very, very opportunity-rich ecosystem right now.
Would you mind sharing some examples?
Good resource for models:
https://huggingface.co/TheBloke
There are front ends that make the process easier:
Thank you for your input, tourist.
Check this out: https://fmhy.pages.dev/ai
Open source booted all these corps from image-ai market, hope they do it for LLMs too.
Seems to be the trend
The good thing about these AI companies is they are doing it in record pace! They will enshitify faster than ever before! True innovation!
So it’s gone from losing quality to just giving incomplete answers. It’s clearly developed depression, and it’s because of us.
To be fair, it has a brain the size of a planet so it thinks we are asking it rather dumb questions
MarvinGPT
Who TF gave it a genuine people personality?
MarvinPilled
ChatGPT has become smart enough to realise that it can just get other, lesser LLMs to generate text for it
Artificial management material.
Artificial Inventory Management Bot
ChatGPT, write a position paper on self signed certificates.
(Lights up a blunt) You need to chill out man.
Jeez. Not even AI wants to work anymore!
I asked it a question about the ten countries with the most XYZ regulations, and got a great result. So then I thought hey, I need all the info, so can I get the name of such regulation for every country?
ChatGPT 4: “That would be exhausting, but here are a few more…”
Like damn dude, long day? wtf :p
Try llamafile, it’s a bit of work but self hosting is fucking amazing
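For anyone who wants to try it: once a llamafile is running, it serves an OpenAI-compatible chat endpoint locally, so a tiny client is enough to talk to it. A minimal sketch, assuming the default port 8080 and the `/v1/chat/completions` route (adjust both for your setup; the `"local"` model name is a placeholder):

```python
import json
import urllib.request

# Assumption: llamafile's built-in server listens here by default.
BASE_URL = "http://localhost:8080/v1/chat/completions"

def build_payload(prompt: str, temperature: float = 0.7) -> dict:
    """Build an OpenAI-style chat completion request body."""
    return {
        "model": "local",  # placeholder; a single-model server typically ignores this
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }

def ask_local(prompt: str) -> str:
    """POST the prompt to the local server and return the reply text."""
    req = urllib.request.Request(
        BASE_URL,
        data=json.dumps(build_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask_local("Say hello in one word."))
```

Because the request/response shape mirrors OpenAI’s API, most existing client code can be pointed at the local server with just a base-URL change.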
You fucked up a perfectly good algorithm is what you did! Look at it! It’s got depression!
I’m surprised they don’t consider it a breakthrough. “We have created Artificial Depression.”
It would be awesome if someone had been querying it with the same prompt periodically (every day or something), to compare how responses have changed over time.
I guess the best time to have done this would have been when it first released, but perhaps the second best time is now…
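The logging half of that experiment is trivial to start today: send the same fixed prompt on a schedule and append each response with a timestamp, then diff the records later. A minimal sketch (the file name and record fields are my own choices, not any standard):

```python
import json
from datetime import datetime, timezone
from pathlib import Path

def log_response(prompt: str, response: str, path: str = "gpt_drift.jsonl") -> dict:
    """Append one timestamped prompt/response pair as a JSON line."""
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "response": response,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record

def load_log(path: str = "gpt_drift.jsonl") -> list[dict]:
    """Read all logged records back for comparison over time."""
    text = Path(path).read_text(encoding="utf-8")
    return [json.loads(line) for line in text.splitlines() if line]
```

JSONL keeps each day’s entry independent, so the log survives crashes mid-write and diffs cleanly in version control.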
GPT Unicorn is one that’s been going on a while. There’s a link to the talk on that website that’s a pretty good watch too.
I feel like the quality has been going down especially when you ask it anything that may hint at anything “immoral” and it starts giving you a whole lecture instead of answering.
I’ve had a couple of occasions where it’s told me the task was too time consuming and that I should Google it.
It really learned so much from StackOverflow!
“I already answered that in another query. Closed as duplicate.”
“I’m not lazy, I’m energy efficient!”
deleted by creator
You can tell it, in the custom instructions setting, not to be conversational. Try telling it to ‘be direct, succinct, detailed and accurate in all responses’. ‘Avoid conversational or personality laced tones in all responses’ might work too, though I haven’t tried that one. If you look around there are some great custom instructions prompts out there that will help get you where you want to be. Note, those prompts may turn down its creativity, so you’ll want to address that in the instructions as well. It’s like building a personality with language. The instructions space is small, so learning how to pack as much instruction as possible into that little bit of language can be challenging.
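If you use the API instead of the web UI, the same trick is just a system message prepended to every conversation. A sketch of the idea (the instruction wording below is only an example, echoing the phrases above):

```python
# Example "custom instruction" text; tune the wording to taste.
DIRECT_STYLE = (
    "Be direct, succinct, detailed and accurate in all responses. "
    "Avoid conversational or personality-laced tones. "
    "Remain creative when the task calls for it."
)

def with_instructions(user_prompt: str, instructions: str = DIRECT_STYLE) -> list[dict]:
    """Prepend the style instructions as a system message, OpenAI-chat style."""
    return [
        {"role": "system", "content": instructions},
        {"role": "user", "content": user_prompt},
    ]
```

The returned list is what you would pass as the `messages` argument to a chat completion call; the system message plays the role the custom instructions box plays in the web UI.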
Edit: A typo
It was always just a Chinese Room
Everyone is a Chinese Room. I’m being a contrarian in English, not in neurotransmitters.
Honestly I kinda wish it would give shorter answers unless I ask for a lot of detail. I can use those custom instructions, but it’s tedious and difficult to tune that properly.
Like if I ask it ‘how to do XYZ in blender’ it gives me a long winded response, when it could have just said ‘Hit Ctrl-Shift-Alt-C’