Jesus Christ, can we leave things alone that aren’t infinitely growing
“Gentlemen, it’s come to our attention that every one who could pay to use our product is paying to use our product. Unfortunately, it also means we’re no longer growing infinitely like we promised the shareholders we would. How do we fix this?”
The infinite growth mindset is so fucking stupid. Like, you’re still making an insane amount of money, what’s the fucking problem?
Yup. Shareholders are the problem: they bought shares at price X and want to sell those shares at X+Y.
And they will do anything to get it.
Yes, but what they stupidly never realize is that Y is a signed integer, not unsigned.
Because stock bros are lazy AF. They can’t even be bothered to buy and sell stock based on who is and isn’t doing well, so they use investment firms that offer safe and risky bets like ETFs and futures, respectively. Ultimately, everyone really just wants to buy one stock, have it make money that exponentially makes even more money quarter after quarter forever, and then do whatever they’d actually do if money were no object (i.e. actually live life).
We don’t live in a world where desire/need scales evenly with this desire for exponential, eternal growth. Therefore capitalists, who promise this impossible prospect to Wall Street, exploit human fears, desires, and needs however they can (union busting, lobbying, etc.) to keep growth up for as long as they can.
George Carlin probably put this concept best in what has to be one of the greatest bits of all time, on The Big Club. Although not directly related to this topic per se, it points to the truth of how this game between government politicians and capitalist corporate overlords is played. All the while, the majority of people, who are not in “The Big Club”, are basically fucked.
Business people really are just monkeys chasing shiny things. They tend to be less developed emotionally and are often very insecure on top of the entitlement. All they have is the chase, nothing else.
The most useless degree a university can grant is one in business administration.
No. Money has to make money with those sweet, sweet interest payments. It’s baked into the system. How else are you going to maintain a small elite of filthy rich people?
Ever more wealth must constantly be created in the world purely to service the interest on all that debt out there; otherwise you would get defaults and banks failing.
It’s not by chance that everywhere the “solution” to the 2008 crash (which happened mainly due to over-indebtedness in the mortgage segment) was to lower interest rates to pretty much zero - it weakens the pressure on the entire system to constantly grow merely to generate the additional wealth needed to pay the interest on the debt.
You’ll also notice that as soon as interest rates went up just a bit, bank profits massively grew.
This!
It writes my most boring emails so that I can save a scrap of mental energy for parenting properly after work. Even though my WPM ranges between 70-90 with >98% accuracy, I would rather save some of that mental energy to respond more thoughtfully as a dad.
Of note, I do not give one cold shit about GPT’s “growth”. It’s a linguistic power tool that needs to be carefully handled if you use it for any valuable work.
Of note, I do not give one cold shit about GPT’s “growth”
I mean, if you like the platform, its growth is tied to its continued existence and free usability. Still in the honeymoon phase as long as it’s growing.
Why is growth tied to continued existence?
Because companies insist on it and when growth stops they’ll start to cannibalize their own company and charge more money for things that used to be free or fairly-priced until they price themselves out of the market entirely and die as a service.
Yay, capitalism!
Capitalism demands that number must go up.
Just wanted to say that “GPT” is a general term and not just a name. OpenAI tried to trademark it but couldn’t because of that. It’s as if Nintendo was trying to trademark the word “Kart” because of “Mario Kart”.
hopefully it dies a quick but very painful death
How dare they provide a useful tool like this, those bastards.
This was inevitable, not sure why it’s newsworthy. ChatGPT blew up because it brought LLM tech to the masses in an easily accessible way and was novel at the mainstream level.
The majority of people don’t have a use for chat bots day-to-day, especially one that’s as censored and outdated as ChatGPT (its dataset is from over 2 years ago). Casual users would want it for simple stuff like quickly summarizing current events or even as a Google search-like repository of info. Can’t use it for that when even seemingly innocuous queries/prompts are met with ChatGPT scolding you for being offensive, or with a reminder that its dataset is old and not current. Sure, it was fun to have it make your grocery lists and workout plans, but that novelty eventually wears off as it’s not very practical all the time.
I think LLMs in the form of ChatGPT will truly become ubiquitous when they can train in real time on up-to-date data. And since that’s very unlikely to happen in the near future, I think OpenAI has quite a bit of progress left to make before their next breakout moment comes again. Although Sora did wow the mainstream (anyone in the AI scene has been well aware of AI-generated video for a while now), OpenAI has already said they’re not making that publicly available for now (which is a good thing for obvious reasons unless strict safety measures are implemented).
The P in GPT is Pretrained. It’s core to the architecture design. You would need to use some other ANN design if you wanted it to continuously update, and there is a reason we don’t use those at scale atm: they scale much worse than pretrained transformers.
It’s not exactly training, but Google just recently previewed an LLM with a million-token context that can do effectively the same thing. One of the tests they did was to put a dictionary for a very obscure language (only 200 speakers worldwide) into the context, knowing that nothing about that language was in its original training data, and the LLM was able to translate it fluently.
OpenAI has already said they’re not making that publicly available for now
This just means that OpenAI is voluntarily ceding the field to more ambitious companies.
Gemini is definitely poised to bury ChatGPT if its real-world performance lives up to the curated examples they’ve demonstrated thus far. As much as I dislike that it’s Google, I am still interested to try it out.
This just means that OpenAI is voluntarily ceding the field to more ambitious companies.
Possibly. While text to video has been experimented with for the last year by lots of hobbyists and other teams, the end results have been mostly underwhelming. Sora’s examples were pretty damn impressive, but I’ll hold judgment until I get to see more examples from common users vs cherry-picked demos. If it’s capable of delivering that level of quality consistently, I don’t see another model catching up for another year or so.
Sora’s capabilities aren’t really relevant to the competition if OpenAI isn’t allowing it to be used, though. All it does is let the actual competitors know what’s possible if they try, which can make it easier to get investment.
They are allowing people and companies to use it, it’s just limited access. I do not think it’s a good idea for them to open it to the public without plenty of safeguards. Deep fakes are becoming way, way too easy to manufacture nowadays, and I’m in no hurry to throw even more gasoline on a fire that’s already out of control.
GPT tried to convince me that there was more time in 366 days than 1.78 years.
Large language models are notoriously poor at math, you should probably use a different tool for that sort of thing.
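And that particular claim takes three lines of Python to disprove (using the average Gregorian year length of 365.2425 days):

```python
# Compare 366 days against 1.78 years, using the average
# Gregorian year length of 365.2425 days.
DAYS_PER_YEAR = 365.2425

days = 366
years_in_days = 1.78 * DAYS_PER_YEAR

print(years_in_days)         # 650.13165 days
print(days < years_in_days)  # True: 1.78 years is the longer span
```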
glorified autocorrect bad at math. who could have guessed
How do you reconcile that with all of the people claiming they use it to write code?
Writing code to do math is different from actually doing the math. I can easily write “x = 8982.2 / 98984”, but ask me what value x actually has and I’ll need to do a lot more work and quite probably get it wrong.
This is why one of the common improvements for LLM execution frameworks these days is to give them access to external tools. Essentially, give it access to a calculator.
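As a rough sketch of that pattern (every name and the call format here are invented for illustration - real frameworks like OpenAI function calling or LangChain tools use richer schemas, but the loop has the same shape): the framework holds a registry of tools, the model emits a structured “call”, and the framework executes it and feeds the exact result back to the model.

```python
# Minimal sketch of the "give the LLM a calculator" pattern.
import ast
import operator

# A calculator tool: safely evaluate simple arithmetic
# expressions by walking the parsed AST instead of eval().
_OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
}

def calculator(expression: str) -> float:
    """Evaluate a +-*/ arithmetic expression exactly."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported expression")
    return walk(ast.parse(expression, mode="eval"))

# The registry the framework exposes to the model.
TOOLS = {"calculator": calculator}

def run_tool_call(tool_name: str, argument: str):
    """What the framework does when the model requests a tool."""
    return TOOLS[tool_name](argument)

# Instead of guessing at "8982.2 / 98984" token by token, the
# model emits a call like this and gets the exact answer back:
result = run_tool_call("calculator", "8982.2 / 98984")
print(round(result, 6))  # 0.090744
```

The point is that the LLM only has to decide *which* tool to call and *with what input* - both language tasks it is good at - while the deterministic tool does the part it is bad at.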
If you’re like most developers, cognitive dissonance? https://www.gitclear.com/coding_on_copilot_data_shows_ais_downward_pressure_on_code_quality
hopefully it shrinks :)
I hope it goes away
This was always going to be limited. Eventually, no matter how much data you dump in, it won’t be unique enough to train anything new into the model.
I somehow think its usefulness isn’t tied to consumers logging in and just using it. Eventually, this thing will be bundled into a personal assistant like Siri or Alexa and that will be how most people use it.
Additionally, businesses are going to try to use it in a lot of different ways… replacing phone and chat support, writing ad copy, articles, code, legal documents, etc, etc, etc. It still feels a bit early for companies to have adopted it. I imagine it will just take a lot of work integrating it. ChatGPT also needs to be able to DO things, and wiring that all up is some work. For example, if I call a company for support with a product and ask for a refund, the AI needs to have access to the company’s systems to be able to do that task. It also has to do it reliably and correctly all the time.
It’s still really early days.
Microsoft has a library that does exactly what you describe - Semantic Kernel. You can register plugins that do all sorts of things in the real world, and the AI API responds with instructions on how to use these plugins.
I’m very sceptical of LLMs, but this is the closest this technology has come to actually being useful.
Given that they quietly walked back their stance on military projects during the Altman drama, my guess would be MI-related contracts.
This article will be aged junk in 3, 2 …
I get that the article is about user count, but that really comes down to perceived usefulness and also to more decent AI competitors to ChatGPT Pro.
They literally only just released text-to-video, which they say will be used as a foundation for AGI reasoning.
They have also hinted that training on GPT-5 has begun and that it will be faster to train than GPT-4.
Just before that, Google came out with a new model that can keep track of 10 mil tokens, beats Gemini Pro, and is also much faster to train. Gemini Pro is barely a month old.
This will not be a quiet year for AI; if there’s any flatline, it’s going to be vertical - Google’s progress nearly is. Most AI progress benefits the entire industry over time.
Obviously you’re not evolved enough to realize that AI is THE FUTURE of all things and everything is better with AI! A child in a poor environment was saved by AI! A king who was mean was dethroned with AI! Everyone was made happy by AI! Say it! SAY IT!!!
Who’s Al??
Allen Iverson.
That’s what the 7 TRILLION dollars they are seeking are for.
Was this article written by an ai?
The image is AI. Look at the keys on that keyboard. What fucking language is that?
And for you AI fanbois who have to downvote this because nooooooo AI images future - pffft. Your boos mean nothing to me, I’ve seen what makes you cheer.
I’m downvoting you because you’re annoying and a detriment to the conversation, not because you recognised an AI generated image, which really didn’t require an inspection of the keyboard to determine.
Oh yeah well you’re snobbish and truculent and i hereby demote you to poophead. Plus, you should see the other thread where the “obvious” AI image was roundly supported as being real.
That looks like a human-generated image. If you’ve played around with text-to-image generators, you’d be hard pressed to find a keyboard that accurate. If I had to guess, it’s a separate keyboard and monitor. That’s why the keyboard doesn’t seem to be centered. The hands are in the wrong place probably because they’re simply placed further to the right than a normal typist’s would be.
For real, try to generate images with a variety of current text to image models and you’ll see what I mean.
They explicitly say that the image is AI. That doesn’t mean the article is. I’m not sure what your comment is implying.
Why do you think so, and why does it matter?
Exactly. Article looked fine to me, if it was AI-written then it did a good job.