- cross-posted to:
- chatgpt@lemdro.id
They’ve been acting like that from the start 🤷🏻‍♂️
Oh boy, how surprising.
I’m clutching my pearls as I type this.
So the development of inorganic intelligence, considered by many as an inflection point in human civilisation, is to be handed to business graduates, who are historically proven to be capable of any level of atrocity in the name of corporate greed. America, fuck yeah.
> America, fuck yeah.

Greed, fuck yeah. Don’t fool yourself. The USA lost the exclusivity deal on unchecked corpo greed a long time ago. This is a global issue now.
Always has been.
Yeah, the American tag was just a throwaway line; greed, unchecked, insane and self-harming, has always been with us. We let it sit with us around our campfires like wolves, but unlike wolves we never tamed it.
Then again, the US and China are basically the only players in this “game” atm. Hugging Face is trying hard to get the EU on-boarded, and I’m sure we’ll see more contenders. But right now it’s a 2-player game.
I think western interests at least are beginning to detach from nation states.
Actually, corporations themselves are already 99% of what people fear about AGI: inhuman decision-making to the detriment of humanity.
What do you mean by “inorganic intelligence,” exactly? Do you think openai has already achieved it?
Don’t see the issue, man. People are hard at work at OpenAI to make the best quality AI on the market. Why would you not give it the best economic system on the planet as well? It’s literally the best of the best of the best.
“ClosedAI” rebrand when?
🤣
NopeAI
Open Your Wallet AI
Stop depending on these proprietary LLMs. Go to !localllama@sh.itjust.works.
There are open-source LLMs you can run on your own computer if you have a powerful GPU. Models like OLMo and Falcon are made by true non-profits and universities, and they reach GPT-3.5 level of capability.
There are also open-weight models that you can run locally and fine-tune to your liking (although these don’t have open-source training data or code). The best of these (Alibaba’s Qwen, Meta’s Llama, Mistral, DeepSeek, etc.) match and sometimes exceed GPT-4o capabilities.
And there are also free, online hosted instances of those same LLMs in a (relatively speaking) privacy-protecting format from DuckDuckGo, for anyone who doesn’t have a powerful GPU :)
i’m not so sure on the privacy of any of this.
Interesting. So they mix the requests between all DDG users before sending them to “underlying model providers”. The providers like OAI and Anthropic will likely log the requests, but mixing is still a big step forward. My question is what do they do with the open-weight models? Do they also use some external inference provider that may log the requests? Or does DDG control the inference process?
All requests are proxied through DuckDuckGo, and all personalized user metadata is removed (e.g. IPs, any sort of user/session ID, etc.).
They have direct agreements not to train on or store user data (the training part is specifically relevant to OpenAI & Anthropic), with a requirement that they delete all information once it’s no longer necessary for providing responses, within 30 days.
For the Llama & Mixtral models, they host them on together.ai (an LLM-focused cloud platform) but that has the same data privacy requirements as OpenAI and Anthropic.
Recent chats that you save for later are stored locally (instead of on their servers), and once you exceed 30 conversations, the oldest one is automatically purged from your device.
Obviously there are fewer technical privacy guarantees than with a local model, but for when that’s not practical or possible, I’ve found it’s a good option.
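To illustrate what that metadata-stripping step amounts to, here’s a purely hypothetical sketch (this is not DuckDuckGo’s actual code; the header names and function are my own invention for illustration):

```python
# Hypothetical sketch of a privacy proxy removing identifying metadata
# from a chat request before forwarding it to an upstream model provider.

IDENTIFYING_HEADERS = {"x-forwarded-for", "cookie", "x-session-id", "user-agent"}

def strip_metadata(request: dict) -> dict:
    """Return a copy of the request with identifying headers removed."""
    headers = {k: v for k, v in request.get("headers", {}).items()
               if k.lower() not in IDENTIFYING_HEADERS}
    return {"headers": headers, "body": request.get("body", "")}

req = {"headers": {"Cookie": "session=abc", "Accept": "application/json"},
       "body": "hello"}
clean = strip_metadata(req)
print(clean["headers"])  # → {'Accept': 'application/json'}
```

The upstream provider then only ever sees the anonymized request coming from the proxy’s IP, which is the mixing effect discussed above.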
Okay that sounds like the best one could get without self-hosting. Shame they don’t have the latest open-weight models, but I’ll try it out nonetheless.
The issue with that method, as you’ve noted, is that it prevents people with less powerful computers from running local LLMs. There are a few models that would be able to run on an underpowered machine, such as TinyLlama; but most users want a model that can do a plethora of tasks efficiently like ChatGPT can, I daresay. For people who have such hardware limitations, I believe the only option is relying on models that can be accessed online.
For that, I would recommend Mistral’s Mixtral models (https://chat.mistral.ai/) and the surfeit of models available on Poe AI’s platform (https://poe.com/). Particularly, I use Poe for interacting with the surprising diversity of Llama models they have available on the website.
There are open-source LLMs you can run on your own computer if you have a powerful GPU.
What defines powerful? What if you don’t have the necessary hardware?
You can check Hugging Face’s website for specific requirements. I will warn you that a lot of home machines don’t meet the minimum requirements for many of the models available there. There is TinyLlama, which can run on most underpowered machines, but its functionality is very limited and it would fall short as an everyday AI chatbot. You can check my other comment too for other options.
Llama is good and I’m looking forward to trying DeepSeek 3, but the big issue is that those are the frontier open-source models, while 4o is no longer OpenAI’s best-performing model. They just dropped o3 (god, they are literally as bad as Microsoft at naming), which shows tremendous progress on reasoning benchmarks.
When running Llama locally I appreciate the matched capabilities like structured output, but it is objectively, significantly worse than OpenAI’s models. I would like to support open-source models and use them exclusively, but dang, it’s hard to give up the results.
I suppose one way to start for me would be dropping cursor and copilot in favor of their open source equivalents, but switching my business to use llama is a hard pill to swallow
Booooooooooo!
Anyway: I’ll just keep using Alpaca to run LLMs locally
is there an easy way to do this that doesn’t require me to understand how github works?
I recommend Ollama; it’s easy to set up, and the CLI can download and run LLMs. With some more tech-savviness you can get Open WebUI as a nice UI.
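To give an idea of what that looks like in practice, a minimal Ollama session is something like this (the model name is just an example; the snippet is guarded so it does nothing on a machine without Ollama installed):

```shell
# Download a model and ask it a one-shot question with Ollama's CLI.
if command -v ollama >/dev/null 2>&1; then
  ollama pull llama3.2                      # fetch the model weights
  ollama run llama3.2 "Why is the sky blue?"  # run a single prompt
else
  echo "ollama not installed"
fi
```

Running `ollama run` with no prompt argument drops you into an interactive chat in the terminal instead.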
For someone who doesn’t understand GitHub, the CLI might be a bit much, FWIW.
It would be nice if there were a GUI, download-and-run single click app with a webui built in.
in that case you’re looking for llamafiles: a single file, LLM included, that starts into a web GUI. the only limitation is that Windows caps the size of executable files at 4 GB, so on that OS you’re limited to smaller models.
That sounds interesting, I’ll check it out, thanks!
I think that in that case, YouTube is your friend. There are a few pretty straightforward videos that can help you out; if you’re serious about it, you’re going to have to become familiar with it eventually.
Alpaca for Linux is easy to use. You just install the Flatpak and the LLM of your choice. You don’t need to know how to use GitHub. (It might have a Windows version, but I’m not sure.)
deleted by creator
Weird, I said this shit for years, and I was upvoted into the heavens, agreed with, called a hero, and acknowledged as a result.
Maybe it’s not what was being said?
deleted by creator
Heh, I warn about Mozilla/Firefox all the time and get the same. I hope I’m wrong though :(
I only see negative posts about openai, I kinda find this unbelievable, do you have any examples?
I don’t think you’ve paid enough attention. Back when ChatGPT first launched, they were treated as saints.
The negative opinions have corresponded with public sentiment souring towards them in general (this did happen quite quickly, however).
Can you provide even one example? AI is my autistic obsession, and I never saw anything like that on lemmy even once.
I was even regularly searching for “AI” using the search feature daily.
I have never once seen this, and I don’t find it believable at all, honestly.
I think it’s another example of “internet bubbles” - people with similar views tend to congregate together and this is particularly true on the internet, when going elsewhere is always just a mouse-click away.
When ChatGPT first launched, Lemmy was still pretty much a ghost town, and it did cause a lot of optimistic excitement e.g. on reddit. Lemmy got a big surge in numbers when reddit did its infamous API changes - enshittification driven by spez’s and other reddit executives’ insatiable lust to exploit the site for more and more money.
Perhaps for this reason, people on Lemmy are more averse to the enshittification trend and generally exploitive nature of large tech companies. I think this is what people on Lemmy object to - tech companies’ concentration of power and profits by ripping off the general public - not so much the concept of LLMs themselves, but the fact they could easily be used to further inequality in society.
The claim was that people on lemmy treated openai as saints when chatgpt first came out, I never saw anything like that and I’ve been here for 5 years. https://lemmy.ml/u/communist
i’ve never seen this on lemmy, other places, sure, but not once on lemmy.
Yes, you’re right, sorry, I went off on a tangent about the reasons for the intense negativity in the Lemmyverse about LLMs. I’ve been using Lemmy for four years, and I definitely don’t think there have ever been any positive feelings towards LLMs here, especially as ChatGPT’s arrival predates the first surge of users on Lemmy (and the subsequent appearance of all the instances we see today). On reddit, yes, and there are still many people there who think OpenAI is great.
Removed by mod
One example and I shutup
Removed by mod
Removed by mod
Removed by mod
deleted by creator
On lemmy?
[citation needed]
https://lemmy.frozeninferno.xyz/search?q=openai&type=Posts&listingType=All&page=1&sort=TopAll I just tried this, went back 2 pages, not even one positive post, dunno what you’re talking about.
deleted by creator
??? I only see people talking about the negative of both of these on lemmy. Maybe reddit, sure, I have no idea there, but on lemmy??
I don’t know what you’re talking about.
I have never seen a positive post about openai get upvotes on lemmy, not even when they started.
I have never seen a positive post about elon musk on lemmy that got upvoted…
deleted by creator
Yeah but it certainly doesn’t mean it does exist just because you said so.
Something stinks.
That’s very open of them
Open to All Income.
I thought they were a for-profit company all this time.
Pretty much non-profit in name only. Some shady hybrid model.
OpenAI sure seems like a case study in how to grift everyone by masquerading as a non profit whilst actually enriching yourself and your shareholders, causing a whole new class of societal problems in the process.
Meh. I don’t think anyone that matters was really fooled.
Shocking nobody
Well, apart from the people like me who thought they had always been one because they acted exactly like one.
I thought they had successfully converted around the time they got the infusion of funds from MS. I thought they were started as a not-for-profit, but were already shady-as-shit when they stopped publishing stuff under open licenses.
There was never another outcome.
Capitalism breeds one thing, and it certainly isn’t innovation, and it most definitely isn’t not-for-profit innovation.
Capitalism is extremely good at breeding superficial, go-to-market innovation. It’s less good at funding the pure research that leads to major discoveries. But once it gets closer to engineering than to science, it’s highly effective. Even Marx commented on that.
So, ads in chat now?
‘subtle’ product recommendations
I’m Open AI and this is my favorite shop in the Citadel.
Yup, conversational product plugs
They’ve already started testing that at Google, for ad enhancement and for immersive ads. There’s no way they keep the chat models pristine and ad-free.
The dystopian future of “pay to use this miraculous product or it will shove advertisements down your throat in a way we know will work because we’ve trained it to sell specifically to you”
Hahaha. April 1st is early this year.
They are never going to make enough money by selling licenses and subscriptions to cover the cost of their current models (smarter people than me have made good estimates), let alone the future ones, which come at a much worse performance-cost ratio. Ads will at best bring in about $1 per user per month (estimated from Facebook’s revenue and user count); double or triple that just for lolz, and they would still be losing money.
So… how will this be pulled off? Only wrong answers!

Have a partnership with Microsoft and ship Windows 12 as the new “AI only” OS. Every command must go through ChatGPT to work. Then push updates to older Win11 installs to make them unusable.
From what I’ve heard, they don’t need to push updates to achieve that
How fast are they burning money right now?
Based on their funding rounds, $10 billion lasts about 18 months.
So about $555 million per month.
They don’t care if they earn money the next 5-7 years.
And they will hit the point of a great model doing human work for less than a monthly salary. It’s just a matter of time.
I’m incredulous.
There was that thread asking what people are using LLMs for and it pretty much came down to “softening language in emails”.
For most jobs LLMs can provide a small productivity bump.
IMO if an LLM can do most of your job then you’re not producing much value anyway.
Without enough funding, they absolutely will care.
That’s between $33 billion and $47 billion at current costs. Someone needs to fund that.
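For what it’s worth, the arithmetic in this subthread checks out (using the $10B-per-18-months figure quoted above):

```python
# Back-of-the-envelope check of the burn-rate numbers above.
funding = 10_000_000_000          # $10B per funding round
months = 18                       # roughly how long a round lasts
monthly_burn = funding / months   # ≈ $556M per month

five_years = monthly_burn * 12 * 5
seven_years = monthly_burn * 12 * 7
print(f"${monthly_burn/1e6:.0f}M/month, "
      f"${five_years/1e9:.0f}B over 5y, ${seven_years/1e9:.0f}B over 7y")
# → $556M/month, $33B over 5y, $47B over 7y
```

The “$555 million per month” above is the same number, just truncated instead of rounded.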
I’d also note that their models seem to be getting worse, with outright irrelevant answers, worse performance, failures to follow instructions, etc. Stanford and UC Berkeley did a months-long comparison, and even basic math is going downhill.
I’d rather say that it’s a matter of exponentially increasing funding and computing power.
LLMs are not advancing enough any more. There just isn’t any more useful human-generated text to train new models on; the net is already full of AI-generated slop. OpenAI currently spends $2.35 to make $1. It’s fundamentally unsustainable.
It cost a billion dollars to develop solar cells before the first product was even sold.
They cost $100,000 when they started selling.
They go for under 10 bucks per square today.
And it’s like that for any technology ever invented.
Yes, but solar cells are in the end very simple products made of very simple resources, with a limited task: converting one type of energy into another. That said, there is still research into making them more efficient and cheaper, and that research isn’t cheap.
But generative AI / LLMs take an insane amount of resources to train and maintain, are complex to create, have a very complex task, and a slight increase in quality takes progressively more resources (say, 10% better would be 50% more energy use; I don’t have the numbers anymore, but IIRC they were even worse). A better LLM would therefore be much, much more expensive, while people are apparently already underwhelmed with the latest models. With the growing competition, fast-rising costs and meagre quality updates, while they’re already unable to sustain themselves financially right now, I truly don’t see it. Honestly, this is why I think Microsoft is cramming their subpar Copilot into everything: to sort of justify all the money they pumped into this.

It’s also like that for nearly every technology that has failed. For every Amazon that ran in the red until it grabbed enough market share to make a profit, there are 1000 firms that went tits-up, never having turned a profit. (Actual constant may vary from 1000, but it’s pretty damn big regardless.)
I am honestly very very curious: how?
They’ll upgrade the Aibo and stick Altman’s face on it. People in offices can enjoy kicking it.
Ruh roh!