There’s a strong argument that any consumer-facing chatbot AI is “censored”.
If the model is not allowed to spew Nazi propaganda or tell the user to end themselves, that is censorship. Censorship is not automatically bad, but the kind of censorship can make it bad.
This reeks of stripping out all nuance to equate two things that are equal only at surface level. You’re bad because you punched the other person (ignoring that they stabbed your SO 15 times and kicked your dog across the room).
Chinese state censorship is well researched and extremely well documented. It does not equate to censorship against violent or inappropriate language. It is political censorship.
At best, western models are biased, not politically censored. You can make them say just about anything, but they will bias towards a particular viewpoint. Even if intentional, this is explainable by evaluating their training data, which itself is biased because western society is biased. You are not prevented from personally expressing or even convincing a western model from expressing dissenting political viewpoints.
I’m gonna take a second stab at replying, because you seem to be arguing in good faith.
My original point is that online chatbots have arbitrary curbs built in. I can run GPT 2.5 on my self-hosted machine, and if I knew how to do it (I don’t) I could probably get it to have no curbs via retraining and clever prompting. The same is true of the deepseek models.
I don’t personally agree that there’s a huge difference between one model being curbed from discussing Xi and another from discussing whatever the current politics du jour in the western sphere are. When you see platforms like Meta censoring LGBTQ topics but amplifying hate speech, or official congressional definitions of antisemitism including objection to an active and ongoing genocide, the idea of what government censorship is and isn’t becomes confusing.
Having personally received the bizarre internal agency emails circulating this week encouraging me to snitch out my colleagues to help root out the evils of DEIA thought in US gov’t has only crystallized it for me. I’m not sure I care that much about Chinese censorship or authoritarianism; I’ve got budget authoritarianism at home, and I don’t even get high-speed rail out of the bargain. At least they don’t depend on forever wars and all of the attendant death and destruction that come with them to prop up their Ponzi-scheme economies. Will they in the future? Probably. They are basically just a heavily centralized/regulated capitalist enterprise now, so who knows. But right now? Do they engage in propaganda? Cyber-espionage? Yes and yes. So do we, so do you, so does everyone who has a seat at the geopolitical table and the economy to afford it.
The point of all of this isn’t US GOOD CHINA BAD or US BAD CHINA GOOD. The article is about the deepseek models tearing out the floor of US dominance in AI. Personally, having deployed it and played with it, yeah. None of these products are truly useful to me yet, and I remain skeptical of their eventual value, but right now, party censorship or not, you can download a version of an LLM that you can run, retrain and bias however you want, and it costs you the bandwidth it took to download. And it performs on par with US commercial offerings that require pricey subscriptions. Offerings that apparently require huge public investment to keep afloat.
Where I disagree with you is not that the US is bad - the US is terrible, and there is plenty of evidence of that. I don’t even disagree with there being censorship in the US. In fact, Trump is objectively a piece of shit who wants nothing more than to become Xi/Putin himself.
What I disagree with is equating censorship in the US with Chinese censorship. I can call Trump a piece of shit online without worrying that the FBI will show up at my door. The models that are trained in the west will happily entertain any (non-violent) political discussions I want. There may be bias, and Trump may be trying to create censorship, but it’s not quite to that level yet.
I am concerned that the US will become as bad as China in terms of censorship, which is part of why I’m trying to leave right now. However, it’s not there yet. They are not yet equal, nor are they even close.
Yeah ok, I do basically agree with you. It’s not an accurate equivalency, yet. We’re trending bad though. I’d say the example of Stephen Miller sort of accidentally hinting that they shut down USAID because they all donated to the Harris campaign had some chilling implications, for example. He could just be assuming that, since that’s a safe assumption for populous urban areas generally, but they could also have cross-checked lists of employees against political contributions.
You say Chinese state censorship is an understood quantity. Could be. But I’d say that my points about equivalencies are to illustrate that what we think is true is often much more grey. I’ve been to China, and while I was impressed and shocked at how much more advanced it was than I expected, I also couldn’t imagine living there. It doesn’t change the fact that a stagnant late-stage capital mafia state that lives off defense contracting is performing poorly against a centrally controlled capitalist state that has set different priorities (that’s right, boy, deepseek-r1 is a side project of a…. CHINESE HEDGE FUND). It’s value neutral. But if you dismiss reality based on a conception of political censorship that I doubt you’ve deeply engaged with, enjoy.
The so-called free market certainly didn’t seem to take much reassurance in deepseek being compromised by communist censorship this morning, though. Probably because the deepseek news isn’t exceptional because of China, or because of what it is, but because of what it isn’t, compared to the bloated tech carcasses that the US has pinned its hopes on.