- cross-posted to:
- news@lemmy.world
Google is coming in for sharp criticism after a video went viral of the Google Nest assistant refusing to answer basic questions about the Holocaust — while having no problem answering questions about the Nakba.
How about this: don’t censor stuff.
If you train your large language model on all the internet’s bullshit and don’t want bullshit to come out, there aren’t a lot of good options. Garbage in, garbage out.
That kind of fits my opinion of LLMs in general. :)
Then you should say that instead of a reductive “don’t censor”. Censorship is important because you want to avoid false and harmful statements.
censor: to examine in order to suppress or delete anything considered objectionable
Removing false information isn’t the same as removing objectionable information.
But it is a subset of objectionable information.
objectionable: undesirable, offensive
Yes, false information is technically undesirable, but that’s not really what that word is trying to convey. The goal should be accurate information, not agreeable information. If the truth is objectionable/offensive, it should still be easily findable.
The same AI that made racially diverse Nazis? Why is Google so keen on rewriting history?
Those preventing history from being taught intend to repeat it.
Maybe it’s regional or something. I’m in Sweden and my Nest has no problem answering questions about the Holocaust and will happily quote Wikipedia for anything you ask.
My Google Home started answering as soon as the guy in the video asked. I’m in the US.