I asked it about human rights in China in the browser version. It actually wrote a fully detailed answer, explaining that it is reasonable to conclude that China violates human rights, and then the reply disappeared right in front of me while I was reading. I managed to reproduce it and record my screen. The interesting thing is that this won't happen if you run it locally; I've just tried it and the answer wasn't censored.
Most likely there is a separate censor LLM watching the main model's output. When it detects something that needs to be censored, it zaps the output away and stops further processing. That's why you can see the answer at first: the censor model is still "thinking" while the reply streams to you.
When you download the model and run it locally it has no such censorship.
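The zap-after-display behavior described above can be sketched as a stream-then-retract pipeline. This is purely speculative; every name here (`generate_stream`, `is_flagged`, the event tuples) is hypothetical, since the actual service internals are unknown:

```python
# Hypothetical sketch: tokens stream to the client immediately, while a
# separate moderation check races behind them. If the check flags the
# text, a retract event tells the client to delete the visible answer.

def generate_stream(prompt):
    # Stand-in for the main model streaming tokens to the client.
    for token in ["It ", "is ", "reasonable ", "to ", "conclude..."]:
        yield token

def is_flagged(text):
    # Stand-in for the separate censor model scoring the output.
    # In practice this runs concurrently, which is why tokens can
    # reach the user before the verdict arrives.
    return "conclude" in text

def serve(prompt):
    shown = []  # tokens already displayed to the user
    for token in generate_stream(prompt):
        shown.append(token)
        yield ("show", token)
    # Moderation verdict lands after streaming finishes:
    if is_flagged("".join(shown)):
        yield ("retract", None)  # client wipes the whole reply

events = list(serve("human rights question"))
print(events[-1])  # last event retracts the answer the user just read
```

Running the model locally skips the `serve` wrapper entirely, which would explain why the local answer survives.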
I asked it about "CCP controversies" in the app and it did the exact same thing twice: a fully detailed answer, removed about a second after it finished.