I asked ChatGPT about Elon’s salute, just for fun, and it replied that it could not analyse pictures and that I needed to describe it. I have had pictures analysed before, even translating and explaining a cartoon joke. I thought it could be caused by the .webp format, so I retried the joke image and it worked fine. Then I uploaded a PNG of Elon and it went wrong again. Can others repeat the experience, or was it just a fluke? I’m not angry, just a little let down.
To be honest, Elon using the Nazi salute was probably not in the training data, or at least not high up on the probability chart.
As far as I know, Elon does have beef with OpenAI, though.
That’s not how it works. If the training data had to include specific people doing specific things, it’d be useless. Instead, the training lets it infer individual image elements from different sources. It never had to be trained on orange pentagons to be able to identify “orange” and “pentagon” in an image of an orange pentagon.
Well well well turns out artificial “intelligence” isn’t so intelligent after all
I didn’t want to suggest this was a response purpose-made for this image. Still, I find it worrying that instead of admitting it doesn’t know this image or can’t tell if it’s fake or real, it jumps straight to “fake”.