A prevailing sentiment online is that GPT-4 still does not understand what it talks about. We can argue semantics over what “understanding” truly means. I think it’s useful, at least today, to draw the line at whether GPT-4 has successfully modeled parts of the world. Is it just picking words and connecting them with correct grammar? Or does the token selection actually reflect parts of the physical world?
One of the most remarkable things I’ve heard about GPT-4 comes from an episode of This American Life titled “Greetings, People of Earth”.
Sure, here’s a link for you: https://old.reddit.com/r/ChatGPT/comments/16m6yc7/gpt4_training_cutoff_date_is_now_january_2022/
I’m aware of that date.
The OpenAI GPT-4 video literally states that GPT-4 finished training in August 2022.
Either way, to clarify and reiterate: you’re refuting a different point than the one I made. I said:
I’m not talking about whether it knows about its own training (I doubt that it does). I’m talking about it knowing about what’s happened in the broader AI landscape since.
I mean, your argument is still basically that it’s thinking inside there; everything I’ve said is germane to that point, including what GPT-4 itself has said.
My argument?
I’m not saying it’s thinking or has thoughts. I’m saying I don’t know the answer to that, but if it is, it definitely isn’t anything like human thought.