There’s no previous context to speak of; each screenshot shows a self-contained “conversation”, with no earlier input or output. And there’s no history to clear, since Gemini Apps Activity isn’t even turned on.
And even with your suggested prompt, one of the issues is still there:
The other issue isn’t being tested in this shot, since it’s language-specific, but it’s still relevant here: it reinforces that the problem is in the training, not in the context window.
Was just a guess. The AI is still shitty, lol.
What I’m trying to get at is the misconception that AI can’t generate novel content that isn’t in its training dataset. It can: an astronaut riding a horse is the classic test case, an image that existed nowhere before diffusion models. So it should be able to extrapolate a fuller wine glass. It’s just too dumb to do it, lol.
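If anyone wants to reproduce the comparison themselves, here’s a minimal sketch using an open diffusion model through Hugging Face’s diffusers library. It’s a stand-in, since the Gemini app doesn’t expose its image model this way; the model ID and the exact prompt wording are just my assumptions:

```python
import torch
from diffusers import StableDiffusionPipeline

# Stand-in checkpoint: the thread is about Gemini, but any open diffusion
# model shows the same contrast. The model ID here is illustrative.
pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")

prompts = [
    # Novel composition absent from the training set: reliably works.
    "a photograph of an astronaut riding a horse",
    # Extrapolation past the training distribution: typically fails,
    # and the glass comes back half-full anyway.
    "a wine glass filled completely to the brim with red wine",
]

for i, prompt in enumerate(prompts):
    image = pipe(prompt).images[0]  # returns a PIL.Image
    image.save(f"test_{i}.png")
```

The astronaut prompt coming out fine while the wine glass prompt doesn’t is exactly the point: the failure isn’t “it was never in the data, so it’s impossible”.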