[citation needed]
I’m sure the argument here will be about the definition of “novel” no matter what evidence I provide. Every time LLMs do something previously thought impossible, people quickly move the goalposts. Downvote me all you want; I know Lemmy is strongly anti-AI and nothing I say actually matters.
Lol, I can’t even downvote you cuz my instance doesn’t support them.
I’m genuinely curious, because it sounds like you’re suggesting that the models are moving past just being generative transformers into something more intelligent, and I just have not seen evidence of that. Only empty claims that it exists, and very weak examples of ‘novel responses’ that are still just a generative transformer’s response.
But sure, if you can’t support your point with solid evidence, passive-aggressive dismissal of skepticism works just as well. People are constantly fed a narrative that AI is amazing and can do all this novel shit, but I have yet to see anything to back it up.
Right, this is exactly what I’m talking about. Saying that it’s “still just a generative transformer’s response” by definition presupposes that every response must be non-novel, even if the solution can be proven not to be in the training set. This is a pointless discussion if that is the line you want to draw.
That’s a lot of words to not answer a question.
I’m happy to discuss this further if you are willing to argue in good faith. The first step would be to set firm definitions for our terms so there is no goalpost moving. Otherwise, I have no interest in this conversation.
You have enough interest to rant, but not enough to answer the question.
At this point, I’m just going to assume you don’t have any evidence and are just having a giggle.
In order for him to answer your question, he’d need you to define more precisely what you are asking, so he doesn’t argue against a point you aren’t making. You seem to refuse that clarification and just keep saying he’s a bullshitter, which implies you don’t actually care whether he can provide evidence at all.
He’s saying he doesn’t trust you not to waste his time. Arguing with evidence requires effort that is often wasted on people who don’t care about evidence (even if they say they do).
Just pointing out that I don’t have a horse in this race otherwise. I’m not going to claim that AI can solve novel problems yet; I just despise intellectual dishonesty.