That hasn’t been true for a little bit now. They are increasingly able to solve novel problems outside the training set.
I get the pushback against AI, I really do. But let’s at least be honest about their capabilities.
[citation needed]
I’m sure the argument here will be about the definition of “novel” no matter what evidence I provide. Every time LLMs do something previously supposed to be impossible, people quickly move the goalposts. Downvote me all you want; I know Lemmy is strongly anti-AI and nothing I say actually matters.
Lol, I can’t even downvote you cuz my instance doesn’t support them.
I’m genuinely curious, because it sounds like you’re suggesting that the models are moving past being just generative transformers into something more intelligent, and I just have not seen evidence of that. Only empty claims that it exists, and very weak examples of “novel responses” that are still just generative transformer responses.
But sure, if you can’t support your point with solid evidence, passive aggressive dismissal of skepticism works just as well. People are constantly fed a narrative that AI is amazing and can do all this novel shit, but I have yet to see anything to back it up.
Right, this is exactly what I’m talking about. Saying that it’s “still just a generative transformer response” by definition presupposes that every response must be non-novel, even if the solution can be proven not to be in the training set. This is a pointless discussion if that is the line you want to draw.
That’s a lot of words to not answer a question.
I’m happy to discuss this further if you are willing to argue in good faith. The first step would be to set firm definitions for our terms so there is no goalpost moving. Otherwise, I have no interest in this conversation.
You have enough interest to rant, but not enough to answer the question.
At this point, I’m just going to assume you don’t have any evidence and are just having a giggle.
In order for him to answer your question, he’d need you to define more precisely what you are asking so he doesn’t argue against a point you aren’t making. You seem to refuse that clarification and are just saying he’s a bullshitter repeatedly, which implies you don’t actually care whether he can provide evidence at all.
He’s saying he doesn’t trust you not to waste his time. Arguing from evidence requires effort that is often wasted on people who don’t care about evidence (even if they say they do).
Just pointing out I don’t have a horse in this race otherwise. I’m not going to make the claim AI can yet solve novel problems. I just despise intellectual dishonesty.
A lot of people here just lack the critical thinking to properly criticize AI. Yes, AI produces a lot of mediocre slop, but that doesn’t mean AI as a whole is bad in every possible regard.
It doesn’t take much of a big brain to mistrust the ideas coming from AI. How are those children’s books written by AI and sold on Amazon coming along?
It also doesn’t take much of a big brain to realize you can do more with AI than use it as a writer or an ideation tool for writing, which is a bad way to use it. Which goes back to my point that you need better critical thinking to criticize AI, because this ain’t it.
My criticism of AI in this thread sums up to: it sucks for long-form writing, and also for creating kids’ books.
Your thing is, be better at criticizing AI. Ok. Go ahead.
And my point is that if you dismiss AI as a whole because of these criticisms, you’re failing to see that AI can be used other ways. But I’m not talking about you, I was responding to someone else.