What does that mean, and isn’t that still something people can employ for their creative process?
LLMs analyse their inputs and build a stochastic model (i.e. an estimate of how probability is distributed over the possible next words) of which word comes next.
Yes, it can help in a creative process, but so can literal noise. It can’t “be creative” in itself.
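To make that concrete, here is a minimal sketch of a “model of which word comes next” as a toy bigram counter in Python (my own illustration, nothing like a real LLM in scale or method, but the predict-a-distribution-and-sample idea is the same):

```python
import random
from collections import Counter, defaultdict

# Toy sketch of "a stochastic model of which word comes next":
# count which word follows each word in some text, then sample
# continuations from those counts.

training_text = "the rose is red the rose is lovely the sky is blue"
words = training_text.split()

# Build next-word counts (a simple bigram model).
followers = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    followers[current][nxt] += 1

def next_word(word):
    """Sample the next word in proportion to how often it followed `word`."""
    options = followers[word]
    choices, weights = zip(*options.items())
    return random.choices(choices, weights=weights)[0]

# Generate a short continuation, one sampled word at a time.
word = "the"
output = [word]
for _ in range(6):
    if not followers[word]:  # no observed continuation, stop
        break
    word = next_word(word)
    output.append(word)

print(" ".join(output))  # e.g. "the rose is red the sky is"
```

A real LLM replaces the word-pair counts with a neural network conditioned on the whole preceding context, but what it produces is still a probability distribution over the next token, which is then sampled.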
How does that preclude these models from being creative? Randomness within rules can be pretty creative. All life on earth is the result of selection on random mutations. And a model’s output is way more structured and coherent than random noise, so noise isn’t a good comparison at all.
Either way, generative tools are a great way for the people using them to create; no model has to be creative on its own.
They lack intentionality, simple as that.
Yup, my original point still stands.
How is intentionality integral to creativity?
Are you serious?
Intentionality is integral to communication. Creative art is a subset of communication.
I was asking about creativity, not art. It’s possible for something to be creative and not be art.
I still posit that creativity requires intentionality.
I don’t think all creativity requires intentionality. Some forms of creativity are the accumulation of unintentional outcomes, like when someone sets out to copy a thing but, due to mistakes or other factors outside their control, ends up with something different from what they were going for.
A person sees a piece of art and is inspired. They understand what they see, be it a rose bush to paint or a story beat to work on. This inspiration leads to actual decisions being made with a conscious aim to create art.
An AI, on the other hand, sees a rose bush and adds it to its rose bush catalog, reads a story beat and adds it to its story database. These databases are then shuffled and things are picked out, with no mind involved whatsoever.
A person knows why a rose bush is beautiful, and internalises that thought to create art. They know why a story beat is moving, and can draw out emotional connections. An AI can’t do either of these.
The way you describe how these models work is wrong. This video does a good job of explaining how they work.
Yeah, I know it doesn’t actually “see” anything, and is just making best guesses based on its training data. I was just simplifying for the comparison.