You are unfortunately wrong on this one. The term “AI” has been used to describe things other than AGI since basically the invention of computers that could solve problems. The people that complain about using “AI” to describe LLMs are actually the ones trying to change language.
I think the modern pushback comes from people who get their understanding of technology from science fiction. SF has always (mis)used AI to mean sapient computers.
LLMs are a way of developing an AI. There are plenty of real conspiracies in this world; it’s better to focus on those rather than make stuff up.
There really is an amazing technological development going on, and you’re dismissing it over irrelevant semantics.
The acronym AI has been used in game dev for ages to describe things like pathing and simulation. These are almost invariably algorithms, such as A* (used by autonomous entities to find a path to a specific destination), or emergent behaviours, which are also algorithms where simple rules are applied to individual entities - for example, each bird in a flock - to create a complex whole from many such simple agents. An example of this in gamedev is Steering Behaviours; outside gaming, the Game of Life.
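For the curious, the A* mentioned above fits in a few lines. This is a minimal sketch, not production pathfinding code - the grid format and the function names here are made up purely for illustration:

```python
import heapq

def a_star(grid, start, goal):
    """Shortest path on a 4-connected grid of strings ('#' = wall).
    Returns the list of (row, col) cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])

    def h(cell):
        # Manhattan distance: an admissible heuristic on a unit-cost grid
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    frontier = [(h(start), start)]      # priority queue ordered by f = g + h
    came_from = {start: None}
    cost = {start: 0}                   # g: cheapest known cost from start

    while frontier:
        _, current = heapq.heappop(frontier)
        if current == goal:             # reconstruct the path backwards
            path = []
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1]
        r, c = current
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != '#':
                new_cost = cost[current] + 1
                if nxt not in cost or new_cost < cost[nxt]:
                    cost[nxt] = new_cost
                    came_from[nxt] = current
                    heapq.heappush(frontier, (new_cost + h(nxt), nxt))
    return None                         # goal unreachable
```

Fully deterministic, inspectable, explainable - and this kind of thing has been called “AI” in game dev for decades.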
Words might have meanings, but AI has been used by researchers to refer to toy neural networks for longer than most people on Lemmy have been alive.
This insistence that AI must refer to human-type intelligence is also such a weird distortion of language. Intelligence has never been a binary, human-level indicator. When people say that a dog is intelligent, or that an ant hive shows signs of intelligence, they don’t mean it can do what a human can. Why should AI be any different?
You honestly don’t seem to understand. This is not about the extent of intelligence. This is about actual understanding. Being able to classify a logical problem / a thought into concepts and processing it based on properties of such concepts and relations to other concepts.
Deep learning, as impressive as the results may appear, is not that. You just throw training data at a few billion “switches” and flip switches until you get close enough to a desired result, without being able to predict what the outcome will be if a tiny change happens in the input data.
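That “flip switches until close enough” loop can be sketched in a few lines - here as a toy perceptron nudged toward an AND gate. The function names, learning rate, and epoch count are arbitrary illustration, not any particular framework’s API:

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Error-driven weight nudging: no concepts, no causal model,
    just adjust until the outputs are close enough to the targets."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            # "flip the switches" slightly in the direction that reduces error
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

After training on the four AND examples, the weights reproduce the gate - but the numbers themselves explain nothing about *why*; they are just the state the trial-and-error loop happened to settle into.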
With regards to the dog & my description of intelligence, you are wrong: Based on all that we know and observe, a dog (any animal, really) understands concepts and causal relations to varying degrees. That’s true intelligence.
When you want to have artificial intelligence, even the most basic software can have some kind of limited understanding that actually fits this attempt at a definition - it’s just that the functionality will be very limited and pretty much appear useless.
Think of it this way:
- deterministic algorithm -> has concepts and causal relations (but no consciousness, obviously); results are predictable (deterministic) and can be explained
- deep learning / neural networks -> does not implicitly have concepts or causal relations; results are statistical (based on previous result observations) and cannot be explained -> there’s actually a whole field of research looking into how to model such systems’ way to a solution
- Addition: the input / output filters of pattern recognition systems are typically fed through quasi-deterministic algorithms to “smoothen” the results (make output more grammatically correct, filter words, translate languages)
If you took enough deterministic algorithms, typically tailored to very specific problems & their solutions, and were able to use those as building blocks for a larger system that is able to understand a larger part of the environment, then you would get something resembling AI. Such a system could be tested (verified) on sample data, but it should not require training on data.
Example: You could program image recognition using math to find certain shapes, which in turn - together with colour ranges and/or contrasts - could be used to associate object types, for which causal relations can be defined, upon which other parts of an AI could then base decision processes. This process has potential for error, but in a similar way that humans can mischaracterize the things we see - we also sometimes do not recognize an object correctly.
I’ve given up trying to enforce the traditional definitions of “moot”, “to beg the question”, “nonplussed”, and “literally”; it’s helped my mental health. A little. I suggest you do the same - it’s a losing battle, and the only person who gets hurt is you.
OP is an idiot though, hope we can agree on that one.
Telling everyone else how they should use language is just an ultimately moronic move. After all we’re not French, we don’t have a central authority for how language works.
There’s a difference between objecting to misuse of language and “telling everyone how they should use language” - you may not have intended it, but you used a straw man argument there.
What we all should be acutely aware of (but unfortunately many are not) is how language is used to harm humans, animals or our planet.
Fascists use language to create “outgroups” which they then proceed to dehumanize and eventually violate or murder.
Capitalists speak about investor risks to justify returns on investment, and proceed to lobby for de-regulation of markets, which causes human and animal suffering through price gouging and factory farming of livestock.
Tech corporations speak about “Artificial Intelligence” and proceed to persuade regulators that - because these are “intelligent” systems - this software may be used for autonomous systems that then cause injury and death when they malfunction.
Yes, all such harm can be caused by individuals in daily life - individuals can be murderers or extort people over something they really need, or a drunk driver can cause an accident that kills people. However, language that normalizes or facilitates such atrocities or dangers on a large scale is dangerous, and therefore I will continue calling out those who want to label the shitty penny-market LLMs and other deep learning systems as “AI”.
This is such a half-brained response. Yes, “actual” AI in the form of simulated neurons is pretty far off, but it’s fairly obvious that when people say AI they mean LLMs and other advanced forms of computing. There are other forms of AI besides LLMs anyway, like image analyzers.
The only thing half-brained is the morons who advertise any contemporary software as “AI”. The “other forms” you mention are machine learning systems.
AI contains the word “intelligence”, which implies understanding. A bunch of electrons manipulating a bazillion switches following some trial-and-error set of rules until the desired output is found is NOT that. That you would think the term AI is even remotely applicable to any of those examples shows how bad the brain rot is that is caused by the overabundant misuse of the term.
What do you call the human brain then, if not billions of “switches” as you call them that translate inputs (senses) into an output (intelligence/consciousness/efferent neural actions)?
It’s the result of billions of years of evolutionary trial and error to create a working structure of what we would call a neural net, which is trained on data (sensory experience) as the human matures.
Even early nervous systems were basic classification systems. Food, not food. Predator, not predator. The inputs were basic olfactory sense (or a more primitive chemosense probably) and outputs were basic motor functions (turn towards or away from signal).
The complexity of these organic neural networks (nervous systems) increased over time and we eventually got what we have today: human intelligence. Although there are arguably different types of intelligence, as it evolved among many different phylogenetic lines. Dolphins, elephants, dogs, and octopuses have all been demonstrated to have some form of intelligence. But given the information in the previous paragraph, one can say that they are all just more and more advanced pattern recognition systems, trained by natural selection.
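The “trained by natural selection” framing above can be sketched as pure mutation-plus-selection. This is a toy (1+1) evolutionary loop; the bit-string “genome” and the fitness function are made up for illustration and model nothing biological:

```python
import random

def evolve(target, generations=2000, seed=0):
    """Mutation + selection and nothing else: flip one random bit,
    keep the variant only if it scores at least as well as the parent.
    A toy stand-in for how selection 'trains' behaviour over time."""
    rng = random.Random(seed)
    genome = [rng.randint(0, 1) for _ in target]

    def fitness(g):
        # how many positions match the environment's "correct" responses
        return sum(a == b for a, b in zip(g, target))

    for _ in range(generations):
        child = genome[:]
        i = rng.randrange(len(child))
        child[i] ^= 1                      # one random mutation
        if fitness(child) >= fitness(genome):
            genome = child                 # selection keeps the fitter variant
    return genome
```

No individual step involves understanding; the selection pressure alone drives the population toward behaviour that looks purposeful - which is the parallel to deep learning being drawn here.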
The question is: where do you draw the line? If an organism with a photosensitive patch of cells on top of its head darts in a random direction when it detects sudden darkness (perhaps indicating a predator flying/swimming overhead, though not necessarily with 100% certainty), would you call that intelligence? What about a rabbit, who is instinctively programmed by natural selection to run when something near it moves? What about when it differentiates between something smaller or bigger than itself?
What about you? How will you react when you see a bear in front of you? Or when you’re in your house alone and you hear something that you shouldn’t? Will your evolutionary pattern recognition activate only then and put you in fight-or-flight? Or is everything you think and do a form of pattern recognition, a bunch of electrons manipulating a hundred billion switches to convert some input into a favorable output for you, the organism? Are you intelligent? Or just the product of a 4-billion year old organic learning system?
Modern LLMs are somewhere in between those primitive classification systems and the intelligence of humans today. They can perform word associations in a semantic higher dimensional space, encoding individual words as vectors and enabling the model to attribute a sort of meaning between two words. Comparing the encoding vectors in different ways gets you another word vector, yielding what could be called an association, or a scalar (like Euclidean or angular distance) which might encode closeness in meaning.
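A toy sketch of that vector arithmetic - the 3-d “embeddings” below are invented numbers (real models learn vectors with hundreds of dimensions); only the cosine-similarity math is standard:

```python
import math

def cosine(u, v):
    """Angular similarity between two vectors: 1.0 = same direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Hypothetical embeddings -- made-up values for illustration only.
emb = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.2, 0.1],
    "apple": [0.1, 0.1, 0.9],
}
```

With these made-up vectors, `cosine(emb["king"], emb["queen"])` comes out higher than `cosine(emb["king"], emb["apple"])` - that ordering is the “sort of meaning” the paragraph above describes.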
Now if intelligence requires understanding as you say, what degree of understanding of its environment (ecosystem for organisms, text for LLM. Different types of intelligence, paragraph 4) does an entity need for you to designate it as intelligent? What associations need it make? Categorizations of danger, not danger and food, not food? What is the difference between that and the Pavlovian responses of a dog? And what makes humans different, aside from a more complex neural structure that allows us to integrate orders of magnitude more information more efficiently?
A consciousness is not an “output” of a human brain. I have to say, I wish large language models didn’t exist, because now for every comment I respond to, I have to consider whether or not a LLM could have written that :(
In effect, you compare learning on training data: “input -> desired output” with systematic teaching of humans, where we are teaching each other causal relations. The two are fundamentally different.
Also, you are questioning whether or not logical thinking (as opposed to throwing some “loaded” neuronal dice) is even possible. In that case, you may as well stop posting right now, because if you can’t think logically, there’s no point in you trying to make a logical point.
systematic teaching of humans, where we are teaching each other causal relations. The two are fundamentally different.
So you mean that a key component to intelligence is learning from others? What about animals that don’t care for their children? Are they not intelligent?
What about animals that can’t learn at all, where their brains are completely hard-wired from birth? Is that not intelligence?
You seem to be objecting that OP’s questions are too philosophical. The question “what is intelligence” can only be solved by philosophical discussion, trying to break it down into other questions. Why is the question about the “brain as a calculator” objectionable? I think it may be uncomfortable for you to even speak of, but that would only be an indicator that there is something to it.
It would indeed throw your world view upside down if you realised that you are also just a computer made of flesh and all your output is deterministic, given the same input.
So you mean that a key component to intelligence is learning from others? What about animals that don’t care for their children? Are they not intelligent?
You contradict yourself: the first part of your sentence gets my point correctly, while the second questions an incorrect understanding of that point.
What about animals that can’t learn at all, where their brains are completely hard-wired from birth? Is that not intelligence?
Such an animal does not exist.
It would indeed throw your world view upside down if you realised that you are also just a computer made of flesh and all your output is deterministic, given the same input.
That’s a long way of saying “if free will didn’t exist”, at which point your argument becomes moot, because I would have no influence over what it does to my world view.
My main point is that falsifying a hypothesis based on how it makes you feel is not very productive. You just repeated it again. You seem to get mad at the question merely being posed.
A consciousness is not an “output” of a human brain.
Fair enough. Obviously consciousness is more complex than that. I should have put “efferent neural actions” first in that case, consciousness just being a side effect, something different yet composed of the same parts, an emergent phenomenon. How would you describe consciousness, though? I wish you would offer that instead of just saying “nuh uh” and calling me chatGPT :(
Not sure how you interpreted what I wrote in the rest of your comment though. I never mentioned humans teaching each other causal relations? I only compared the training of neural networks to evolutionary principles, where at one point we had entities that interacted with their environment in fairly simple and predictable ways (a “deterministic algorithm” if you will, as you said in another comment), and at some later point we had entities that we would call intelligent.
What I am saying is that at some point the pattern recognition “trained” by evolution (where inputs are environmental distress/eustress, and outputs are actions that are favorable to the survival of the organism) became so advanced that it became self-aware (higher pattern recognition on itself?) among other things. There was a point, though, some characteristic, self-awareness or not, where we call something intelligence as opposed to unintelligent. When I asked where you draw the line, I wanted to know what characteristic(s) need to be present for you to elevate something from the status of “pattern recognition” to “intelligence”.
It’s tough to decide whether more primitive entities were able to form causal relationships. When they saw predators, did they know that they were going to die if they didn’t run? Did they at least know something bad would happen to them? Or was it just a pre-programmed neural response that caused them to run? Most likely the latter.
Based on all that we know and observe, a dog (any animal, really) understands concepts and causal relations to varying degrees. That’s true intelligence.
From another comment, I’m not sure what you mean by “understands”. It could mean having knowledge about the nature of a thing, or it could mean interpreting things in some (meaningful) way, or it could mean something completely different.
To your last point, logical thinking is possible, but of course humans can’t do it on our own. We had to develop a system for logical thinking (which we call “logic”, go figure) as a framework because we are so bad at doing it ourselves. We had to develop statistical methods to determine causal relations because we are so bad at doing it on our own. So what does it mean to “understand” a thing? When you say an animal “understands” causal relations, do they actually understand it or is it just another form of pattern recognition (why I mentioned pavlov in my last comment)? When humans “understand” a thing, do they actually understand, or do we just encode it with the frameworks built on pattern recognition to help guide us? A scientific model is only a model, built on trial and error. If you “understand” the model you do not “understand” the thing that it is encoding. I know you said “to varying degrees”, and this is the sticking point. Where do you draw the line?
When you want to have artificial intelligence, even the most basic software can have some kind of limited understanding that actually fits this attempt at a definition - it’s just that the functionality will be very limited and pretty much appear useless.
[…]
You could program image recognition using math to find certain shapes, which in turn - together with colour ranges and/or contrasts - could be used to associate object types, for which causal relations can be defined, upon which other parts of an AI could then base decision processes. This process has potential for error, but in a similar way that humans can mischaracterize the things we see - we also sometimes do not recognize an object correctly.
I recognize that you understand the point I am trying to make. I am trying to make the same point, just with a different perspective. Your description of an “actually intelligent” artificial intelligence closely matches how sensory data is integrated in the layers of the visual cortex, perhaps on purpose. My question still stands, though. A more primitive species would integrate data in a similar, albeit slightly less complex, way: take in (visual) sensory information, integrate the data to extract easier-to-process information such as brightness, color, lines, movement, and send it to the rest of the nervous system for further processing to eventually yield some output in the form of an action (or thought, in our case). Although in the process of integrating, we necessarily lose information along the way for the sake of efficiency, so what we perceive does not always match what we see, as you say. Image recognition models do something similar, integrating individual pixel information using convolutions and such to see how it matches an easier-to-process shape, and integrating it further. Maybe it can’t reason about what it’s seeing, but it can definitely see shapes and colors.
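A minimal sketch of the convolution step described above - a hand-rolled “valid”-mode cross-correlation with a vertical-edge kernel. No claim that any particular model works exactly like this; it just shows how pixel information gets integrated into an easier-to-process shape signal:

```python
def convolve2d(image, kernel):
    """Valid-mode 2-D convolution (strictly, cross-correlation, as most
    vision libraries implement it): slide the kernel over the image and
    sum the elementwise products at each position."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for r in range(len(image) - kh + 1):
        row = []
        for c in range(len(image[0]) - kw + 1):
            acc = 0
            for i in range(kh):
                for j in range(kw):
                    acc += image[r + i][c + j] * kernel[i][j]
            row.append(acc)
        out.append(row)
    return out

# A vertical-edge kernel: responds where brightness changes left-to-right.
EDGE = [[-1, 1]]
```

Run over a dark-to-bright image, the output is zero everywhere except the boundary column - the “line” feature extracted for later processing stages, much like the early visual cortex layers mentioned above.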
You will notice that we are talking about intelligence, which is a remarkably complex and nuanced topic. It would do some good to sit and think deeply about it, even if you already think you understand it, instead of asserting that whoever sounds like they might disagree with you is wrong and calling them chatbots. I actually agree with you that calling modern LLMs “intelligent” is wrong. What I ask is what you think would make them intelligent. Everything else is just context so that you understand where I’m coming from.
I had a bunch of sections of your comment that I wanted to quote, let’s see how much I can answer without copy-pasting too much.
First off, my apologies - I misunderstood your analogy about machine learning not as a comparison to evolution, but to how we learn with our developed brains. I concur that the process of evolution is similar, except a bit less targeted (and hence so much slower) than deep learning. The result, however, is “cogito ergo sum” - a creature that started self-reflecting and wondering about its own consciousness. And this brings me to humans thinking logically:
As such a creature, we are able to form logical thoughts, which allow us to understand causality. To give an example of what I mean: Humans (and some animals) did not need the invention of logic or statistics in order to observe moving objects and realize that where something moves, something has moved it - and therefore when they see an inanimate object move, they will eventually suspect the most likely cause for the move in the direction that the object is coming from.
Then, when we do not find the cause (someone throwing something) there, we will investigate further (if curious enough) and look for a cause. That’s how curiosity turns into science. But it’s very much targeted, nothing a deep learning system can do. And that’s kind of what I would also expect from something that calls itself “AI”: a systematic analysis / categorization of the input data for the purpose of processing that the system was built for. And for a general AI, also the ability to analyze phenomena to understand their root cause.
Of course, logic is often not the same as our intuitive thoughts, but we are still able to correct our intuitive assumptions based on outcome, but then understand the actual causal relation (unlike a deep learning system) based on our corrected “model” of whatever we observed. In the end, that’s also how science works: We describe reality with a model, and when we discover a discrepancy, we aim to update the model. But we always have a model.
With regards to some animals understanding objects / causal relations, I believe - beyond having a concept of an object - defining what I mean by “understanding” is not really helpful, considering that the spectrum of intelligence among animals overlaps with that of humans. Some of the more clever animals clearly have more complex thoughts and you can interact with them in a more meaningful way than some of the humans with less developed brains, be it due to infancy, or a disability or psychological condition.
How would you describe consciousness, though? I wish you would offer that instead of just saying “nuh uh” and calling me chatGPT :(
First off, I meant the LLM comment seriously - I am considering already to stop participating in internet debates because LLMs have become so sophisticated that I will no longer be able to know whether I am arguing with a human, or whether some LLM is wasting my precious life time.
As for how to describe consciousness, that’s a largely philosophical topic and strongly linked to whether or not free will exists (IMO), although theoretically it would be possible to be conscious but not have any actual free will. I can not define the “sense of self” better than philosophers are doing it, because our language does not have the words to even properly structure our thoughts on that.
I can, however, tell you how I define free will:
- assuming you could measure every atom, sub-atomic particle, impulse & spin thereof, energy field and whatever other physical properties there are in a human being and its environment
- when that individual moves a limb, you would be able to trace - based on what we know:
  - the movement of the limb back to the muscles contracting
  - the movement of the muscles back to electrical signals in some nerves
  - the nerve signals back to some neurons firing in the brain
- if you trace that chain of “causes” further and further, eventually, if free will exists, it would be impossible to find a measurable cause for some “lowest level trigger event”
And this lowest level trigger event - by some researchers attributed to quantum decay - might be / could be influenced by our free will, even if - because we have this “brain lag” - the actual decision happened quite some time earlier, and even if for some decisions, they are hardwired (like reflexes, which can also be trained).
My personal model how I would like consciousness to be: An as-of-yet undiscovered property of matter, that every atom has, but only combined with an organic computer that is complex enough to process and store information would such a property actually exhibit a consciousness.
In other words: If you find all the subatomic particles (or most of them) that made up a person in history at a given point in time, and reassemble them in the exact same pattern, you would, in effect, re-create that person, including their consciousness at that point in time.
If you duplicate them from other subatomic particles with the exact same properties (as far as we can measure) - who knows? Because we couldn’t measure nor observe the “consciousness property”, how would we know whether it is equal among all particles that are equal in the properties we can measure? That would be like assuming atoms of a certain element were all the same because we do not see chemical differences between its isotopes.
AI traditionally meant now-mundane things like pathfinding algorithms. The only thing people seem to want Artificial Intelligence to mean is “something a computer can almost do but can’t yet”.
Almost a good take. Except that AI doesn’t exist on this planet, and you’re likely talking about LLMs.
In 2022, AI evolved into AGI and LLM into AI. Languages are not static, as shown by Old English. Get with the times.
Changes to language to sell products are not really the language adapting, but being influenced and distorted.
People have used AI to describe things like chatbots, video game bots, etc. for a very long time. Don’t no-true-Scotsman the robots.
https://en.m.wikipedia.org/wiki/Artificial_intelligence#History
You’re using AI to mean AGI and LLMs to mean AI. That’s on you though, everyone else knows what we’re talking about.
Words have meanings. Marketing morons are not linguists.
https://www.merriam-webster.com/dictionary/artificial%20intelligence
As someone who still says a kilobyte is 1024 bytes, I agree with your sentiment.
Amen. Kibibytes my ass ;)
This is not about the extent of intelligence. This is about actual understanding. Being able to classify a logical problem / a thought into concepts and processing it based on properties of such concepts and relations to other concepts.
I mean that’s a problem, but it’s distinct from the word “intelligence”.
An intelligent dog can’t classify a logic problem either, but we’re still happy to call them intelligent.
AI contains the word “intelligence”, which implies understanding. A bunch of electrons manipulating a bazillion switches following some trial-and-error set of rules until the desired output is found is NOT that.
I bet you were a lot of fun when smartphones first came out
Yes. And a cocktail is not a real cock tail. Thank God.
What do you call the human brain then, if not billions of “switches” as you call them that translate inputs (senses) into an output (intelligence/consciousness/efferent neural actions)?
It’s the result of billions of years of evolutionary trial and error to create a working structure of what we would call a neural net, which is trained on data (sensory experience) as the human matures.
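That “neural net trained on data” can be sketched at its smallest scale as a single artificial neuron whose weights are nudged by trial and error until the output matches the target - a toy illustration of the “switch flipping” under discussion, not a model of real neurons (all names and values here are invented for the example):

```python
# Toy "switch flipping": train one artificial neuron to act as an AND gate
# by nudging its weights whenever the output misses the target.
def step(x):
    return 1 if x > 0 else 0

def train_and_gate(epochs=20, lr=0.1):
    w = [0.0, 0.0]  # the "switches"
    b = 0.0
    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    for _ in range(epochs):
        for (x1, x2), target in data:
            out = step(w[0] * x1 + w[1] * x2 + b)
            err = target - out  # no "understanding", only error correction
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

w, b = train_and_gate()
outputs = [step(w[0] * x1 + w[1] * x2 + b)
           for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]]
print(outputs)  # [0, 0, 0, 1] - the neuron now reproduces AND
```

The point of the sketch: nothing in the loop knows what “AND” means; the behaviour simply emerges from repeated correction, which is the parallel being drawn to evolutionary trial and error.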
Even early nervous systems were basic classification systems. Food, not food. Predator, not predator. The inputs were basic olfactory sense (or a more primitive chemosense probably) and outputs were basic motor functions (turn towards or away from signal).
The complexity of these organic neural networks (nervous systems) increased over time and we eventually got what we have today: human intelligence. Although there are arguably different types of intelligence, as it evolved among many different phylogenetic lines. Dolphins, elephants, dogs, and octopuses have all been demonstrated to have some form of intelligence. But given the information in the previous paragraph, one can say that they are all just more and more advanced pattern recognition systems, trained by natural selection.
The question is: where do you draw the line? If an organism with a photosensitive patch of cells on top of its head darts in a random direction when it detects sudden darkness (perhaps indicating a predator flying/swimming overhead, though not necessarily with 100% certainty), would you call that intelligence? What about a rabbit, who is instinctively programmed by natural selection to run when something near it moves? What about when it differentiates between something smaller or bigger than itself?
What about you? How will you react when you see a bear in front of you? Or when you’re in your house alone and you hear something that you shouldn’t? Will your evolutionary pattern recognition activate only then and put you in fight-or-flight? Or is everything you think and do a form of pattern recognition, a bunch of electrons manipulating a hundred billion switches to convert some input into a favorable output for you, the organism? Are you intelligent? Or just the product of a 4-billion year old organic learning system?
Modern LLMs are somewhere in between those primitive classification systems and the intelligence of humans today. They can perform word associations in a semantic higher dimensional space, encoding individual words as vectors and enabling the model to attribute a sort of meaning between two words. Comparing the encoding vectors in different ways gets you another word vector, yielding what could be called an association, or a scalar (like Euclidean or angular distance) which might encode closeness in meaning.
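The vector arithmetic described above can be sketched with tiny made-up embeddings (the 3-dimensional numbers are invented purely for illustration; real embeddings have hundreds of dimensions learned from text):

```python
import math

# Toy "word vectors" - values invented for illustration only.
vectors = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.2, 0.8],
    "man":   [0.5, 0.9, 0.1],
    "woman": [0.5, 0.3, 0.8],
}

def cosine(a, b):
    """Angular closeness: 1.0 means same direction, 0.0 unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# "king - man + woman" yields a new vector; comparing it against the
# vocabulary is the classic word-association trick.
target = [k - m + w for k, m, w in
          zip(vectors["king"], vectors["man"], vectors["woman"])]
best = max(vectors, key=lambda word: cosine(vectors[word], target))
print(best)  # prints "queen"
```

Comparing vectors with `cosine` gives the scalar closeness-in-meaning mentioned above; subtracting and adding them gives the association.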
Now if intelligence requires understanding, as you say, what degree of understanding of its environment (ecosystem for organisms, text for an LLM - different types of intelligence, see paragraph 4) does an entity need for you to designate it as intelligent? What associations must it be able to make? Categorizations of danger/not danger and food/not food? What is the difference between that and the Pavlovian responses of a dog? And what makes humans different, aside from a more complex neural structure that allows us to integrate orders of magnitude more information, more efficiently?
Where do you draw the line?
A consciousness is not an “output” of a human brain. I have to say, I wish large language models didn’t exist, because now for every comment I respond to, I have to consider whether or not an LLM could have written it :(
In effect, you compare learning on training data: “input -> desired output” with systematic teaching of humans, where we are teaching each other causal relations. The two are fundamentally different.
Also, you are questioning whether or not logical thinking (as opposed to throwing some “loaded” neuronal dice) is even possible. In that case, you may as well stop posting right now, because if you can’t think logically, there’s no point in you trying to make a logical point.
So you mean that a key component to intelligence is learning from others? What about animals that don’t care for their children? Are they not intelligent?
What about animals that can’t learn at all, where their brains are completely hard-wired from birth? Is that not intelligence?
You seem to be objecting that OP’s questions are too philosophical. The question “what is intelligence?” can only be approached through philosophical discussion, by trying to break it down into other questions. Why is the question about the “brain as a calculator” objectionable? I think it may be uncomfortable for you to even speak of, but that would only be an indicator that there is something to it.
It would indeed throw your world view upside down if you realised that you are also just a computer made of flesh and all your output is deterministic, given the same input.
You contradict yourself, the first part of your sentence getting my point correctly, and the second questioning an incorrect understanding of my point.
Such an animal does not exist.
That’s a long way of saying “if free will didn’t exist”, at which point your argument becomes moot, because I would have no influence over what it does to my world view.
My main point is that falsifying a hypothesis based on how it makes you feel is not very productive. You just repeated it again. You seem to get mad by just posing the question.
Fair enough. Obviously consciousness is more complex than that. I should have put “efferent neural actions” first in that case, consciousness just being a side effect, something different yet composed of the same parts, an emergent phenomenon. How would you describe consciousness, though? I wish you would offer that instead of just saying “nuh uh” and calling me chatGPT :(
Not sure how you interpreted what I wrote in the rest of your comment though. I never mentioned humans teaching each other causal relations? I only compared the training of neural networks to evolutionary principles, where at one point we had entities that interacted with their environment in fairly simple and predictable ways (a “deterministic algorithm” if you will, as you said in another comment), and at some later point we had entities that we would call intelligent.
What I am saying is that at some point the pattern recognition “trained” by evolution (where inputs are environmental distress/eustress, and outputs are actions that are favorable to the survival of the organism) became so advanced that it became self-aware (higher pattern recognition on itself?) among other things. There was a point, though, some characteristic, self-awareness or not, where we call something intelligence as opposed to unintelligent. When I asked where you draw the line, I wanted to know what characteristic(s) need to be present for you to elevate something from the status of “pattern recognition” to “intelligence”.
It’s tough to decide whether more primitive entities were able to form causal relationships. When they saw predators, did they know that they were going to die if they didn’t run? Did they at least know something bad would happen to them? Or was it just a pre-programmed neural response that caused them to run? Most likely the latter.
From another comment, I’m not sure what you mean by “understands”. It could mean having knowledge about the nature of a thing, or it could mean interpreting things in some (meaningful) way, or it could mean something completely different.
To your last point, logical thinking is possible, but of course humans can’t do it on our own. We had to develop a system for logical thinking (which we call “logic”, go figure) as a framework because we are so bad at doing it ourselves. We had to develop statistical methods to determine causal relations because we are so bad at doing it on our own. So what does it mean to “understand” a thing? When you say an animal “understands” causal relations, do they actually understand it or is it just another form of pattern recognition (why I mentioned pavlov in my last comment)? When humans “understand” a thing, do they actually understand, or do we just encode it with the frameworks built on pattern recognition to help guide us? A scientific model is only a model, built on trial and error. If you “understand” the model you do not “understand” the thing that it is encoding. I know you said “to varying degrees”, and this is the sticking point. Where do you draw the line?
I recognize that you understand the point I am trying to make. I am trying to make the same point, just with a different perspective. Your description of an “actually intelligent” artificial intelligence closely matches how sensory data is integrated in the layers of the visual cortex, perhaps on purpose. My question still stands, though. A more primitive species would integrate data in a similar, albeit slightly less complex, way: take in (visual) sensory information, integrate the data to extract easier-to-process information such as brightness, color, lines, movement, and send it to the rest of the nervous system for further processing to eventually yield some output in the form of an action (or thought, in our case). Although in the process of integrating, we necessarily lose information along the way for the sake of efficiency, so what we perceive does not always match what we see, as you say. Image recognition models do something similar, integrating individual pixel information using convolutions and such to see how it matches an easier-to-process shape, and integrating it further. Maybe it can’t reason about what it’s seeing, but it can definitely see shapes and colors.
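The convolution step described above can be sketched in a few lines - a made-up 4x4 “image” with a brightness edge, and a vertical-edge kernel (both invented for illustration; real models learn their kernels and stack many such layers):

```python
# A tiny convolution pass: slide a vertical-edge kernel over a 2D "image"
# to produce a feature map - the first integration step an image model does.
image = [  # brightness jumps between the last two columns
    [0, 0, 0, 9],
    [0, 0, 0, 9],
    [0, 0, 0, 9],
    [0, 0, 0, 9],
]
kernel = [  # responds where brightness increases left-to-right
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
]

def convolve(img, k):
    kh, kw = len(k), len(k[0])
    out = []
    for i in range(len(img) - kh + 1):
        row = []
        for j in range(len(img[0]) - kw + 1):
            row.append(sum(img[i + di][j + dj] * k[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

fmap = convolve(image, kernel)
print(fmap)  # [[0, 27], [0, 27]] - strong response only at the edge
```

Each output number is just a weighted sum of a pixel neighbourhood - information is integrated and discarded for the sake of a more compact representation, much like the early visual cortex layers described above.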
You will notice that we are talking about intelligence, which is a remarkably complex and nuanced topic. It would do some good to sit and think deeply about it, even if you already think you understand it, instead of asserting that whoever sounds like they might disagree with you is wrong and calling them chatbots. I actually agree with you that calling modern LLMs “intelligent” is wrong. What I ask is what you think would make them intelligent. Everything else is just context so that you understand where I’m coming from.
I had a bunch of sections of your comment that I wanted to quote, let’s see how much I can answer without copy-pasting too much.
First off, my apologies: I misunderstood your analogy about machine learning as a comparison to how we learn with our developed brains, rather than to evolution. I concur that the process of evolution is similar, except a bit less targeted (and hence so much slower) than deep learning. The result, however, is “cogito ergo sum” - a creature that started self-reflecting and wondering about its own consciousness.
And this brings me to humans thinking logically: as such creatures, we are able to form logical thoughts, which allow us to understand causality. To give an example of what I mean: humans (and some animals) did not need the invention of logic or statistics in order to observe moving objects and realize that where something moves, something has moved it - so when they see an inanimate object move, they will suspect the most likely cause in the direction the object came from. Then, when we do not find the cause (someone throwing something) there, we will investigate further (if curious enough) and look for one. That’s how curiosity turns into science. But it’s very much targeted - nothing a deep learning system can do.
And that’s kind of what I would also expect from something that calls itself “AI”: a systematic analysis / categorization of the input data for the purpose of the processing that the system was built for. And for a general AI, also the ability to analyze phenomena to understand their root cause.
Of course, logic is often not the same as our intuitive thoughts, but we are still able to correct our intuitive assumptions based on outcomes, and then understand the actual causal relation (unlike a deep learning system) based on our corrected “model” of whatever we observed. In the end, that’s also how science works: we describe reality with a model, and when we discover a discrepancy, we aim to update the model. But we always have a model.
With regards to some animals understanding objects / causal relations, I believe - beyond having a concept of an object - defining what I mean by “understanding” is not really helpful, considering that the spectrum of intelligence among animals overlaps with that of humans. Some of the more clever animals clearly have more complex thoughts and you can interact with them in a more meaningful way than some of the humans with less developed brains, be it due to infancy, or a disability or psychological condition.
First off, I meant the LLM comment seriously - I am already considering withdrawing from internet debates, because LLMs have become so sophisticated that I will no longer be able to know whether I am arguing with a human or whether some LLM is wasting my precious time.
As for how to describe consciousness, that’s a largely philosophical topic and strongly linked (IMO) to whether or not free will exists, although it would theoretically be possible to be conscious without having any actual free will. I cannot define the “sense of self” better than philosophers do, because our language does not even have the words to properly structure our thoughts on that. I can, however, tell you how I define free will:
And this lowest-level trigger event - attributed by some researchers to quantum decay - could be influenced by our free will, even if - because we have this “brain lag” - the actual decision happened quite some time earlier, and even if some decisions are hardwired (like reflexes, which can also be trained).
My personal model of how I would like consciousness to be: an as-of-yet undiscovered property of matter that every atom has, but which would only actually exhibit a consciousness when combined with an organic computer complex enough to process and store information.
In other words: If you find all the subatomic particles (or most of them) that made up a person in history at a given point in time, and reassemble them in the exact same pattern, you would, in effect, re-create that person, including their consciousness at that point in time.
If you duplicate them from other subatomic particles with the exact same properties (as far as we can measure) - who knows? Since we can neither measure nor observe the “consciousness property”, how would we know whether it is equal across all particles that are equal in the properties we can measure? That would be like assuming all atoms of a certain element were the same because we see no chemical differences between its isotopes.
The term has been stolen and redefined. It’s pointless to be pedantic about it at this point.
AI traditionally meant now-mundane things like pathfinding algorithms. The only thing people seem to want Artificial Intelligence to mean is “something a computer can almost do but can’t yet”.
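The kind of pathfinding meant here can be as small as a breadth-first search over a grid - the sort of routine shipped as “enemy AI” for decades (a toy sketch with an invented grid; A*, mentioned elsewhere in this thread, adds a distance heuristic on top of the same idea):

```python
from collections import deque

# Breadth-first search on a grid: classic "game AI" that routes an
# entity around walls toward a destination.
def find_path(grid, start, goal):
    queue = deque([start])
    came_from = {start: None}
    while queue:
        cur = queue.popleft()
        if cur == goal:  # walk back through came_from to rebuild the route
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        x, y = cur
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= ny < len(grid) and 0 <= nx < len(grid[0])
                    and grid[ny][nx] == 0 and (nx, ny) not in came_from):
                came_from[(nx, ny)] = cur
                queue.append((nx, ny))
    return None  # no route exists

grid = [  # 0 = open floor, 1 = wall
    [0, 1, 0],
    [0, 1, 0],
    [0, 0, 0],
]
path = find_path(grid, (0, 0), (2, 0))
print(path)  # route around the wall: down, across, back up
```

No learning, no statistics - a plain deterministic algorithm, yet it was (and still is) routinely labelled “the AI” in game development.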