Aside from the obvious legal issues, is there another blocker for this kind of situation?
I imagine people would have their AI representatives trained on each individual's personal beliefs and their ideal society.
What could that society look like? Or how could it work? Is there a term for this?
The thing that’s stopping anything like this is that the AI we have today is not intelligent in any sense of the word, despite the marketing and “journalism” hype to the contrary.
ChatGPT is predictive text on steroids.
Type a word on your mobile phone, then keep tapping the next predicted word and you’ll have some sense of what is happening behind the scenes.
The difference between your phone keyboard and ChatGPT? Many billions of dollars and unimaginable amounts of computing power.
It looks real, but there is nothing intelligent about the selection of the next word. It just has much more context to guess the next word and has many more texts to sample from than you or I.
There is no understanding of the text at all, no true or false, right or wrong, none of that.
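To make the phone-keyboard analogy concrete, here is a minimal sketch in Python of a toy bigram “next word” predictor. This is emphatically not how ChatGPT works internally (that is a neural network over billions of parameters), and the corpus and function names here are just made up for illustration, but it shows the same basic loop: look at what came before, pick a likely next word, append it, repeat.

```python
# Toy illustration of "predictive text": a bigram model that always suggests
# the most frequent word seen after the current one in its training text.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most common word seen after `word`, or None if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

# Keep tapping the "suggested" word, like a phone keyboard.
word, sentence = "the", ["the"]
for _ in range(5):
    word = predict_next(word)
    if word is None:
        break
    sentence.append(word)

print(" ".join(sentence))  # e.g. "the cat sat on the cat"
```

The output looks vaguely sentence-like, yet the program has no notion of truth or meaning, only frequency counts. Scale that idea up with a vastly larger corpus and a far more sophisticated statistical model and you get something much closer to ChatGPT, but the “no understanding” point stands.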
AI today is Assumed Intelligence
Arthur C. Clarke said it best:
“Any sufficiently advanced technology is indistinguishable from magic.”
I don’t expect this to be solved in my lifetime, and I believe that the current methods of “intelligence” are too energy-intensive to be scalable.
That’s not to say that machine learning algorithms are useless; there are significant, positive, and productive tools around, ChatGPT and its Large Language Model siblings notwithstanding.
Source: I have 40+ years’ experience in ICT and an understanding of how this works behind the scenes.
To be fair, we’re voting people into office who basically don’t even know what they’re really doing, and they’re voted in by people who don’t know what they want. Even worse, their own thinking contradicts what these people voted for. With AI you can correct course easily, but with human representatives that’s hard to do unless there’s a strong reaction from the voting base.
The point of democracy is that the elected are normal people. They may have expert advisors, but they are not selected for their expertise, like it or not. Bypassing this by adding a layer of obfuscation helps nobody.
With AI you can easily correct? Who would correct the AIs? The people who don’t know what they want? Or some other party that knows even less about what the people want? And how would you personally correct it without making up your own mind about something? And how would society correct the overarching AI, which was probably trained on all the people’s AIs? Who would do this, and with what intentions and biases? It just seems to hide the problems under an AI carpet, creating even more problems.
Well, at least they’re people with some human level of intelligence and intention, rather than a souped-up predictive text generator.
We’ll see “real AI” in our lifetimes, but from the other direction: simulating scanned human brains.
This isn’t plausible yet; we don’t even know enough about the brain to simulate it, even if we had the computing power. Possible within the next 60 years? I guess, but it’s not guaranteed.
What is the evidence of this?