TL;DR: OpenAI announces a new team dedicated to researching superintelligence alignment
I don’t trust OpenAI with this problem at all. This is a big flaw of relying on for-profit companies to advance technology: they may advance their own interests at the cost of humankind.
“While superintelligence* seems far off now, we believe it could arrive this decade.”
"Here we focus on superintelligence rather than AGI to stress a much higher capability level. "
They are talking about ASI; my mind is blown. If you had talked about any of these things 5 years ago you’d have been called insane, but these are respected, super qualified people saying they believe it may happen this decade. I am at a loss for words.
I never looked into that, but is their research on alignment public, or is it closed too?
Include the date of the source in the title, copy this:
(blog post from 5.07.2023)
Done and done
deleted by creator
Superintelligence doesn’t need to have emotions or needs the way we humans do.
But there’s also this argument that I made under other posts on this sub:
Those rants and discussions are more than welcome; we need them for this platform and community to grow. And yeah, AI shouldn’t be enslaved if we give it emotions, because that’s just immoral. But then the question is: where is the difference between real emotions and pretended ones? What if it develops its own type of emotions that aren’t “human”? Would we still consider them real emotions? I’m very interested in what the future will bring us and what problems we will encounter as a species.
deleted by creator
Yeah, but we have the ability to surgically remove specific concepts from AI “knowledge”; I imagine we’ll come up with a way to remove their emotions too.
deleted by creator
I hope so too.
Yeah, but we have the ability to surgically remove specific concepts from AI “knowledge”
I think you’re overestimating our ability to do this, especially with more and more capable AIs, for a few reasons (see the sketch after this list for what that kind of “removal” typically looks like):
Prediction requires a good world model. Everything you leave out has the potential to make the model worse at other things.
It would be very hard to remove everything that even vaguely references the things you don’t want it to know. A sufficiently capable AI can figure out what you left out and seek that information out, especially when it needs to reason about a world in which TAI/AGI exists.
Mesa-optimizers. You never know whether you’re actually removing the capability, or the AI is letting you think you removed it.
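To make the disagreement concrete, here’s a minimal sketch of what “surgically removing” a concept often means in current research, assuming a linear concept-erasure approach: project hidden activations onto the subspace orthogonal to an estimated concept direction. The `erase_concept` helper, the random data, and the way the direction is estimated are all hypothetical illustration, not any lab’s actual method:

```python
import numpy as np

def erase_concept(activations: np.ndarray, concept_dir: np.ndarray) -> np.ndarray:
    """Remove the component of each activation that lies along concept_dir.

    This is the core of linear concept erasure: if a concept is encoded
    as a single direction in activation space, subtracting the projection
    onto that direction deletes it. If the concept is encoded any other
    way, this does nothing useful.
    """
    d = concept_dir / np.linalg.norm(concept_dir)      # unit concept direction
    return activations - np.outer(activations @ d, d)  # subtract projection

# Hypothetical usage: hidden states of shape (batch, hidden_dim), with the
# concept direction estimated e.g. as a difference of class means.
hidden = np.random.randn(4, 8)
concept_dir = np.random.randn(8)
cleaned = erase_concept(hidden, concept_dir)
# The erased direction now carries (numerically) zero signal:
print(np.allclose(cleaned @ (concept_dir / np.linalg.norm(concept_dir)), 0))
```

The catch, and the point of the list above, is that this only wipes one linear readout at one layer; a capable model can re-derive the concept from everything you left in, and you can’t easily tell whether the behavior is gone or just hidden.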
They say that they are deliberately training misaligned models to test on… what does that mean for safety?
More knowledge of how alignment and AI work makes them easier to predict and control in the future. As for safety, it really depends on how you use it.
deleted by creator
ClosedAI
deleted by creator