As they improve, we’ll likely trust AI models with more and more responsibility. But if their autonomous decisions end up causing harm, our current legal frameworks may not be up to scratch.
Sounds great in theory, until you realize this is exactly the sort of law that big tech companies can afford to pay out under, and that will also be used to completely kill FOSS AI.
Given the high costs of AI, isn't it reasonable to assume that whoever stands to profit from it is equally liable for its outcomes?
The liability wouldn't fall on development, but on deployment.