Collaborative Research Crucial for AI Safety
Published Date: 10/01/2025
Collaborative research on AI safety is crucial to mitigate the risks posed by advanced AI technologies. Regulatory oversight and pre-market risk assessments are essential to ensure safe deployment.
Re Geoffrey Hinton’s concerns about the perils of artificial intelligence (‘Godfather of AI’ shortens odds of the technology wiping out humanity over next 30 years, 27 December), I believe the risks he describes can best be mitigated through collaborative research on AI safety, with a role for regulators at the table.
Currently, frontier AI is tested after development by 'red teams' who do their best to elicit harmful outcomes.
This approach will never be enough; AI needs to be designed for safety and for evaluation, drawing on the expertise and experience of well-established safety-related industries.
Hinton does not seem to think that the existential threat from AI is being deliberately encoded, so why not enforce its deliberate avoidance? While I don’t subscribe to his view of the level of risk facing humanity, the precautionary principle suggests that we must act now.
In traditional safety-critical domains, the need to build physical systems, such as aircraft, limits the rate at which safety can be impacted.
Frontier AI has no such physical 'rate-limiter' on deployment, and this is where regulation needs to play a role.
Ideally, a risk assessment would take place prior to deployment, but current risk metrics are inadequate: for example, they don’t consider the application sector or the scale of deployment.
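As a purely illustrative sketch of that point about metrics, the snippet below shows how a pre-deployment risk score could weight the factors the letter says current metrics ignore; the sector weights, scale term and example figures are hypothetical assumptions, not an established methodology.

```python
import math

# Hypothetical sketch: a pre-deployment risk score that also accounts for
# application sector and scale of deployment. All weights and names below
# are illustrative assumptions, not an established risk metric.

# Assumed sector weights: higher values for more safety-critical sectors.
SECTOR_WEIGHT = {
    "entertainment": 1.0,
    "education": 2.0,
    "finance": 3.0,
    "healthcare": 4.0,
    "critical_infrastructure": 5.0,
}

def deployment_risk_score(hazard_severity: float,
                          hazard_likelihood: float,
                          sector: str,
                          expected_users: int) -> float:
    """Combine a conventional severity x likelihood estimate with
    sector criticality and deployment scale (log-scaled user count)."""
    base_risk = hazard_severity * hazard_likelihood        # classic risk-matrix term
    sector_factor = SECTOR_WEIGHT.get(sector, 3.0)         # default to mid-range if sector unknown
    scale_factor = math.log10(max(expected_users, 1) + 1)  # grows slowly with user count
    return base_risk * sector_factor * scale_factor

# Example: the same hazard scores very differently depending on where
# and how widely the model is deployed.
small_pilot = deployment_risk_score(0.7, 0.2, "education", expected_users=500)
mass_rollout = deployment_risk_score(0.7, 0.2, "healthcare", expected_users=5_000_000)
print(f"small pilot:  {small_pilot:.2f}")   # ~0.76
print(f"mass rollout: {mass_rollout:.2f}")  # ~3.75
```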
Regulators need the power to 'recall' deployed models (and the big companies that develop them need to build in mechanisms to stop particular uses), as well as to support work on risk assessment that provides leading indicators of risk, not just lagging ones.
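To make the idea of 'mechanisms to stop particular uses' concrete, here is a minimal sketch of a request gate that consults a recall register before a model is invoked; the register, use categories and model names are hypothetical and do not correspond to any real regulatory or vendor interface.

```python
# Hypothetical sketch of a deployment gate that honours a regulator-style
# recall register and per-model restrictions on particular uses.
# The register contents, use categories and model names are illustrative only.

from dataclasses import dataclass, field

@dataclass
class RecallRegister:
    """Toy register of recalled models and blocked uses per model."""
    recalled_models: set[str] = field(default_factory=set)
    blocked_uses: dict[str, set[str]] = field(default_factory=dict)

    def is_permitted(self, model: str, use_case: str) -> bool:
        if model in self.recalled_models:
            return False                      # full recall: no use permitted
        return use_case not in self.blocked_uses.get(model, set())

def serve_request(register: RecallRegister, model: str, use_case: str, prompt: str) -> str:
    """Gate every request on the register before the model is invoked."""
    if not register.is_permitted(model, use_case):
        return f"Refused: {model} is not permitted for '{use_case}'."
    return f"[{model}] would now handle: {prompt}"  # placeholder for real inference

# Example: a regulator blocks one use of a model and fully recalls another.
register = RecallRegister()
register.blocked_uses["frontier-model-a"] = {"medical_triage"}
register.recalled_models.add("frontier-model-b")

print(serve_request(register, "frontier-model-a", "medical_triage", "Assess this patient"))
print(serve_request(register, "frontier-model-a", "summarisation", "Summarise this report"))
print(serve_request(register, "frontier-model-b", "summarisation", "Summarise this report"))
```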
Put another way, the government needs to focus on post-market regulatory controls while supporting research that gives regulators the insight to enforce pre-market controls.
This is challenging, but imperative if Hinton is right about the level of risk facing humanity.
The Institute for Safe Autonomy at the University of York is dedicated to advancing research in this critical area.
Our mission is to ensure that AI systems are designed and deployed to the highest standards of safety and ethics.
By collaborating with industry leaders, policymakers, and other researchers, we aim to create a safer and more responsible AI landscape.
Frequently Asked Questions (FAQs):
Q: What are the current methods used to test AI systems for safety?
A: Currently, AI systems are often tested after development by 'red teams' that try to elicit negative outcomes. However, this approach is not sufficient, and more comprehensive safety measures are needed.
Q: Why is collaborative research important for AI safety?
A: Collaborative research allows for the pooling of expertise and experience from various industries, leading to more robust and effective safety measures for AI systems.
Q: What role do regulators play in ensuring AI safety?
A: Regulators need the power to 'recall' deployed AI models and to enforce pre-market and post-market controls. They also support research that provides leading indicators of risk.
Q: What are the challenges in implementing effective AI safety measures?
A: Challenges include the lack of adequate risk metrics, the need for pre-deployment risk assessment, and the rapid deployment of AI systems without physical 'rate-limiters'.
Q: Why is the precautionary principle important in AI safety?
A: The precautionary principle suggests that we must act now to mitigate potential risks, even if they are not fully understood, to protect against potential existential threats posed by AI.