Published: October 23, 2025
American political scientist Allaine Cerwonka, director of international work and partnerships at the Alan Turing Institute, visited Brazil in January to explore potential collaborations. The Turing Institute, a nonprofit organization funded by the British government and private institutions, is the UK’s national institute for artificial intelligence (AI) and data science. It leads and carries out research in partnership with universities, government agencies, and businesses, focusing on areas such as security, the environment, the economy, and climate change.
Cerwonka has a PhD in political science from the School of Social Sciences at the University of California, Irvine. She previously served as dean of the School of Social Sciences at the University of East London and founding director of the Science Studies Program at Central European University. In an interview via email and video call, she shared insights on her visit to Brazil.
What was the reason behind your visit to Brazil?
International collaborations and knowledge exchange are fundamental to the UK and our institute. Brazil is particularly interesting to us because its size and population make it an important country for any engagement with Latin America. It is at the center of important international discussions on the potential use of AI to address urgent global challenges, such as those discussed at the G20 in Rio de Janeiro in November 2024 and the upcoming COP30 in Belém, Pará. These meetings reflect Brazil’s leadership in issues like renewable energies, climate change, and health. At the Turing Institute, we are developing AI and new technologies to help address these challenges, and we see Brazil as a potential partner in this ambition. The Brazilian government, private sector, and civil society are actively engaged in discussions on AI regulation and responsible development.
Which institutions did you talk to?
We visited institutions and government agencies in São Paulo, Brasília, and Rio de Janeiro. We were impressed by the work of the MCTI [Ministry of Science, Technology, and Innovation], particularly the Brazilian Artificial Intelligence Plan, which addresses data privacy, ethical and responsible AI development, and AI regulation designed to protect citizens without stifling innovation. We discussed efforts to improve public services through digital government. We also had fruitful discussions with the LNCC [National Laboratory for Scientific Computing] about potential areas of collaboration and researcher exchange. We invited the Brazilian government to send delegates to the AI Standards Hub Global Summit 2025 in London. Additionally, we had productive discussions with leaders and researchers connected to FAPESP.
What did you discuss?
Going forward, Turing will explore with FAPESP how to connect the new AI Research Centers FAPESP has funded with centers of excellence in the UK within Turing’s network of universities. There is also scope to share the institute’s experiences with an innovation hub that FAPESP is setting up with the government of São Paulo.
What expertise in Brazilian science caught your attention the most?
I was excited about the work in sustainability and the environment. At the Turing Institute, some of our work involves the cybersecurity of wind turbines. A cyberattack in Germany last year disabled thousands of turbines. We cannot rely on renewable energies without considering security issues. Brazil is an important partner in this work due to its strength in renewable energies. Turing will continue discussions with the Port of Açu in Rio de Janeiro about autonomous ships and offshore wind turbines. In the field of AI for the environment, we were impressed by the work of INPO [National Institute for Oceanic Research] on creating a digital twin of the South Atlantic.
Turing has led a program to implement AI and data science in priority areas across the UK. What were the most significant results?
The program, funded by UK Research & Innovation [UKRI], was responsible for around 100 projects addressing the most important areas for the public and the economy. We produced strong work on digital twins, such as a collaboration with Rolls-Royce to increase efficiency in the aerospace industry. An example in the health sector is the SPARRA project [Scottish Patients At Risk of Readmission and Admission], which uses AI modeling to predict the likelihood of cardiac patients returning to hospital. For each area we worked on with the government, we developed white papers to brief members of parliament and relevant sectors of the civil service. We have around 75 in-house researchers who work mainly on issues like security and defense, where much of the work is classified. For less sensitive issues, we engage the best researchers from universities.
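To give a concrete sense of what "AI modeling to predict the likelihood of returning to hospital" involves, the short Python sketch below trains a toy readmission-risk classifier on synthetic data. It is an illustration only, not the actual SPARRA model: the features (age, prior admissions, length of last stay), the simulated outcome, and the choice of logistic regression are all assumptions made for this example.

    # Illustrative sketch only: a toy readmission-risk model in the spirit of
    # SPARRA-style prediction. Features and data are synthetic; this is not
    # the real SPARRA model or its inputs.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    n = 1_000

    # Hypothetical patient features: age, prior emergency admissions,
    # and length of the last hospital stay (in days).
    X = np.column_stack([
        rng.normal(65, 12, n),
        rng.poisson(1.5, n),
        rng.exponential(5, n),
    ])

    # Synthetic outcome: 1 if the patient was readmitted within a year,
    # generated from a simple logistic relationship with the features.
    logits = 0.03 * (X[:, 0] - 65) + 0.6 * X[:, 1] + 0.05 * X[:, 2] - 1.0
    y = rng.binomial(1, 1 / (1 + np.exp(-logits)))

    # Fit a logistic-regression classifier and score held-out patients.
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    # Predicted probability of readmission for each held-out patient.
    risk = model.predict_proba(X_test)[:, 1]
    print("AUC:", round(roc_auc_score(y_test, risk), 3))

A production system of this kind would of course draw on far richer clinical records and undergo careful validation before being used to support decisions.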
What are the biggest challenges today in developing ethical and responsible AI applications?
This is a very important question. Unfortunately, there is no fixed set of rules for producing ethical, responsible AI that professionals can simply memorize to avoid doing harm. At Turing, we believe that potential risks in AI research and output must be managed through continuous reflection throughout the research process. We have produced a handbook titled The Turing Way that addresses these challenges and helps apply the goals and practices of reproducible research to the relatively young field of machine learning. We have also created a protocol and training course for ethical, responsible AI called The Turing Commons. The course was developed to help train a new generation of AI researchers in identifying and building ethical frameworks for AI applications.
How can we balance AI regulation to ensure rights are protected without compromising innovation?
The UK has adopted a 'pro-innovation' approach to AI regulation, described and justified in a government white paper, which seeks a middle ground between the European Union’s AI Act and the approach currently taken by the USA. Recognizing that AI technologies are developing at a rapid pace, the UK government has tasked its existing regulatory bodies with developing standards and regulations within their own sectors. The Turing Institute has been part of this process in collaboration with the National Physical Laboratory and the British Standards Institution, contributing to the development of the AI Standards Hub. The Hub has created an online platform to identify all relevant regulations and standards for a given technology. This is crucial work for giving industry the clarity it needs to innovate with confidence. The UK government is keen to deploy AI and other emerging technologies to increase the efficiency and standards of public services. Naturally, the standard of accountability for government use of AI must be higher than in other sectors.