Parliamentary Event Showcases BU's Research on AI in Media Creation

Published Date: 06/11/2024

Bournemouth University (BU) researchers presented their findings on the use of generative artificial intelligence (AI) in media creation at a House of Lords event, emphasizing the need for responsible and ethical integration of AI in the media industry. 

Researchers from Bournemouth University (BU) have brought their work on the implications of generative artificial intelligence (AI) in media creation to policymakers. A recent Parliamentary event held at the House of Lords showcased the findings and recommendations from their project, 'The Shared Post-Human Imagination: Human-AI Collaboration in Media Creation'. The project, funded by the Arts and Humanities Research Council (AHRC) as part of the Bridging Responsible AI Divides (BRAID) programme, explores the use of AI tools in media production and their impact on creativity, authorship, and ownership.


The project is a collaborative effort led by BU in partnership with the University of Michigan, USA, and Zhejiang University in China. The research highlights how users of generative AI can produce a wide array of media content, including stories, scripts, images, music, and even films, by simply prompting widely available AI models. This ease of use, however, brings about a host of moral, ethical, and legal challenges, particularly in terms of creativity, ownership, and bias.


Despite the growing use of AI in media production, there is a notable lack of guidance, regulation, and best practices for integrating these tools responsibly and ethically. To address this gap, the research team was invited to Parliament to share their insights and proposed legal and policy interventions. The event, held in partnership with think tank Policy Connect and chaired by Lord Tim Clement-Jones, featured a presentation from the BU team.


During the session, the research team outlined seven core principles to guide the responsible use of generative AI in media production: accountability, transparency, collaboration, interdisciplinarity, the use of open datasets, the redress of bias, and informed affirmative consent.


The event also included a discussion on the research project and its findings, focusing on the need for education on responsible AI use and its impact on intellectual property, labor, and accessibility. The issue of authorship and intellectual property rights was identified as crucial for ensuring fair and sustainable use of generative AI that benefits the media industry. Participants also stressed the importance of increasing diversity in AI development and improving access to education on responsible AI use in the media sector.


Dr. Szilvia Ruszev, a Senior Lecturer in Post Production at BU and the project leader, expressed her enthusiasm about the project's impact. 'It's incredibly rewarding to be part of translating research into meaningful, actionable outcomes that can positively impact people’s lives. We are looking forward to developing this research further,' she said.


Bournemouth University (BU) is a leading institution in the UK known for its innovative research and commitment to addressing real-world challenges. The university collaborates with various national and international partners to foster a culture of academic excellence and practical application of knowledge.


The findings from this project not only contribute to the academic discourse but also provide valuable insights for policymakers and industry professionals. By promoting responsible and ethical use of AI, the research aims to enhance the media industry's capabilities while ensuring fair and inclusive practices. 

Frequently Asked Questions (FAQs):

Q: What is the main focus of 'The Shared Post-Human Imagination' project?

A: The project focuses on the use of generative AI tools in media creation and their impact on creativity, authorship, and ownership.


Q: Who are the partners involved in this project?

A: The project is led by Bournemouth University (BU) in partnership with the University of Michigan, USA, and Zhejiang University in China.


Q: What are the seven core principles outlined for the responsible use of generative AI in media production?

A: The seven core principles are accountability, transparency, collaboration, interdisciplinarity, use of open datasets, redress of bias, and obtaining informed affirmative consent.


Q: What are the key ethical and legal issues associated with the use of generative AI in media creation?

A: The key issues include questions around creativity, ownership, bias, intellectual property, and the need for responsible and ethical guidelines.


Q: Who chaired the Parliamentary event and what think tank was it held in partnership with?

A: The event was chaired by Lord Tim Clement-Jones and held in partnership with think tank Policy Connect. 
