Published Date : 02/06/2025
The world is progressing at a pace that often outstrips human comprehension, with new discoveries and technological advancements reshaping our daily lives. One of the most significant areas of innovation is artificial intelligence (AI), which has become an integral part of modern society. AI technologies facilitate human activities, enhance work efficiency, and improve decision-making processes across various sectors, including healthcare and education. However, these advancements also bring ethical and legal challenges, such as increased unemployment, the proliferation of manipulative technologies like deepfakes, and the potential threat of uncontrollable AI systems.
The European Union has taken a significant step towards AI regulation by publishing the Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence on 21 April 2021. This initiative, which came into force as the EU AI Act on 1 August 2024, is a pioneering regulatory framework aimed at fostering the secure and transparent advancement of AI technologies. The Act seeks to protect users’ rights and freedoms, ensure ethical AI development, and safeguard personal data. It adopts a risk-based approach, classifying AI systems by risk level and prohibiting applications deemed to pose an unacceptable risk to fundamental rights.
Turkey, while lagging behind in AI regulation, is making strides towards integrating AI into its legal framework. In 2019, Presidential Decree No. 48 established the Directorate of Artificial Intelligence Applications under the Presidency’s Digital Transformation Office. This institution is responsible for developing strategies related to AI applications and supporting the administrative and technical coordination of public institutions and organizations. The Draft Artificial Intelligence Law was presented to the Grand National Assembly of Turkey on 24 June 2024, following the approval of the EU AI Act. The revised National Artificial Intelligence Strategy 2024-2025 Action Plan outlines key objectives, including fostering AI research and improving access to high-quality data and innovation.
However, the Draft Law in Turkey is notably more limited in scope than the EU AI Act. It lacks a comprehensive risk assessment framework and fails to address fundamental rights and freedoms in detail. Aligning Turkey's legal framework with the EU’s approach and adopting a risk-based perspective will help ensure that Turkey’s AI regulations are compatible with those of its major trading partner. Given these differences, expanding the scope of the Draft Law and addressing its gaps in greater detail is essential.
AI is increasingly being used in various areas of the healthcare sector, where it facilitates tasks such as information synthesis, processing patient data, managing complex medical records, and ensuring effective data management. AI improves human performance by supporting clinicians in diagnosing uncommon diseases, minimizing mistakes, and handling complicated treatment interactions. However, the regulation of AI in healthcare presents unique challenges due to the sensitive nature of the data involved. AI applications in healthcare depend on private and sensitive data, and there are risks of covert monitoring, data leaks, and cyberattacks. Governments and researchers must anticipate and mitigate these potential abuses to ensure the safe and ethical use of AI in healthcare.
From a legal perspective, the protection of personal data is crucial, especially in the healthcare sector. According to Article 6(1) of Personal Data Protection Law No. 6698, personal data related to individuals’ health, sexual life, and genetic data is classified as sensitive data. The Turkish Institute for Health Data Research and AI Applications has been established to enhance Turkey’s competitive edge in health data research and AI applications. This institute aims to address the scientific and technological needs to improve the effectiveness of healthcare services and conduct innovative research.
The protection of personal data is both challenging and crucial, especially in the healthcare sector. Healthcare organizations should provide comprehensive training and conduct regular risk assessments to address security vulnerabilities. Using tools like virtual private networks (VPNs), limiting access to certified personnel, and implementing two-factor authentication and role-based access control systems can significantly improve data security and protect against cyberattacks and unauthorized access.
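The role-based access control mentioned above can be illustrated with a minimal sketch. The roles, field names, and permission mapping below are hypothetical examples, not part of any law or standard; a real healthcare deployment would derive them from institutional policy and combine them with authentication measures such as two-factor login.

```python
from enum import Enum, auto

class Role(Enum):
    PHYSICIAN = auto()
    NURSE = auto()
    ADMIN_STAFF = auto()

# Hypothetical mapping of staff roles to the patient-record fields
# they may read; in practice this would come from institutional policy.
PERMISSIONS = {
    Role.PHYSICIAN: {"diagnosis", "medications", "lab_results", "contact_info"},
    Role.NURSE: {"medications", "lab_results", "contact_info"},
    Role.ADMIN_STAFF: {"contact_info"},
}

def authorize(role: Role, field: str) -> bool:
    """Return True only if the given role may access the given field."""
    return field in PERMISSIONS.get(role, set())

def read_field(record: dict, role: Role, field: str):
    """Read a record field, refusing access outside the role's permissions."""
    if not authorize(role, field):
        raise PermissionError(f"{role.name} may not access '{field}'")
    return record[field]
```

Under this sketch, administrative staff can retrieve contact details but any attempt to read a diagnosis raises a `PermissionError`, so sensitive health data stays limited to clinical roles by default.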
Technological advancements continuously build upon what has already been developed and implemented, extending towards unexplored areas. The crucial aspect lies in utilizing technology in ways that benefit human life while establishing appropriate legal frameworks and supporting regulations. As technology continues to outpace legal frameworks, regulatory strategies must evolve over time, either due to their inefficacy or in response to emerging challenges, such as new risks and risk creators. It is preferable to execute the right strategy imperfectly rather than perfecting the wrong one. Lawmaking should be proactive, not reactive, to ensure a safer future.
Q: What is the EU AI Act?
A: The EU AI Act is a regulatory framework that came into force on 1 August 2024. It aims to foster the secure and transparent advancement of AI technologies, protect users’ rights and freedoms, ensure ethical AI development, and safeguard personal data. It adopts a risk-based approach and prohibits AI applications that pose significant threats to fundamental rights.
Q: What is the role of AI in healthcare?
A: AI in healthcare facilitates tasks such as information synthesis, processing patient data, managing complex medical records, and ensuring effective data management. It improves human performance by supporting clinicians in diagnosing uncommon diseases, minimizing mistakes, and handling complicated treatment interactions.
Q: What are the challenges of regulating AI in healthcare?
A: The challenges of regulating AI in healthcare include the sensitive nature of the data involved, risks of covert monitoring, data leaks, and cyberattacks. Governments and researchers must anticipate and mitigate these potential abuses to ensure the safe and ethical use of AI in healthcare.
Q: What is Turkey's approach to AI regulation?
A: Turkey has established the Directorate of Artificial Intelligence Applications and presented the Draft Artificial Intelligence Law to the Grand National Assembly. The revised National Artificial Intelligence Strategy 2024-2025 Action Plan outlines key objectives, including fostering AI research and improving access to high-quality data and innovation. However, the Draft Law is more limited in scope compared to the EU AI Act.
Q: Why is proactive lawmaking important in the context of AI regulation?
A: Proactive lawmaking is crucial in the context of AI regulation to ensure that legal frameworks keep pace with technological advancements. It helps to anticipate and mitigate potential risks and challenges, ensuring that AI is used in ways that benefit human life while protecting fundamental rights and freedoms.