After two insightful panel discussions, it was time to shift the conversation to the legal landscape. As the world witnesses this immense transformation, countries everywhere are busy formulating policies and regulatory frameworks to ensure that AI remains inclusive and democratic in its use. Taking the lead, the European Union has established the first comprehensive law defining AI systems, along with guidelines for safeguarding against their potential vulnerabilities. To shed light on this, we were very pleased to have with us Mr. Michael McNamara, Co-Chair of the European Parliament’s AI Working Group and one of the key architects who made the EU’s AI Act a reality.
Mr. McNamara opened by noting that few European Acts have sparked as much pre-adoption debate as the AI Act, the GDPR aside. He emphasized that, as one of the first comprehensive AI regulations, it sets a global precedent for AI governance. While the EU may seem behind the U.S. and China in AI adoption, this actually creates the ideal environment for structured implementation. And though some argue the Act stifles innovation, its true intent is to foster AI development while ensuring public trust and societal protection.
Key Provisions of the EU’s AI Act
Mr. McNamara outlined how the AI Act categorizes AI systems into four tiers based on risk, as sketched in the example after this list:
UNACCEPTABLE RISK
AI applications deemed too harmful are banned outright. This includes social scoring systems, manipulative AI practices, and real-time biometric surveillance, such as facial recognition databases scraped from the internet.
HIGH RISK
When AI is used in critical sectors such as healthcare, employment, and law enforcement, it must meet strict requirements, including transparency obligations, risk assessments, and human oversight.
LIMITED RISK
Light transparency measures apply to limited-risk AI uses such as chatbots and recommendation algorithms.
MINIMAL OR NO RISK
There are no regulatory obligations for minimal-risk use cases such as spam filters and video game AI.
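To make the tiering concrete, here is a minimal Python sketch that models the four categories as a simple lookup, using only the example use cases mentioned above. It is an illustrative mental model, not legal guidance: the tier names and obligation summaries paraphrase the list, and the mapping of specific use cases to tiers is an assumption for demonstration.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers of the EU AI Act, as outlined in the talk."""
    UNACCEPTABLE = "banned outright"
    HIGH = "strict obligations: transparency, risk assessments, human oversight"
    LIMITED = "light transparency measures"
    MINIMAL = "no regulatory obligations"

# Hypothetical triage table built from the examples above; actual
# classification depends on the Act's annexes and the deployment context.
EXAMPLE_USE_CASES = {
    "social scoring system": RiskTier.UNACCEPTABLE,
    "facial recognition database scraped from the internet": RiskTier.UNACCEPTABLE,
    "CV-screening tool for employment": RiskTier.HIGH,
    "diagnostic support tool in healthcare": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
    "video game AI": RiskTier.MINIMAL,
}

def obligations_for(use_case: str) -> str:
    """Report the illustrative tier and obligations for a known use case."""
    tier = EXAMPLE_USE_CASES[use_case]
    return f"{use_case}: {tier.name} -> {tier.value}"

if __name__ == "__main__":
    for case in EXAMPLE_USE_CASES:
        print(obligations_for(case))
```

In practice, of course, whether a given system counts as high risk turns on the Act’s annexes and the context of deployment, nuances no lookup table can capture.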
It was interesting to note that, in developing the AI Act, the EU has placed the primary onus for compliance on developers and providers, irrespective of whether they are based in the EU or not.
The EU understands that an Act holds value only when enforced. To implement the AI Act, it has established an AI Office within the European Commission to oversee compliance, regulate general-purpose AI, and promote international cooperation. A European Artificial Intelligence Board was also created to facilitate coordination among regulators. As Mr. McNamara highlighted, the Act aims to encourage innovation, supported by regulatory sandboxes that allow providers to test AI models in a controlled, legally compliant environment.
Mr. McNamara noted that the EU’s AI Act aligns with global AI ethics frameworks such as UNESCO’s AI Ethics Recommendations and the OECD AI Principles, creating a robust regulation that fosters AI growth while safeguarding societal interests and privacy. He emphasized the need for such stringent frameworks given AI’s transformative impact on decision-making, governance, and power structures, comparing its significance to that of the Industrial Revolution.
As India debates its own approach to AI regulation, Mr. McNamara is no stranger to the challenges involved, having himself faced the issue of striking a balance between AI development and copyright protection.
Among upcoming laws and regulations relevant to this debate, Mr. McNamara spoke about the EU’s General-Purpose AI Code of Practice, which establishes specific guidelines for general-purpose AI models. While it is still a work in progress, it is important to note that the Code is not legally binding; rather, it provides a strong incentive for AI developers to adhere to its guidelines. These efforts are being led by the AI Office, which has assembled a team of independent academics and industry experts focused on transparency and copyright, risk mitigation and evaluation, and governance. Additionally, over a thousand stakeholders from civil society and industry have contributed to shaping each version of the Code.
He also shared a word of caution about the rise of general-purpose AI models. For generations, he illustrated, people have relied on particular media outlets or news sources, based on a judgement of those platforms and channels built up over months and years. General-purpose AI could undermine this trust, because content creators, particularly those who hold copyrights, are currently unable to determine what material was used to train AI models. AI developers claim to rely on the data mining exemptions in the EU’s 2019 copyright legislation, which allow information to be used for scientific research.
Content creators argue that this loophole was never meant to apply to AI training data, and they are currently unable to track, or seek compensation for, how their content is used to train AI models.
This issue is being discussed globally, and Mr. McNamara is hopeful that future legal rulings may help shape better AI copyright laws. He concluded his talk by reflecting on two issues faced in almost every nation, posing two very thought-provoking questions:
How do we protect human creators while allowing AI to develop responsibly?
Can AI innovation continue without undermining content creators and trusted information sources?