
DNPA Conclave 2025: Government perspectives on AI laws for the news industry

A conversation with Shri S. Krishnan, Secretary, Ministry of Electronics & IT, and Mr. Michael McNamara, Member of the European Parliament, Co-chair of the AI Working Group

By DNPA Team

As India pushes the boundaries of AI and technology, it is also working to balance innovation under the ‘Make in India’ vision with ensuring that AI serves its people responsibly, mitigating risks while maximizing benefits. To shed light on regulatory frameworks that can steer AI-led change toward a constructive and ethical role, we organized a fireside chat with Shri S. Krishnan, Secretary, Ministry of Electronics and IT, and Mr. Michael McNamara, Member of the European Parliament and Co-chair of the European Parliament’s AI Working Group, moderated by Mr. Anil Malhotra, Public and Regulatory Affairs Head at Zee Media Corporation.

Mr. Malhotra opened by asking Mr. McNamara about the timelines for implementing the AI Act and its frameworks. He raised the important point that, because the internet is widely regarded as a free space, government regulation of any kind, especially regulation that may be seen as limiting the freedom of the press, often meets resistance from some quarters, and he invited Mr. McNamara to share his thoughts and experience on handling such situations.

Mr. McNamara outlined that these regulations are being implemented in phases: the section on prohibited practices is already in force, the European Commission published its enforcement guidelines on February 4, and full enforcement is expected in August. On the topic of resistance, he noted that the EU is developing a Code of Practice; while not mandatory, it carries a presumption of compliance with the AI Act, which has already made it a contentious topic.

“Freedom is a cornerstone of all democracies. It's something that is dearly cherished across the European Union, and I think there is no desire to interfere with freedom of expression.

On the other hand, I think it is important that, because people have developed a trust in other human beings' decisions, when it comes to news and current affairs in particular, any AI-generated content is labeled as such. The Act requires this—it does not stop AI-generated content in news and current affairs, but it mandates that any AI-generated material, including deepfakes, must be labeled.

This ensures that people know the content is AI-generated rather than created by a human in whom they might place their trust. Likewise, if someone is interacting with AI rather than another human being, they must be made aware of this—though this requirement applies outside of news and current affairs as well”, he added.

On self-regulation, Mr. McNamara highlighted the debate over third-party inspections and independent verification in the Code of Practice. Developers fear industrial espionage and proprietary risks, while public trust in AI demands transparency and oversight. He addressed the criticism that policymakers lack deep technical understanding, noting that similar concerns arose in financial regulation. History has shown that complete self-regulation carries risks—while a self-regulatory system may work, he stressed the need for oversight to ensure compliance and accountability.

Mr. Malhotra then steered the conversation toward implementation in the Indian scenario, where DNPA members are licensed for their channels and portals, but unregulated individuals generating deepfakes and misinformation are under no such obligation. He asked Shri S. Krishnan how the government plans to control these issues.

Shri S. Krishnan shared that deepfakes are a key concern among AI-led developments, but the general consensus in India is that AI’s benefits outweigh its possible harms, which is why the government’s regulatory approach must strike a balance that promotes innovation while mitigating risks.

He added that India is closely studying AI regulations in other jurisdictions, including the EU, to determine the right approach and timing for implementation. A committee under the Principal Scientific Advisor has been formed to examine AI governance and potential regulations. However, he emphasized that India already has robust laws that address many of the concerns raised.

“So far, the stance has been that the existing legal framework in India is, to some extent, capable of handling issues like deepfakes and misrepresentation. Both under the IT Act and the Bharatiya Nyaya Sanhita (BNS), which replaced the IPC, deepfakes are considered a form of misrepresentation. Misrepresentation is a crime, and action can be taken against it under existing law.

In exceptional cases—where it attracts Section 69A, which imposes constitutionally mandated restrictions on freedom of expression under Article 19(2) of the Constitution—let me remind you that Article 19(2) outlines nine specific grounds for restrictions. However, Section 69A covers only six of those, excluding categories like obscenity and defamation, which are included in Article 19(2). Instead, Section 69A primarily addresses issues related to state security, public order, and relations with friendly states.

For cases that fall under these categories, action can be taken immediately. Additionally, under the overall ambit of Section 79 of the IT Act, further legal action is possible. So, it is not that we are without protections—Indian law already provides safeguards, and these protections may, in fact, be more robust given India's social circumstances than those in certain other countries, where obtaining a specific court order is required. Here, the executive itself is empowered to act. To that extent, I believe there is already a legal defense available to address such issues, and it does not need to be a high-priority concern at this time.

At the same time, let me clarify that there are two key issues here. One is labeling, which was already addressed in the March 15, 2024, advisory, where we mandated that content must be labeled. We are empowered under the IT Act to issue such guidelines, and that guideline has already been implemented.”

He noted that global discourse on AI has also evolved, from AI safety at the Bletchley Summit (Nov 2023) to innovation at the Paris AI Action Summit (Feb 2025), which India co-chaired. He concluded that the global consensus is now on leveraging AI for innovation while addressing risks in a balanced way.

He emphasized that while stronger provisions may be introduced in the future, any major legislation from MeitY would undergo extensive stakeholder consultation. The focus remains on innovation over restriction, with an emphasis on clarifying liability laws rather than imposing additional regulations, as existing laws largely safeguard against AI-related risks.


Mr. Malhotra highlighted the challenge of regulating India’s digital landscape, where fewer than 1,000 licensed channels contrast with over 120 million on YouTube. Monitoring such a vast ecosystem is complex, especially as violators often reappear under pseudonyms, making enforcement difficult. While Mr. McNamara acknowledged the scale of the issue, he shared insights into ongoing efforts in the EU to address similar challenges.

He said he is hopeful that a workable agreement can be reached between copyright holders and AI developers. However, developers have argued that it is nearly impossible to disclose every piece of copyrighted material used to train an AI model, since the training data may cover virtually everything on the internet, and they have voiced concerns that sharing such data could expose trade secrets. Copyright holders, on the other hand, are unable to discern whether their content has been used, and so cannot claim compensation or seek protection under the Copyright Act.

He noted that if voluntary agreements fail, courts may need to intervene, potentially exposing gaps in existing laws. In Europe, AI developers are pushing back against the Code of Conduct, threatening to opt out if it remains unchanged. This could lead them to deploy AI systems in the U.S. instead, potentially widening the AI development gap between the U.S. and the EU.

Mr. Malhotra then raised an important question: whether the issue is deepfakes or copyright, the content is transmitted through established channels belonging to key players like Google and Meta. Could these platforms, then, play an important role in helping the government implement these regulations?

Mr. McNamara returned to the point of labeling data and content, citing the example of Spotify, which may host original tracks, covers, and even AI-generated music. He strongly believes that if the AI Act is implemented properly, the onus will be on these tech platforms to correctly label their content.

Mr. Malhotra, echoing the concerns of the room and DNPA members, urged Shri S. Krishnan to ensure that upcoming regulations do not restrict press freedom or creative writing. He emphasized the need for effective implementation, cautioning that ineffective enforcement could lead to chaos. Striking the right balance between oversight and press freedom, he stressed, is crucial.

Shri S. Krishnan clarified that Indian media is not government-licensed, emphasizing its freedom and fairness. While creators can publish content freely, editorial policies play a crucial role in ensuring ethical, responsible, and verified reporting. He highlighted trust as the key factor, as audiences gravitate toward reliable media, ultimately allowing high-quality content to prevail. He also noted that diverse perspectives in democratic nations like India, the UK, the US, and parts of the EU make media more engaging, whereas heavily regulated jurisdictions often produce less compelling content.

He concluded by reiterating the case for fair compensation to the media, and noted that restrictions on press freedom in India are imposed only on very limited grounds and always strictly in accordance with the Constitution. The panel discussion ended on these reassuring words from Shri S. Krishnan, which many in the room appreciated.

The speakers then engaged with the audience in an open discussion on the matters raised.