Meity seeks proposals for development of tools to tackle deepfakes and ensure responsible AI
The Ministry of Electronics and Information Technology has invited proposals for watermarking and labelling tools to authenticate AI-generated content, ensuring it is traceable, secure, and free of harmful materials.
The Ministry of Electronics and Information Technology (Meity) has invited proposals from individuals and organisations for the development of technology tools to create a trusted AI ecosystem.
According to information published on the Meity website, the 'Safe and Trusted AI' pillar of the IndiaAI Mission envisages the development of indigenous tools, frameworks, and self-assessment checklists that help innovators put adequate guardrails in place and advance the responsible adoption of AI.
"To spearhead this movement, IndiaAI is calling for expressions of interest (EOI) from individuals and organisations that want to lead AI development projects to foster accountability, mitigate AI harms and promote fairness in AI practices," the note for proposal said.
Meity has invited proposals for watermarking and labelling tools to authenticate AI-generated content, ensuring it is traceable, secure, and free of harmful materials.
Meity's proposal calls for establishing AI frameworks that align with global standards, ensuring AI respects human values and promotes fairness. It also seeks deepfake detection tools that enable real-time identification and mitigation of deepfakes, preventing misinformation and harm and supporting a secure and trustworthy digital ecosystem.
Meity also seeks risk management tools and frameworks to enhance the safe deployment of AI in public services, as well as stress-testing tools to evaluate how AI models perform under extreme scenarios, detect vulnerabilities, and build trust in AI for critical applications.
Edited by Swetha Kannan