Tuesday, November 29, 2022

NITI Aayog’s Notion of Responsible AI

Given the rising applicability of artificial intelligence (AI), the government has decided to focus its efforts on sectors that can adopt and benefit from the technology. NITI Aayog is a government think tank positioning itself as a state-of-the-art center for resources and knowledge. It promotes research and policy development for the government while addressing contingent issues, and it has embarked on making AI responsible in the country.

In June 2018, NITI Aayog released a discussion paper named National Strategy for Artificial Intelligence (NSAI) as a part of its mandate entrusted in the Budget Speech of 2018-2019. The NSAI discussion paper highlighted the potential of artificial intelligence (AI) and its large-scale adoption in the country and made recommendations to ensure responsible utilization and management. The paper also included a roadmap for implementing AI in five public sectors and described “AI for All” as the guiding philosophy for upcoming AI design, development, and implementation in India.

Besides research, NITI Aayog has also collaborated with companies such as Amazon Web Services and Intel to establish innovation centers. It set up the Experience Studio at its Delhi headquarters to facilitate innovation among industry experts, government stakeholders, and startups.

More recently, NITI Aayog has published a series of papers on "Responsible AI," the practice of designing, developing, and deploying artificial intelligence with good intent. The series was promoted under the hashtag #AIForAll.

NITI Aayog published the first edition, "Principles for Responsible AI," a two-part approach paper defining the ethical design, development, and use of artificial intelligence in India, along with enforcement methods for putting these principles into practice. This edition presents case studies and considerations in the context of 'Narrow AI' solutions, categorized as 'systems considerations' and 'societal considerations.' The former arise from system design and deployment choices, while the latter stem from the ethical challenges raised by AI applications.

This edition also draws on a Capgemini report that found approximately 85% of organizations had ethical concerns about using AI. It discusses other relevant concerns as well, such as job losses due to automation, malicious uses of the technology, and targeted propaganda.


The second part of the Responsible AI series, "Operationalizing Principles for Responsible AI," identifies a series of actions that organizations should adopt when embracing AI responsibly. Written in collaboration with the World Economic Forum Centre for the Fourth Industrial Revolution, the paper divides the necessary actions between the government and the private sector. Particular focus is placed on the government's role in ensuring responsible AI adoption and overseeing the actions of the private sector.

NITI Aayog recently released the third edition of the series, "Responsible AI for All: Adopting the Framework – A use case approach on Facial Recognition Technology." To prepare this paper, NITI Aayog collaborated with the Vidhi Centre for Legal Policy and tested the principles and actions laid out in the previous two releases against their first use case, Facial Recognition Technology (FRT). The paper notes that FRT already has several common uses across the country.

However, the technology remains a debated topic both domestically and internationally because of the risks it poses to human rights such as privacy. Therefore, as part of its effort to make AI more responsible, NITI Aayog will work closely with the Ministry of Civil Aviation to launch the Digi Yatra Program. The program will incorporate facial recognition (FR) and facial verification (FV) technologies to enhance the travel experience. FV will also be used at various airports for passenger identity verification, ticket validation, and other checks as operational requirements demand.
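At a high level, facial verification of the kind Digi Yatra describes amounts to comparing a numeric embedding of a live face capture against an enrolled embedding. The sketch below is purely illustrative, not the Digi Yatra implementation: the function name, the toy 4-dimensional vectors, and the 0.6 threshold are all assumptions, and real systems obtain embeddings from a trained face-recognition model.

```python
import numpy as np

def verify_face(enrolled: np.ndarray, captured: np.ndarray,
                threshold: float = 0.6) -> bool:
    """Return True when two face embeddings are similar enough to match.

    Cosine similarity compares the direction of the two vectors; in a
    production system the embeddings come from a face-recognition model
    and the threshold is tuned to balance false accepts and rejects.
    """
    similarity = float(np.dot(enrolled, captured) /
                       (np.linalg.norm(enrolled) * np.linalg.norm(captured)))
    return similarity >= threshold

# Toy example with hand-made 4-dimensional "embeddings".
enrolled = np.array([0.9, 0.1, 0.3, 0.2])
same_person = np.array([0.85, 0.15, 0.25, 0.2])  # small variation -> match
stranger = np.array([0.1, 0.9, 0.1, 0.8])        # very different -> no match

print(verify_face(enrolled, same_person))  # True
print(verify_face(enrolled, stranger))     # False
```

The one-to-one comparison shown here (verification) is what ticket-holder identity checks need; one-to-many search across a gallery (recognition) raises the broader privacy concerns the paper discusses.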

Working in such a technical field and realizing its potential is a significant step forward for the government. The public sector is embracing AI not only to ensure that private-sector use of such technologies is ethical and responsible, but also to enhance its own services. Governments can apply artificial intelligence to many use cases, including emergency services, public interaction, and virtual assistants.


Disha Chopra
Disha Chopra is a content enthusiast! She is an Economics graduate pursuing her PG in the same field along with Data Sciences. Disha enjoys the ever-demanding world of content and the flexibility that comes with it. She can be found listening to music or simply asleep when not working!

