
Singapore Teases Transparent, Explainable, Fair, and Ethical AI with the Announcement of A.I. Verify

Will A.I. Verify toolkit from Singapore successfully prove its mettle in ensuring companies' adherence to Ethical AI practices?

The artificial intelligence (AI) adoption rate has grown exponentially in the past few years. While developed countries like the USA, China, and Japan are at the forefront of adopting the technology, Singapore is not far behind either. Most recently, Minister for Communications and Information Josephine Teo announced the pilot of the world’s first AI Governance Testing Framework and Toolkit at the World Economic Forum Annual Meeting in Davos in May this year. A.I. Verify gives companies a way to measure and demonstrate how safe and reliable their AI products and services are.

The Infocomm Media Development Authority (IMDA) and the Personal Data Protection Commission (PDPC), which oversees the country’s Personal Data Protection Act, created the new toolkit to bolster the nation’s commitment to the ethical use of AI. The development builds on the guidelines of the Model AI Governance Framework from 2020 and the core themes of the National AI Strategy from 2019. Through self-conducted technical tests and process inspections, A.I. Verify aims to improve transparency around the use of AI between organizations and their stakeholders. As the first toolkit of its type, A.I. Verify is positioned to help organizations navigate the complex ethical issues that arise when AI technologies and solutions are deployed.

IMDA also noted that A.I. Verify aligns with globally established AI ethics standards and norms, including those from Europe and the OECD, and covers critical aspects such as repeatability, robustness, fairness, and societal and environmental well-being. The framework also incorporates testing and certification regimes for components such as cybersecurity and data governance.

The new toolkit is now available as a minimum viable product (MVP), which includes ‘basic’ capabilities for early users to test and provide feedback for product development. It performs technical testing based on three principles: “fairness, explainability, and robustness,” combining widely used open-source libraries into a self-assessment toolbox. For explainability, there’s SHAP (SHapley Additive exPlanations); for adversarial robustness, there’s the Adversarial Robustness Toolbox; and for fairness testing, there’s AIF360 and Fairlearn.
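To give a sense of what such fairness testing measures, here is a minimal sketch, not taken from A.I. Verify itself, that computes demographic parity difference by hand: the gap in positive-prediction rates between demographic groups, a standard metric reported by toolkits like Fairlearn and AIF360. The data and group labels are made up for illustration.

```python
# Demographic parity difference: the gap in selection rates (share of
# positive predictions) across demographic groups. Toolkits such as
# Fairlearn and AIF360 report this metric; here it is computed by hand.

def demographic_parity_difference(y_pred, groups):
    """Return the max difference in selection rate across groups.
    0.0 means all groups receive positive predictions at equal rates."""
    counts = {}  # group -> (total, positives)
    for pred, grp in zip(y_pred, groups):
        n, pos = counts.get(grp, (0, 0))
        counts[grp] = (n + 1, pos + (1 if pred == 1 else 0))
    rates = [pos / n for n, pos in counts.values()]
    return max(rates) - min(rates)

# Toy predictions for two groups, "A" and "B"
y_pred = [1, 1, 0, 1, 0, 0, 0, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(y_pred, groups))  # 0.75 - 0.25 → 0.5
```

A large gap like 0.5 would flag the model for closer review; a value near zero suggests the groups are treated similarly on this particular metric.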

The MVP Testing Framework tackles five pillars of concern for AI systems, which together encompass 11 widely recognized AI ethics principles: transparency; explainability; repeatability/reproducibility; safety; security; robustness; fairness; data governance; accountability; human agency and oversight; and inclusive growth with societal and environmental well-being.

The five pillars are as follows:

  • transparency in the usage of AI and its systems; 
  • knowing how an AI model makes a decision; 
  • guaranteeing AI system safety and resilience; 
  • ensuring fairness and no inadvertent discrimination by AI; 
  • and providing adequate management and monitoring of AI systems.

Organizations can participate in the MVP piloting program, gaining early access to the MVP and using it to self-test their AI systems and models. This also allows for developing international standards and creating an internationally applicable MVP to reflect industry needs.

Finally, A.I. Verify is intended to improve deployment transparency, support organizations in AI-related ventures, evaluate products or services before they are offered to the public, and help prospective AI investors understand AI’s advantages, risks, and limits.

The pilot toolkit also creates reports for developers, managers, and business partners, covering essential areas that influence AI performance and putting the AI model to the test. It’s packaged as a Docker container, so it can be installed quickly in the user’s environment. The toolkit currently supports binary classification and regression models from popular frameworks such as scikit-learn, TensorFlow, and XGBoost, among others.

According to IMDA, the test framework and tools will allow AI system developers to undertake self-testing, not only to ensure that their products meet market criteria but also to provide a common platform for presenting test results. Overall, A.I. Verify aims to validate the claims AI system developers make about their use of AI and the performance of their AI products, rather than to define ethical norms.

Read More: UNESCO unveils First Global Agreement On Ethics Of Artificial Intelligence: What Next?

However, there is a flaw in this innovation. According to IMDA, the toolkit cannot guarantee that an AI system under examination is free of biases or security risks. Furthermore, the MVP does not define ethical criteria; it can only verify statements made by AI system developers or owners about their systems’ methodology, intended use, and verified performance.

Because of such constraints, it is difficult to say how A.I. Verify will aid stakeholders and industry participants in the long term. For now, it is unclear how developers will ensure that the information supplied to the toolkit prior to self-assessment is accurate rather than speculative. This is a significant technological challenge that A.I. Verify will have to overcome.

Singapore intends to cooperate with AI system owners and developers around the world to collect and produce industry benchmarks for the creation of worldwide AI governance standards. The country participates in ISO/IEC JTC 1/SC 42 on AI to support the interoperability of AI governance frameworks and the development of international AI standards, and has worked with the US Department of Commerce and other like-minded governments and partners to guarantee interoperability between their frameworks.

According to IMDA, several organizations have already tested the new toolkit and provided feedback, including Google, Meta, Microsoft, Singapore Airlines, and Standard Chartered Bank. More functionality will be introduced gradually as industry input and comments come in.

Singapore hopes to strengthen its position as a leading digital economy and AI-empowered nation by introducing the toolkit as it continues investing in and developing AI capabilities. As Singapore aspires to be a leader in creating and implementing scalable, impactful AI solutions by 2030, it is evident that the country places substantial value on encouraging ethical AI practices.


Preetipadma K
Preeti is an Artificial Intelligence aficionado and a geek at heart. When she is not busy reading about the latest tech stories, she will be binge-watching Netflix or F1 races!
