Singapore recently published draft guidelines to safeguard personal data when it is used to develop artificial intelligence models and systems. The document describes how the nation's Personal Data Protection Act (PDPA) will apply in these circumstances.
The guidelines aim to ensure accountability and transparency in the use of personal data for AI training. They offer organizations best practices for being transparent about how AI systems use personal data to make judgements, predictions, and recommendations.
The guidelines cover consent, accountability, and notification obligations when personal data is gathered for AI systems. They also identify two exceptions: business development and research.
The guidelines advise conducting a data protection impact assessment for AI systems that use personal data. This assessment evaluates the effectiveness of risk-mitigation and data-remediation measures.
When developing, training, and maintaining AI systems, organizations should implement appropriate technical measures and legal safeguards to keep data secure. They should also practise data minimization, using only the personal data attributes needed to train and improve the AI system.
The Personal Data Protection Commission (PDPC) is seeking public feedback on the draft guidelines until August 31.