Who Decides How Personal Data Is Used in Training AI Systems? PDPC Issues Guidelines

by ccadm



In a move to strengthen transparency and accountability in artificial intelligence (AI), the Personal Data Protection Commission (PDPC) has issued new guidelines setting out how consumers should be informed about the use of their personal data in training AI systems. The guidelines, titled Advisory Guidelines on Use of Personal Data in AI Recommendation and Decision Systems, aim to address concerns about data privacy and the ethical implications of AI technology.

Training AI systems – Guidelines for transparent data use

The guidelines, published on March 1, emphasize the importance of telling users why their personal data is being used and how it contributes to the functionality of AI systems. Companies are required to disclose to consumers how their data is relevant to the services provided and to explain the indicators that influence AI-driven decisions. For instance, users of a streaming service should be informed that their viewing history is used to improve movie recommendations, tailored to their preferences based on genres or frequently watched films.
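As a minimal sketch of what surfacing such indicators could look like in practice, the hypothetical code below attaches human-readable reasons to each recommendation so the interface can disclose which viewing-history signal drove the pick. All names here (Recommendation, recommend, viewing_history) are illustrative assumptions, not anything prescribed by the PDPC guidelines.

```python
from collections import Counter
from dataclasses import dataclass, field


@dataclass
class Recommendation:
    """A recommended title plus the indicators that drove the pick."""
    title: str
    reasons: list[str] = field(default_factory=list)


def recommend(viewing_history: list[dict]) -> list[Recommendation]:
    """Pick the user's dominant genre and record why, so the UI can
    surface the indicator alongside the recommendation itself."""
    genre_counts = Counter(item["genre"] for item in viewing_history)
    top_genre, count = genre_counts.most_common(1)[0]
    return [
        Recommendation(
            title=f"A highly rated {top_genre} film",
            reasons=[f"you watched {count} {top_genre} titles recently"],
        )
    ]


history = [
    {"title": "Film A", "genre": "sci-fi"},
    {"title": "Film B", "genre": "sci-fi"},
    {"title": "Film C", "genre": "drama"},
]
for rec in recommend(history):
    print(f"{rec.title} (because {'; '.join(rec.reasons)})")
```

The design point is that the explanation is produced at the same moment as the recommendation, rather than reconstructed afterwards, which makes the disclosed indicator accurate by construction.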

The guidelines also clarify when personal data may be used without seeking additional consent, once it has been collected in accordance with the Personal Data Protection Act. Companies may use such data for purposes including research and business improvement, such as refining AI models to understand customer preferences or optimizing human resources systems for candidate recommendations. Importantly, the guidelines stress the need for data anonymization and minimization to mitigate cybersecurity risks and safeguard user privacy.
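One common way to apply minimization and de-identification before training is sketched below: drop every field the model does not need, and replace the direct identifier with a salted hash. This is an illustrative assumption, not the guidelines' prescribed method, and note that salted hashing is pseudonymization rather than full anonymization, since anyone holding the salt can re-identify users.

```python
import hashlib


def minimize_and_pseudonymize(record: dict, needed_fields: set[str], salt: str) -> dict:
    """Keep only the fields the AI model actually needs, and swap the
    direct identifier for a salted hash (pseudonymization, not full
    anonymization: the mapping is recoverable by anyone holding the salt).
    """
    minimized = {k: v for k, v in record.items() if k in needed_fields}
    digest = hashlib.sha256((salt + record["user_id"]).encode()).hexdigest()
    minimized["user_ref"] = digest[:16]  # stable, non-identifying join key
    return minimized


raw = {"user_id": "u-1042", "email": "a@example.com",
       "watch_minutes": 312, "favorite_genre": "drama"}
print(minimize_and_pseudonymize(raw, {"watch_minutes", "favorite_genre"}, salt="rotate-me"))
# -> {'watch_minutes': 312, 'favorite_genre': 'drama', 'user_ref': '...'}
```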

The guidelines also underscore the importance of ongoing monitoring and review to ensure compliance with data protection principles and evolving best practices. Companies are encouraged to regularly assess the effectiveness of their data handling procedures, particularly around AI systems, and to make adjustments where needed to uphold user privacy and trust.
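A simple automated check can support this kind of periodic review. The sketch below, a hypothetical example rather than anything the guidelines mandate, audits a training batch against an allow-list of approved fields so that scope creep is flagged before a model is retrained.

```python
APPROVED_FIELDS = {"watch_minutes", "favorite_genre", "user_ref"}  # hypothetical allow-list


def audit_training_batch(batch: list[dict]) -> list[str]:
    """Flag any field that has crept into the training data without
    approval, so a periodic review catches it before retraining."""
    findings = []
    for i, record in enumerate(batch):
        unapproved = set(record) - APPROVED_FIELDS
        if unapproved:
            findings.append(f"record {i}: unapproved fields {sorted(unapproved)}")
    return findings


batch = [
    {"watch_minutes": 50, "favorite_genre": "drama", "user_ref": "ab12cd34"},
    {"watch_minutes": 12, "favorite_genre": "comedy", "email": "leak@example.com"},
]
for finding in audit_training_batch(batch):
    print("ALERT:", finding)
```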

Addressing industry concerns and suggestions

The issuance of these guidelines follows industry apprehensions voiced during the Singapore Conference on AI in December 2023, as well as feedback from a public consultation led by the PDPC that concluded in August 2023. Stakeholders, including tech, legal, and financial entities, expressed concerns regarding data privacy in AI. Notably, cybersecurity firm Kaspersky highlighted the general lack of consumer awareness regarding data collection for AI training purposes. It recommended seeking explicit consent during the development and testing stages of AI models, along with giving users the option to opt out of their data being used for AI training.
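If an organisation chose to act on that suggestion, one minimal way to honor consent and opt-out signals is sketched below. The function name, the consent map, and the default-to-excluded behavior are all assumptions made for illustration; the conservative default matches the spirit of seeking explicit consent rather than anything the guidelines require.

```python
def build_training_set(records: list[dict], consent: dict[str, bool]) -> list[dict]:
    """Keep only records from users with an affirmative consent flag.

    Users with no recorded preference are excluded by default: the
    conservative reading of a call for explicit consent.
    """
    return [r for r in records if consent.get(r["user_id"], False)]


consent_flags = {"u-1": True, "u-2": False}  # u-3 has no preference on file
records = [{"user_id": "u-1"}, {"user_id": "u-2"}, {"user_id": "u-3"}]
print(build_training_set(records, consent_flags))  # only u-1's record survives
```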

Industry players have welcomed the guidelines as a step towards greater trust and transparency in AI systems. Companies are now better equipped to navigate the complexities of data usage in AI ethically and responsibly, fostering a culture of accountability and consumer empowerment.

As AI continues to permeate various sectors, ensuring transparency and accountability in the utilization of personal data remains paramount. With the issuance of these guidelines, the PDPC endeavors to strike a balance between fostering AI innovation and safeguarding user privacy. However, challenges persist in effectively educating consumers about the intricacies of AI data usage. How can companies enhance user understanding and consent regarding the utilization of personal data in training AI systems while maintaining user trust and confidence?


