The Asia-Pacific AI Harm Remedy Tracker catalogues public information about harms that AI and related algorithmic systems have caused to individuals and communities across the broader Asia-Pacific region. Entries include litigation, administrative actions by government authorities, legislative and regulatory proposals addressing AI-related liability, and soft-policy approaches and initiatives by governments and civil society to seek redress and remedy for AI-related harms. We also track proactive harm avoidance, including incentives to prevent harms, such as financial assistance or other material support for organizations working to avoid AI-related harms.
The Tracker organizes and collects this information to provide insight into AI liability across the diverse jurisdictions of the Asia-Pacific region. We aim to build a community of individuals and organizations that spreads best practices in remedying AI harms, supporting the development of robust and accountable AI governance and a responsible AI industry in the region.
Australia, 2024. Privacy regulators found that a company violated privacy rights by using facial recognition technology to collect biometric data via CCTV cameras in multiple store locations.
Hong Kong, 2024. Following an inquiry from the Privacy Commissioner, LinkedIn ceased AI training on data from the territory.
Australia, 2018. Man sues major search engine over its auto-complete function, claiming defamation.
Japan, 2013. Court orders a search engine to modify its auto-complete results and awards JPY 300,000 in damages on claims of defamation.
China, 2024. Court issues a fine and injunction against a genAI provider for infringing copyright in the famous Japanese character Ultraman.
China, 2024. Court finds against a genAI provider for violating an individual's right of publicity after the service, trained on recordings of the individual's voice, replicated it.
India, 2024. Publishers sued OpenAI for copyright infringement, claiming their works were used in training the LLM.
Asia, 2024. The Asia-Pacific Foundation of Canada issued an insights brief, "The Promise and Perils of Using Technology to Address Asia’s Climate Crisis."
Singapore, 2024. The government established the Green Compute Funding Initiative with SGD 30m to support green AI development.
South Korea, 2024. The Korea Communications Commission investigated Telegram over the distribution of deepfake pornography, requesting that it appoint a youth safety manager.
South Korea, 2024. The President of South Korea announced a seven-month special police crackdown on deepfake pornography.
Hong Kong, 2024. A deepfake video scam defrauds a company of HKD 200m.
India, 2024. The Bombay High Court ruled on a case involving the government's Fact Check Unit, holding that the FCU infringed freedom of speech and that its notifications were vague and arbitrary.
Asia, 2024. Microsoft Threat Intelligence report identifies East Asian threat actors using generative AI tools to aid cyber attacks.
Singapore, 2024. The Cyber Security Agency issued guidelines on securing AI systems across the AI lifecycle.
Singapore, 2024. Parliament passed amendments to the Elections Act prohibiting the use of deepfake videos during elections, with penalties of up to SGD 1,000 or 12 months' imprisonment for individuals, or up to SGD 1m for social media platforms.
South Korea, 2024. Deepfake videos circulate before parliamentary elections in violation of updated election law, which carries penalties including fines of KRW 10m or up to 7 years in prison.
Philippines, 2024. The Commission on Elections issued rules on the use of generative AI in elections; failure to disclose the use of AI-generated material may result in takedowns.
Copyright © 2024 Digital Governance Asia - All Rights Reserved.