| APrIGF 2025 Session Proposal Submission Form | |
|---|---|
| Part 1 - Lead Organizer | |
| Contact Person | Ms. Shradhanjali Sarma |
| Organization / Affiliation (Please state "Individual" if appropriate) * | CCAOI |
| Designation | Legal and Policy Consultant |
| Gender | Female |
| Economy of Residence | India |
| Stakeholder Group | Civil Society |
| Part 2 - Session Proposal | |
| Session Title | Regulating AI Beyond the Hype: Evidence, Equity, and Policy Priorities for the Asia-Pacific |
| Thematic Track of Your Session | |
| Description of Session Formats | Panel Discussion (60 minutes) |
| Where do you plan to organize your session? | Onsite at the venue (with online moderator for questions and comments from remote participants) |
| Specific Issues for Discussion | This session explores pressing regulatory, ethical, and governance challenges surrounding the rapid development and deployment of AI technologies in the Asia-Pacific region. It will address concerns ranging from disinformation and surveillance to regulatory gaps in tackling algorithmic bias and managing open-source AI. Key questions include whether we are overestimating or underestimating the risks of current-generation AI; how regulation can keep pace with technological innovation without stifling progress; what unique regulatory risks arise from open-source AI in multilingual, low-supervision contexts; and how countries in the region can balance AI development with civil liberties and social equity. The session will also examine the feasibility of establishing a regional AI governance framework, akin to the EU AI Act but tailored for the Global South, and the role that industry, academia, and government can play in co-developing safety standards, public datasets, and governance infrastructure. It will further explore the real capabilities and limitations of current-generation AI systems, moving beyond inflated expectations to assess where AI can meaningfully deliver impact and where it remains unreliable or inappropriate. Many AI tools, particularly in sensitive areas like criminal justice, healthcare, and governance, are being deployed without sufficient scientific evidence of their effectiveness. This poses serious ethical and social risks, especially in diverse, low-resource, or digitally excluded contexts. Grounding the discussion in both scientific realities and real-world constraints, the session will seek to separate practical applications from speculative hype and ensure that policy responses are proportionate, feasible, and rooted in public interest. |
| Describe the Relevance of Your Session to APrIGF | This session aligns with the Security & Trust and Innovation & Emerging Technologies tracks of APrIGF 2025. It contributes to the overarching theme of fostering responsible and resilient digital ecosystems in the APAC region by examining the region’s unique vulnerabilities and policy gaps around AI deployment; highlighting regulatory, infrastructural, and capacity-building needs in low-resource economies; exploring principles of co-regulation and adaptive governance in emerging technologies; and fostering South-South collaboration while centering global majority perspectives in AI policy discourse. The session aims to generate concrete policy insights and inform a potential roadmap toward an APAC-specific AI governance framework. |
| Methodology / Agenda (Please add rows by clicking "+" on the right) | |
| Moderators & Speakers Info (Please complete where possible) - (Required) | |
| Please explain the rationale for choosing each of the above contributors to the session. | 1. Aadhesh Khadka: Represents government leadership from a South Asian country, bringing valuable insight into policy feasibility, regulatory capacity, and digital inclusion. His perspective ensures the session stays grounded in practical governance challenges across developing economies. 2. Aishwarya Salvi: Brings a development-sector lens to AI governance, with expertise in ethical digital transformation, institutional capacity building, and inclusive tech ecosystems. 3. Sunil Abraham: Combines experience from civil society and big tech, offering unique insights on open-source AI risks, algorithmic accountability, and content governance. He helps frame balanced, evidence-driven approaches to innovation, self-regulation, and platform responsibility. 4. Shita Laksmi: Contributes a rights-based, gender-inclusive viewpoint rooted in Southeast Asian civil society and digital governance practice. |
| Please declare if you have any potential conflict of interest with the Program Committee 2025. | No |
| Are you or other session contributors planning to apply for the APrIGF Fellowship Program 2025? | Yes |
| Upon evaluation by the Program Committee, your session proposal may only be selected under the condition that you will accept the suggestion of merging with another proposal with similar topics. Please state your preference below: | Yes, I am willing to work with another session proposer on a suggested merger. |
| Brief Summary of Your Session | The session titled “Regulating AI Beyond the Hype: Evidence, Equity, and Policy Priorities for the Asia-Pacific” explored the complex relationship between innovation and regulation in the context of artificial intelligence governance across the Asia-Pacific region. The discussion sought to move beyond the inflated narratives surrounding AI, in terms of both its capabilities and its risks, and emphasized the need for evidence-based, equitable, and inclusive policy frameworks. Panelists representing diverse stakeholder communities and backgrounds (policy, civil society, and technology) highlighted that while AI offers immense potential for innovation and social benefit, unregulated or poorly regulated deployment may deepen inequality, exclusion, and harm to marginalized communities. Discussions centered on balancing innovation and regulation, addressing institutional capacity gaps in South and Southeast Asia, mitigating algorithmic bias, and enhancing multilingual inclusion. Examples were drawn from national experiences such as Nepal’s nascent AI policy, Indonesia’s AI roadmap, and India’s collaborative AI procurement initiatives. The conversation also examined the role of regulatory sandboxes, open data ecosystems, and human rights frameworks in ensuring responsible innovation. Finally, the dialogue underscored the importance of digital literacy, public procurement standards, and participatory policymaking in shaping AI governance that serves the public interest. |
| Substantive Summary of the Key Issues Raised and the Discussion | 1. Balancing Regulation and Innovation: All panelists recognized a delicate equilibrium between encouraging AI innovation and ensuring accountability. Regulations should neither stifle creativity nor allow unchecked harm. Aishwarya and Aadhesh emphasized that well-designed, multi-stakeholder regulations build trust, whereas Sunil pointed out that overly restrictive privacy or copyright laws could impede access to datasets essential for innovation. 2. Institutional and Capacity Challenges: Aadhesh highlighted the limited institutional capacity in South Asian countries, noting that governments often lack both technical expertise and regulatory infrastructure. Building dual capacity within new AI sectors and existing regulatory bodies is critical. Tools such as regulatory sandboxes were suggested as early mechanisms for iterative governance and policy learning. 3. Algorithmic Accountability and Disinformation: Sunil argued that concerns about disinformation are often overstated and that misinformation predates AI. He emphasized the need for user literacy and transparency rather than blanket regulation. Meta’s regional initiatives to enhance digital literacy in countries such as Vietnam, Indonesia, and Bangladesh were cited as proactive models. 4. Inclusion, Equity, and Multilingualism: Aishwarya underlined that most AI systems disproportionately favor dominant languages, marginalizing smaller linguistic groups. She advocated for open-source voice technologies and collaborative data projects (e.g., Fair Forward with IISc) to develop datasets for low-resource languages. 5. Participatory Policymaking and Human Rights Frameworks: Shita Laksmi called for embedding human rights principles in AI policy to simultaneously safeguard individuals and encourage innovation. She emphasized capacity-building among non-technical policymakers to ensure inclusive and informed AI governance. 6. Global Cooperation and Local Realities: The panel agreed that Asia-Pacific AI regulation must be regionally contextualized, with local datasets, open collaboration, and evidence-based governance reflecting social and cultural diversity. |
| Conclusions and Suggestions of Way Forward | 1. Evidence over Hype: Policymaking in AI should be guided by empirical understanding of risks and opportunities, not inflated narratives of fear or over-optimism. 2. Balanced Regulation: A calibrated approach, neither over-regulation nor laissez-faire, can enable innovation while ensuring accountability. 3. Institutional Strengthening: Building regulatory, technical, and human capacity is essential for countries with emerging AI ecosystems. 4. Human-Centric Governance: A human rights-based approach can provide an ethical anchor for AI policy, particularly to protect vulnerable populations. 5. Digital Literacy & Transparency: Public understanding of AI systems and transparent algorithmic processes are key to mitigating harm and misinformation. 6. Inclusive and Multilingual Design: AI systems must reflect linguistic and cultural diversity, promoting accessibility for marginalized communities. 7. Collaborative Frameworks: Policymaking should involve governments, civil society, academia, and industry, recognizing the value of public procurement and open-source tools. 8. Regional Synergy: Asia-Pacific nations must share data, best practices, and research to build interoperable and equitable AI governance systems. |
| Number of Attendees (Please fill in numbers) | |
| Gender Balance in Moderators/Speakers (Please fill in numbers) | |
| How were gender perspectives, equality, inclusion or empowerment discussed? Please provide details and context. | We discussed how AI, despite its transformative potential, can deepen existing inequalities in disadvantaged societies across the Asia-Pacific. The region’s multilingual and culturally diverse nature means that most AI systems, trained primarily on English and a few dominant regional languages, often fail to serve speakers of underrepresented or indigenous languages. This creates barriers to access in essential areas such as education, healthcare, and digital governance. Moreover, algorithmic systems trained on biased data tend to replicate existing social hierarchies, favoring urban, affluent, and digitally literate populations while misrepresenting or excluding marginalized communities. Facial recognition errors, flawed translation outputs, and language-based exclusion from AI-driven services all reflect how structural and linguistic biases are embedded in current AI tools. We also noted that this exclusion is not merely technological but cultural. |
| Consent | I agree that my data can be submitted to forms.for.asia and processed by APrIGF organizers for the program selection of APrIGF 2025. |