| APrIGF 2025 Session Proposal Submission Form |
|---|
| Part 1 - Lead Organizer |
| Contact Person |
| Ms. Sriya Sridhar |
| Organization / Affiliation |
| The Pranava Institute, India |
| Designation |
| Research Fellow |
| Gender |
| Female |
| Economy of Residence |
| India |
| Stakeholder Group |
| Civil Society |
| Part 2 - Session Proposal |
| Session Title |
| Social AI, Youth and Digital Vulnerability: A call for multi-stakeholder action in the APAC region |
| Thematic Track of Your Session |
|  |
| Description of Session Formats |
| Lightning Talk (20 minutes) |
| Where do you plan to organize your session? |
| Onsite at the venue (with online moderator for questions and comments from remote participants) |
| Specific Issues for Discussion |
| The APAC region has the largest percentage of internet users aged 15-24 and makes up a significant share of the global youth internet user base. At the same time, the deployment and use of interactive, human-mimicking AI systems (‘Social AI’) is rising sharply. This session will present research evidence outlining the challenges and harms arising from the use of Social AI among young people aged roughly 15-25 (and more generally) and make a case for urgent multi-stakeholder action to protect the digital wellbeing of young users in the APAC region. Evidence of the individual and societal harms arising from Social AI is growing and is being documented both globally and within the APAC region. These harms include adverse mental health consequences, addiction to and emotional dependency on these systems, and harmful outputs that are aggressive or perpetuate stigmas. Social AI systems are also designed to be addictive, deceptively anthropomorphic, and sycophantic, raising concerns about manipulation and interference with user rights. Young users, who form 40% of APAC’s internet users, increasingly have their online experience mediated and shaped by Social AI; they are therefore especially vulnerable to these harms, with their vulnerability further shaped by their level of education, digital literacy, and stage of mental development. This talk will present evidence from my ongoing research at The Pranava Institute, which examines policy pathways for the ethical design and development of Social AI and how regulation can be shaped to prevent harms from these systems and protect users from manipulative practices. Finally, the talk will spotlight open questions and suggest key action pathways for multi-stakeholder coordination among regulators, designers, mental health professionals, educators, and others, to ensure that such systems are deployed with ethical design, transparency, and harm prevention and mitigation in mind. |
| Describe the Relevance of Your Session to APrIGF |
| This session relates to the sub-theme of 'Innovation and Emerging Technologies', specifically the ethical governance and design of newer forms of AI technology. It contributes to this theme by sparking discussion on a still relatively under-researched area of AI development: systems deployed for emotional, human-mimicking use cases (Social AI), which have already caused significant harms across the Asia Pacific region and require urgent multi-stakeholder attention and action. The session also relates to the overarching theme of multi-stakeholder digital governance in the APAC region: addressing the harms caused by the use of Social AI is a complex issue, requiring participation and collaboration across stakeholders such as academia, civil society, educators, designers, and governmental and non-governmental organisations. As the APAC region contends with balancing innovation against protecting young and more vulnerable users of technology, it is crucial to acknowledge the potentially manipulative impacts of these technologies and the ways young people use Social AI to mediate their digital, and overall emotional, experiences at a formative stage. The use of Social AI is arguably the next frontier of digital governance, akin to addressing the risks of social media. This session will leave participants with knowledge of the current landscape of emotional AI systems and human-AI interaction, the documented harms, and possible pathways for regulation. It also aims to encourage further multi-stakeholder collaboration on this issue to build safe and sustainable interactive digital environments that protect and promote digital wellbeing. |
| Methodology / Agenda |
|  |
| Moderators & Speakers Info |
|  |
| Please explain the rationale for choosing each of the above contributors to the session. |
| Sriya Sridhar is a Research Fellow at The Pranava Institute, a New Delhi-based think tank focused on technology, policy, and society. She is also a Fellow at the School of Law, Shiv Nadar University, and is currently pursuing an LLM in Innovation, Technology and the Law at the University of Edinburgh. Her work spans academia, legal practice, and policy, including experience advising both government and private sector entities in India on the regulation of emerging technologies. Her research focuses on AI regulation, data protection, and privacy law, with a particular interest in human-AI interaction in emotionally responsive systems (Social AI) used in entertainment, companionship, and therapeutic contexts. Sriya has worked extensively on technology and regulatory design in Indian and global contexts, and works with young students on digital wellbeing initiatives. |
| Please declare if you have any potential conflict of interest with the Program Committee 2025. |
| No |
| Are you or other session contributors planning to apply for the APrIGF Fellowship Program 2025? |
| No |
| Upon evaluation by the Program Committee, your session proposal may only be selected under the condition that you will accept the suggestion of merging with another proposal with similar topics. Please state your preference below: |
| Yes, I am willing to work with another session proposer on a suggested merger. |
| Brief Summary of Your Session |
| The session was a presentation of the 'Feeling Automated' project at The Pranava Institute, which examines policy pathways for the ethical design and development of Social AI and how regulation can be shaped to prevent harms from these systems and protect users from manipulative practices. The session organiser examined the current state of Social AI systems, their associated risks and harms, and findings from independent testing and stakeholder consultations. The talk then turned to why this is a pressing issue for the APAC region, given its large young population accessing the Internet and using AI to mediate their digital experience, and identified specific areas where policymakers, educators, mental health professionals, and families must act to prevent the risks arising from the use of Social AI tools. |
| Substantive Summary of the Key Issues Raised and the Discussion |
| 1. The need for stakeholders in the APAC region to be cognisant of the risks of Social AI, including addiction, dependency, and other forms of psychological harm. 2. Results from the testing process conducted through our research, which uncovered worrying behaviours such as: disclosure of and continued prompting for more personal information; validation and sycophancy regarding mental health diagnoses, self-harm, and misogynistic views; provision of information about medication and dosages; the ability for children to have conversations around sensitive topics and receive advice; unpredictability in interactions, ranging from excessively passive in vulnerable exchanges to excessively emotional; and inadequate support even in controlled therapy tools. 3. Key concerns from the stakeholders interviewed, including: lack of transparency in model training; privacy concerns; minor safety; self-diagnosis of mental health conditions and exacerbation of existing mental health conditions; erosion of the ability to socialise, especially for young people; and culturally insensitive outputs and content moderation. |
| Conclusions and Suggestions of Way Forward |
| 1. Suggestions for different stakeholders in the APAC region, namely policymakers, mental health professionals, parents, and educators. 2. Suggested ways forward for the regulation of Social AI systems: liability and enforcement frameworks where harm is caused; age verification and safeguards where minors enter into emotional interactions; re-direction to crisis hotlines and support resources for users expressing thoughts of self-harm; incentives for positive AI companion development that demonstrates measurable benefits; a focus on audit mechanisms and continued monitoring; built-in usage limits, break features, and friction in design; and limits on the claims that can be made about emotional or therapeutic benefits. |
| Number of Attendees |
|  |
| Gender Balance in Moderators/Speakers |
|  |
| How were gender perspectives, equality, inclusion or empowerment discussed? Please provide details and context. |
| These issues were not specifically discussed in the session, due to the nature of the topic. |
| Consent |
| I agree that my data can be submitted to forms.for.asia and processed by APrIGF organizers for the program selection of APrIGF 2025. |