Proposal

APrIGF 2024 Session Proposal Submission Form
Part 1 - Lead Organizer
Contact Person
Ms. Jhalak M. Kakkar
Email
Organization / Affiliation (Please state "Individual" if appropriate) *
Centre for Communication Governance (CCG) at National Law University Delhi
Designation
Executive Director
Gender
Female
Economy of Residence
India
Primary Stakeholder Group
Academia
List Your Organizing Partners (if any)
Jason Grant Allen, Centre for AI and Data Governance (CAIDG) at Singapore Management University (SMU), jgallen@smu.edu.sg
Part 2 - Session Proposal
Session Title
Contextualising Fairness: AI Governance in Asia
Session Format
Tutorial (30 minutes)
Where do you plan to organize your session?
Online only (with onsite facilitator who will help with questions or comments from the floor)
Specific Issues for Discussion
In this tutorial session, we will discuss the challenges with fairness in AI systems and cover key components of the principle of fairness (equality, bias and non-discrimination, inclusivity, and reliability). These components are relevant globally, but are interpreted differently across jurisdictions. For example, caste may not be relevant in the US or Europe but is a key aspect of non-discrimination in India. Understanding these components is essential to develop and deploy AI systems that uphold ethical principles and prevent potential harm, while enabling continued innovation.
We will discuss how ‘fairness’ is subjective in nature and must be tailored to specific regional and social contexts. Fairness in AI and related ethical norms and regulatory frameworks have often been examined and developed with a primary focus on the US and Europe. However, unique socio-cultural contexts across the APAC region affect the notion of ‘fairness’, making it ineffective to adopt existing metrics of fairness and apply them universally. Hence, in our tutorial, we will discuss what fairness entails in India, Taiwan, and Singapore to showcase how the concept varies even across jurisdictions within Asia.
Further, we will discuss case studies like the biased AI job recommendation system in Indonesia to illustrate the complexities of fairness in AI. The tutorial will end with an open discussion to gain perspectives from the participants on fairness metrics in their own countries and analyse how fairness as a concept differs based on socio-cultural contexts.
Recently, through a collaborative regional dialogue with SMU, we brought together diverse stakeholders from the APAC region to discuss the multifaceted concept of fairness in AI. This session aims to leverage the learnings from the dialogue.
Describe the Relevance of Your Session to APrIGF
The session relates to the theme of ethical governance of emerging technologies in the context of AI development and deployment. The increasing use of AI in sectors such as healthcare, finance, and law has raised concerns about societal risks, necessitating ethical governance. The ethical governance of AI through the use of focused principles has gained traction over the last few years; one such key principle is fairness. In this tutorial session, we will look at the gap between the existing understanding of fairness as a principle and its implementation, and discuss the need for an APAC-focused approach towards operationalising fairness, while accounting for the variation in the conception of what fairness entails even within the region.
Through a comparative analysis of fairness in India, Singapore and Taiwan, participants will explore the varying interpretations of fairness and will learn about the need for context-specific metrics of fairness that accommodate regional, social, and local contexts. The open discussion will allow participants to discuss and analyse different aspects of fairness in their own countries.
As an outcome of the session, the participants will have learned about what fairness entails, how the concept varies across jurisdictions, and the need for context-specific fairness metrics. The participants will be able to identify bespoke metrics to evaluate fairness in AI in their socio-cultural contexts and contribute to future conversations on AI governance. Building upon the insights from the session, we will publish a post on the CCG Blog and produce an episode of the CCG Tech Podcast discussing key takeaways from the session.
Methodology / Agenda (Please add rows by clicking "+" on the right)
Time frame (e.g. 5 minutes, 20 minutes, should add up to 60 minutes) Description
    • 1 minute: Introduction to the session (by the primary moderator)
    • 4 minutes: Overview of Fairness in AI (by the primary moderator)
    • 21 minutes (7 minutes per speaker): Fairness in Taiwan, Singapore and India (by three speakers)
    • 4 minutes: Reflection by Speakers and Closing
Moderators & Speakers Info (Please complete where possible)
  • Moderator (Primary)

    • Name: Tejaswita Kharel
    • Organization: Centre for Communication Governance at National Law University Delhi
    • Designation: Project Officer
    • Gender: Female
    • Economy / Country of Residence: India
    • Stakeholder Group: Academia
    • Expected Presence: Online
    • Status of Confirmation: Confirmed
    • Link of Bio (URL only): https://ccgdelhi.org/meet-people/tejaswita-kharel
  • Moderator (Facilitator)

    • Name: Isabel Hou
    • Organization: Taiwan AI Academy Foundation
    • Designation: Secretary General
    • Gender: Female
    • Economy / Country of Residence: Taiwan
    • Stakeholder Group: Academia
    • Expected Presence: In-person
    • Status of Confirmation: Invited
    • Link of Bio (URL only): https://tictec.mysociety.org/tictec-archive/2019/speaker/isabel-hou
  • Speaker 1

    • Name: Jason Grant Allen
    • Organization: Centre for AI and Data Governance, Singapore Management University
    • Designation: Director
    • Gender: Male
    • Economy / Country of Residence: Singapore
    • Stakeholder Group: Academia
    • Expected Presence: Online
    • Status of Confirmation: Confirmed
    • Link of Bio (URL only): https://faculty.smu.edu.sg/profile/jason-grant-allen-6551
  • Speaker 2

    • Name: Nidhi Singh
    • Organization: Centre for Communication Governance at National Law University Delhi
    • Designation: Project Manager
    • Gender: Female
    • Economy / Country of Residence: India
    • Stakeholder Group: Academia
    • Expected Presence: Online
    • Status of Confirmation: Confirmed
    • Link of Bio (URL only): https://ccgdelhi.org/meet-people/nidhi-singh
  • Speaker 3

    • Name: Isabel Hou
    • Organization: Taiwan AI Academy Foundation
    • Designation: Secretary General
    • Gender: Female
    • Economy / Country of Residence: Taiwan
    • Stakeholder Group: Academia
    • Expected Presence: In-person
    • Status of Confirmation: Confirmed
    • Link of Bio (URL only): https://tictec.mysociety.org/tictec-archive/2019/speaker/isabel-hou
Please explain the rationale for choosing each of the above contributors to the session.
Jason Grant Allen is the Director of the Centre for AI & Data Governance at the Singapore Management University. He has extensive expertise and experience at the intersection of law and emerging technology, especially in AI. As a seasoned academic, lawyer, and researcher, Jason offers valuable insights and perspectives on promoting fairness and accountability in AI systems.
Isabel Hou is an experienced lawyer who has specialized in innovative technology since 2000. She currently serves as the Secretary General of the Taiwan AI Academy Foundation, which is dedicated to democratizing AI technology through comprehensive training programs for Taiwan's workforce. Since 2017, under her leadership, the academy has successfully trained over 11,000 engineers and managers from more than 2,000 companies.
Nidhi Singh is a Project Manager at the Centre for Communication Governance (CCG), National Law University Delhi. She works extensively on AI regulation and data governance, providing policy comments to the Indian government and international organisations like OHCHR on the design of AI regulation and data governance. She is also part of the Asian Dialogue on AI Governance (a collaboration between Singapore Management University and Microsoft) and engages in discussions on principles and regulatory design for AI in countries like India, Singapore, South Korea, and New Zealand.
Tejaswita Kharel is a Project Officer at the Centre for Communication Governance (CCG) at National Law University Delhi. As a lawyer and researcher, her work relates to various aspects of information and technology law and policy including data protection, privacy and emerging technologies such as artificial intelligence and blockchain. Her work on the ethical governance and regulation of technology is guided by human rights based perspectives, democratic values, and constitutional principles.
If you need assistance to find a suitable speaker to contribute to your session, or an onsite facilitator for your online-only session, please specify your request with details of what you are looking for.
We are currently looking for an onsite facilitator to help manage our online-only session.
Please declare if you have any potential conflict of interest with the Program Committee 2024.
No
Are you or other session contributors planning to apply for the APrIGF Fellowship Program 2024?
No
APrIGF offers live transcript in English for all sessions. Do you need any other translation support or any disability related requests for your session? APrIGF makes every effort to be a fully inclusive and accessible event, and will do the best to fulfill your needs.
In order to enable inclusive participation, we request live transcription of our session. Additionally, since we aim to use interactive tools such as Mentimeter, whiteboards, and polling to engage both the online and onsite participants, we request that APrIGF ensure the online platform supports the use of these tools.
Brief Summary of Your Session
The session provided a comprehensive exploration of fairness as a critical component of AI ethics. It highlighted that fairness in AI cannot be universally defined, as it must be contextualised within different regional and societal frameworks. The session began with a brief explanation of what fairness in AI entails and why it is important to localise AI fairness principles, considering diverse realities across jurisdictions.

The first speaker, Nidhi Singh (Project Manager, Centre for Communication Governance at National Law University Delhi), spoke on the Indian context, highlighting the challenges posed by India’s vast and diverse population. She discussed key aspects of fairness, including equality, non-discrimination, and inclusivity, noting that these principles manifest differently across India's diverse society. She stressed the importance of transparent and inclusive AI systems that consider the country’s unique social dynamics.

Our second speaker, Isabel Hou (Secretary General, Taiwan AI Academy), discussed fairness in Taiwan, focusing on the draft legislation and guidelines aimed at preventing bias in AI systems. She highlighted the importance of accurate and reasonable decision-making, avoiding discrimination based on factors such as religion, gender, or ethnicity. Isabel also emphasised the need for professionals to review and regulate AI decision models to ensure fairness.

Our final speaker, Jason Grant Allen (Director, Centre for AI & Data Governance), addressed AI fairness in Singapore, discussing the IMDA’s model framework for AI governance. He highlighted the need to adapt fairness principles to Singapore’s unique legal and administrative culture, which is shaped by its multi-lingual and diverse society.

The session concluded with a Q&A, where the speakers discussed how the principle of fairness in AI could be operationalised. They discussed the importance of developing tools and testing mechanisms to assess AI models for fairness, and of involving experts from non-technical backgrounds in the development and deployment of AI tools and systems. They also highlighted the need to establish effective redressal mechanisms and to review public decision-making processes to ensure fairness in data-driven systems.
Substantive Summary of the Key Issues Raised and the Discussion
The importance of contextualisation of AI ethics:
The session started with an explainer on the importance of the principle of fairness in AI. The speakers discussed how the application of the principle of fairness varies significantly depending on the regional and societal context in which AI is deployed, and highlighted that fairness as a concept in AI ethics cannot be universally defined, given the diverse realities across different jurisdictions, necessitating a localised approach. This discussion underscored the need to move away from Global North interpretations of AI ethics principles and towards their contextual application.

AI Fairness in India:
In India, the concept of AI fairness is deeply intertwined with the country’s large and diverse population. Key principles such as equality, non-discrimination, and inclusivity are crucial, yet they manifest differently depending on factors such as religion, race, caste, and gender. Government bodies such as NITI Aayog have emphasised ethical governance in AI, particularly through the Responsible AI for All report. The Indian approach to fairness in AI is heavily influenced by constitutional values, aiming to ensure equal access to technology and opportunities, while also considering the complex social fabric of the country. The approach underscores the need for transparency and inclusivity in AI systems to avoid discrimination and ensure fairness across different societal segments.

AI Fairness in Taiwan:
In Taiwan, fairness in AI is guided by draft legislation and specific guidelines that emphasise avoiding bias in AI systems, particularly in sensitive sectors like finance. The Taiwanese approach to fairness extends beyond algorithms to include the outcomes and decisions generated by AI systems. These guidelines advocate for accurate, reasonable, and non-discriminatory decision-making, stressing the importance of reviewing and regulating AI models to minimise bias. Taiwan’s regulatory framework also calls for involving professionals in the design and planning of AI systems to ensure that they do not discriminate based on factors such as religion, gender, ethnicity, or other protected characteristics.

AI Fairness in Singapore:
Singapore's approach to AI fairness is built on the principles of equal treatment, non-discrimination, and freedom from bias, which are deeply rooted in the country’s legal and administrative culture. The IMDA's model framework for AI governance exemplifies how fairness is applied in different sectors, depending on the demographics and historical context of society. In Singapore, fairness also involves addressing the challenges posed by a multilingual society. Singapore’s AI fairness principles are not seen as mere checkboxes but as valuable, context-specific standards that must be applied thoughtfully in practice.
Conclusions and Suggestions of Way Forward
Through the discussions on the varying interpretations of fairness in AI across India, Taiwan, and Singapore, the session highlighted the critical need to contextualise AI ethics principles to specific regional, social and local contexts. The dialogue underscored that concepts such as equality, inclusivity, and non-discrimination are essential to the development of ethical AI systems, but these principles must be tailored to fit the unique cultural, legal, and social dynamics of each region. The session emphasised that a one-size-fits-all approach to fairness and AI ethics is not effective, particularly in diverse regions like the Asia-Pacific that demand localised approaches to AI governance.

The session facilitated a rich exchange of ideas and perspectives, emphasising the imperative of developing and deploying AI in ways that are fair, inclusive, and ethical. The discussions also acknowledged the practical challenges associated with operationalising fairness in AI. There was a consensus on the need for responsible and ethical design principles, robust human oversight, and well-defined AI governance mechanisms to navigate these challenges.
To operationalise fairness in AI effectively, the session proposed several key recommendations:

Development of Tools for Fairness Assessment: It is crucial to develop and implement tools and testing mechanisms that can help assess fairness in AI models across different contexts. These tools should be adaptable to the unique societal and cultural nuances of various regions.
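As a rough sketch of what such a testing mechanism might compute, the snippet below (a hypothetical illustration, not part of the session materials) implements the demographic parity difference, one widely used fairness metric: the gap in favourable-outcome rates between demographic groups. The data and group labels are invented; a real tool would need locally relevant group definitions (e.g. caste in India), per the session's point about context-specific metrics.

```python
# Hypothetical sketch of a fairness check: demographic parity difference,
# the gap in favourable-outcome rates across demographic groups.
# All data and group labels below are illustrative.

def demographic_parity_difference(outcomes, groups):
    """Return the largest gap in favourable-outcome rate across groups.

    outcomes: list of 0/1 model decisions (1 = favourable outcome)
    groups:   list of group labels, aligned with outcomes
    """
    counts = {}  # group -> (total, favourable)
    for outcome, group in zip(outcomes, groups):
        total, favourable = counts.get(group, (0, 0))
        counts[group] = (total + 1, favourable + outcome)
    rates = [favourable / total for total, favourable in counts.values()]
    return max(rates) - min(rates)

# Example: a model favours group "A" 80% of the time but group "B" only 40%.
outcomes = [1, 1, 1, 1, 0, 1, 0, 0, 0, 1]
groups = ["A"] * 5 + ["B"] * 5
print(round(demographic_parity_difference(outcomes, groups), 2))  # 0.4
```

A tool along these lines would be run over a model's decisions for each locally defined group, with a threshold on the gap (e.g. 0.1) chosen to suit the jurisdiction's own standards of non-discrimination.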

Interdisciplinary Collaboration: The involvement of experts from non-technical backgrounds—such as social scientists, ethicists, and legal professionals—is essential in the development and deployment of AI systems. This collaboration ensures that diverse perspectives are integrated into AI governance.

Establishing Redressal Mechanisms: Effective mechanisms for redressal must be created and operationalised to address grievances related to AI decision-making processes.

Continuous Review and Adaptation: AI systems and tools must undergo regular reviews and adaptations to ensure they remain fair, unbiased, and inclusive.
By implementing these recommendations, stakeholders can work towards creating AI systems that are ethical and aligned with the diverse realities of different regions.
Number of Attendees (Please fill in numbers)
    • On-site: 21
    • Online: 28
Gender Balance in Moderators/Speakers (Please fill in numbers)
  • Moderators

    • Female: 1
  • Speakers

    • Male: 1
    • Female: 2
How were gender perspectives, equality, inclusion or empowerment discussed? Please provide details and context.
These perspectives were discussed within the broader context of AI ethics, focusing on fairness in AI across India, Taiwan, and Singapore. In India, fairness is closely linked to principles of equality, non-discrimination, and inclusivity, with these principles manifesting in areas such as gender and religion. The Indian approach seeks to ensure equal access to technology and opportunities while considering the country’s complex social fabric. In Taiwan, AI fairness guidelines emphasise non-discriminatory decision-making, specifically identifying gender as a protected characteristic. Similarly, Singapore’s approach is grounded in equal treatment and non-discrimination, inherently including gender equality.

The discussion centred on the general principles of fairness, equality, and non-discrimination in AI systems across these jurisdictions, with gender being one of several key considerations.
Consent
I agree that my data can be submitted to forms.for.asia and processed by APrIGF organizers for the program selection of APrIGF 2024.