| APrIGF 2025 Session Proposal Submission Form | |||||||||||
|---|---|---|---|---|---|---|---|---|---|---|---|
| Part 1 - Lead Organizer | |||||||||||
| Contact Person | |||||||||||
| Mr. Pranjal Timsina | |||||||||||
| Organization / Affiliation (Please state "Individual" if appropriate) * | |||||||||||
| AI for Justice Initiatives | |||||||||||
| Designation | |||||||||||
| Tech Lead | |||||||||||
| Gender | |||||||||||
| Male | |||||||||||
| Economy of Residence | |||||||||||
| Nepal | |||||||||||
| Stakeholder Group | |||||||||||
| Technical Community | |||||||||||
| Part 2 - Session Proposal | |||||||||||
| Session Title | |||||||||||
| AI4Justice: Bridging the Access Gap or Deepening the Digital Divide in the Asia-Pacific? | |||||||||||
| Thematic Track of Your Session | |||||||||||
| | |||||||||||
| Description of Session Formats | |||||||||||
| Panel Discussion (60 minutes) | |||||||||||
| Where do you plan to organize your session? | |||||||||||
| Onsite at the venue (with online moderator for questions and comments from remote participants) | |||||||||||
| Specific Issues for Discussion | |||||||||||
| Algorithmic Bias: AI models trained on historical data can perpetuate and even amplify existing biases against ethnic minorities, women, and low-income groups. How do we prevent "justice by algorithm" from becoming biased justice? Transparency & "Black Box" Problem: Many AI systems are opaque. How can a defendant challenge a decision if the logic behind an AI-driven recommendation for bail or sentencing is not explainable? Accountability Gap: If an AI system makes a flawed recommendation leading to an unjust outcome, who is accountable? The developer, the judicial officer who used the tool, or the government agency that procured it? Data Privacy: Justice systems handle highly sensitive personal data. How can this data be used to train AI models without violating citizens' right to privacy? The Digital Divide: If justice becomes increasingly digital, how do we ensure that those without digital literacy or access to technology are not left behind, further widening the access to justice gap? Guiding Questions for the Session: What are the most promising applications and riskiest deployments of AI in the justice sector within the APAC region today? How can we build accountability and transparency mechanisms (e.g., algorithmic audits, explainability standards) into the design and procurement of justice technologies? What specific safeguards are needed to protect the rights of vulnerable and marginalized communities in an era of automated justice? What kind of multi-stakeholder collaboration (between judiciary, government, tech companies, civil society, and academia) is required to develop context-aware AI governance frameworks for the diverse legal systems of the Asia-Pacific? | |||||||||||
| Describe the Relevance of Your Session to APrIGF | |||||||||||
| The APAC region is a microcosm of the global challenges and opportunities of AI4Justice. It is home to: 1. Diverse Legal Systems: Common law, civil law, religious law, and customary law systems, each presenting unique challenges for "one-size-fits-all" AI solutions. 2. Varying Levels of Development: The concerns of technologically advanced nations like Japan and Australia differ vastly from those of developing nations where basic digital infrastructure is still a challenge. 3. A Hub of Tech Innovation: The region is a global leader in AI development, making it ground zero for both innovation and the need for responsible governance. This workshop will specifically focus on these regional nuances, avoiding generic, Western-centric discussions and instead fostering a dialogue that is directly relevant to policymakers, judges, lawyers, and activists in the Asia-Pacific. | |||||||||||
| Methodology / Agenda (Please add rows by clicking "+" on the right) | |||||||||||
| | |||||||||||
| Moderators & Speakers Info (Please complete where possible) - (Required) | |||||||||||
| | |||||||||||
| Please explain the rationale for choosing each of the above contributors to the session. | |||||||||||
| Stakeholder Groups: Government, Private Sector, Civil Society, and Academia are all represented. Geography: Representation from Southeast Asia, East Asia, South Asia, and Oceania, ensuring a broad regional perspective. Expertise: The panel includes legal, technical, policy, and human rights expertise. Gender: 1 female, 3 male speakers, plus a moderator. | |||||||||||
| Please declare if you have any potential conflict of interest with the Program Committee 2025. | |||||||||||
| No | |||||||||||
| Are you or other session contributors planning to apply for the APrIGF Fellowship Program 2025? | |||||||||||
| Yes | |||||||||||
| Upon evaluation by the Program Committee, your session proposal may only be selected under the condition that you will accept the suggestion of merging with another proposal with similar topics. Please state your preference below: | |||||||||||
| Yes, I am willing to work with another session proposer on a suggested merger. | |||||||||||
| Brief Summary of Your Session | |||||||||||
| The session titled “AI4Justice: Bridging the Access Gap or Deepening the Digital Divide in the Asia-Pacific?” was a multi-stakeholder panel discussion that brought together representatives from the judiciary, international organizations, academia, civil society, and the private sector. The session was moderated by Mr. Amrit Uprety, Joint Registrar of the Supreme Court of Nepal, with speakers Hon. Justice Dr. Nahakul Subedi, Supreme Court of Nepal; Mr. Jaco Du Toit, UNESCO Country Representative to Nepal; Prof. Dr. Durgambini Patel, Dean of Kirti P. Mehta School of Law; and Mr. Pranjal Timsina, Tech Lead at Jurimatics. The discussion revolved around the promise and perils of Artificial Intelligence in justice systems, ethical and constitutional frameworks for its use, and the need to ensure inclusion while integrating technology. The first two addresses, by Mr. Pranjal on the promise of AI and Dr. Patel on its perils, set the tone for the discussion, establishing a balance between optimism and caution. Subsequent dialogue explored how ethical standards, governance mechanisms, and shared accountability models can ensure AI supports rather than replaces the judiciary. The speakers also discussed the challenges of bridging the digital divide, emphasizing that digital inclusion must precede automation. Audience interactions further deepened the discussion, touching on human oversight, the rights of citizens in low-connectivity areas, and practical pathways for inclusion in AI-enabled justice systems. | |||||||||||
| Substantive Summary of the Key Issues Raised and the Discussion | |||||||||||
| The central theme was the responsible and inclusive adoption of AI in justice systems. Mr. Pranjal Timsina outlined how technology can enhance efficiency and access to justice when designed to assist, not replace, judges. He emphasized accountability, fairness, and the shared responsibility of developers, enablers, and users in managing AI outcomes. Prof. Dr. Durgambini Patel underscored the irreplaceable role of human conscience and empathy in justice. Her remarks on accountability questioned who bears responsibility for AI errors. She cautioned that delegating final judicial authority to AI would erode the moral foundation of justice. Mr. Jaco Du Toit presented six ethical principles from UNESCO’s AI guidelines: human oversight, accountability, bias mitigation, inclusivity, accessibility, and contextual adaptation, highlighting that public trust hinges on adherence to these principles. Hon. Justice Dr. Nahakul Subedi stated that the Supreme Court of Nepal is open to technological integration but reaffirmed that AI should remain an assistant, not a substitute, to human judges. He identified six potential areas of AI use: case management, legal research, evidence analysis, tentative case judgment, report preparation, and execution of judgments. Discussions also centered on the digital divide, inclusion, and accountability. Justice Subedi emphasized that integration goes beyond installation, requiring digital literacy and equitable access. Prof. Patel reiterated that inclusion demands government investment in infrastructure and education. Mr. Pranjal stressed that inclusion must come before automation to avoid “automating exclusion.” | |||||||||||
| Conclusions and Suggestions of Way Forward | |||||||||||
| The session concluded with broad consensus that AI can enhance access to justice if implemented ethically, inclusively, and with strong governance. The shared accountability model, where developers, enablers, and users each bear defined responsibilities, was acknowledged as a practical framework. Speakers agreed that human oversight must remain central. AI should support judges by managing data, research, and reporting, but not make final judgments. Strengthening digital foundations through infrastructure, education, and legal reforms was identified as essential for meaningful integration. Justice Subedi’s distinction between installation and integration captured the essence of sustainable AI adoption: true integration requires inclusivity, literacy, and equal opportunity. Prof. Patel emphasized that access to justice is a fundamental right, comparable to food, water, and clothing, calling for state-led efforts in funding and reform. The key takeaway was to prioritize inclusion and accountability before automation. Ethical standards, constitutional alignment, and multi-stakeholder cooperation were seen as the way forward for ensuring that AI in justice bridges, rather than deepens, existing divides. | |||||||||||
| Number of Attendees (Please fill in numbers) | |||||||||||
| Gender Balance in Moderators/Speakers (Please fill in numbers) | |||||||||||
| | |||||||||||
| How were gender perspectives, equality, inclusion or empowerment discussed? Please provide details and context. | |||||||||||
| Inclusion and equality were discussed as foundational principles rather than afterthoughts. The panel recognized that AI and digital systems risk amplifying existing inequalities if inclusion is not designed in from the start. Justice Subedi highlighted that true integration requires equal digital access and literacy. Prof. Patel linked inclusion to empowerment, emphasizing that governments must allocate resources for infrastructure and education so that no group is left behind. Mr. Pranjal reinforced this with the idea of “inclusion first, automation second,” warning that neglecting inclusion would automate exclusion. While gender was not discussed as a standalone topic, the broader theme of digital inclusion encompassed gender equality and empowerment. | |||||||||||
| Consent | |||||||||||
| I agree that my data can be submitted to forms.for.asia and processed by APrIGF organizers for the program selection of APrIGF 2025. | |||||||||||