Key research themes
1. How do various socio-technical factors contribute to AI bias, and what comprehensive frameworks can effectively identify and mitigate these biases?
This research focus addresses the multifaceted sources of AI bias, which extend beyond dataset imbalance and algorithm design to systemic, human, institutional, and societal factors. Understanding the interplay among these factors is crucial for developing socio-technical frameworks that move beyond purely computational fixes toward holistic management of AI bias, thereby enhancing the trustworthiness and fairness of deployed AI systems.
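As a point of contrast, the sketch below (hypothetical names and data, assuming binary predictions and exactly two demographic groups) illustrates what a purely computational fix might look like: a single group-fairness metric, demographic parity difference, computed over model outputs. Such a check captures only one narrow statistical notion of bias, which is precisely why this theme calls for socio-technical frameworks that also account for systemic, institutional, and societal factors.

```python
# Illustrative sketch only: a purely computational fairness check of the kind
# this theme argues is insufficient on its own. Names and data are hypothetical.
from typing import Sequence

def demographic_parity_difference(y_pred: Sequence[int], group: Sequence[str]) -> float:
    """Absolute difference in positive-prediction rates between two groups."""
    groups = sorted(set(group))
    assert len(groups) == 2, "sketch assumes exactly two groups"
    rates = []
    for g in groups:
        preds = [p for p, gr in zip(y_pred, group) if gr == g]
        rates.append(sum(preds) / len(preds))
    return abs(rates[0] - rates[1])

# Hypothetical usage: model predictions alongside a protected attribute.
print(demographic_parity_difference([1, 0, 1, 1, 0, 0], ["a", "a", "a", "b", "b", "b"]))
# A small gap on this metric says nothing about systemic, institutional, or
# societal sources of bias, hence the need for broader socio-technical frameworks.
```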
2. In which sectors and contexts do AI biases manifest, and how do these biases impact marginalized groups, including non-human entities?
This theme examines documented instances and forms of AI bias affecting diverse populations and stakeholders, including frequently overlooked categories such as animals, people with disabilities, and intersectional identities. These studies illuminate the socio-ethical implications of bias in practical deployments, from healthcare and hiring to judicial risk tools and recognition technologies, underscoring the need for inclusive fairness considerations that extend beyond human-centric views.
3. What are the emerging ethical, legal, and governance challenges of AI bias, and how can policy and regulatory frameworks be designed to ensure AI fairness and societal trust?
This theme addresses how legal doctrines, policy framings, and governance mechanisms shape AI bias mitigation, accountability, and public perception. It includes critical examination of copyright's impact on data access for bias mitigation, the power dynamics embedded in AI policy narratives, regional regulatory readiness, and calls for participatory governance that treats AI as a wicked problem, balancing technological capabilities with human rights and social equity.