Key research themes
1. How do generative models incorporate structural knowledge to improve content generation?
This theme explores advances in generative AI models that integrate explicit structural or symbolic representations into deep generative networks to better capture complex structural properties and global coherence in generated outputs, particularly for high-dimensional data such as images. Such approaches address limitations of conventional deep generative methods, which struggle with the structural regularities and spatial symmetries inherent in many data domains. Combining program synthesis or neurosymbolic constructs with neural architectures yields improvements in generation quality, completion, and interpretability.
2. What are the challenges and impacts of generative AI adoption in higher education and academic integrity?
This research theme centers on the multifaceted implications of generative AI for higher education, including shifting pedagogical roles, cultural considerations, opportunities for learning enhancement, and concerns around academic honesty. Investigations probe the attitudes of educators and students toward AI tools, the need for policy adaptation, and the nuances of different cultural contexts, such as international student populations and varied academic disciplines. Emerging scholarship also evaluates how technological integration can support customized learning while managing ethical and institutional challenges.
3. How do hallucinations and failures in generative AI models inform their reliability, interpretability, and future improvement?
This line of inquiry investigates the prevalence, causes, and ramifications of hallucinations—outputs that appear plausible but are false—in large language models and generative AI. It recognizes hallucination as inherent to the probabilistic generative process, but critically examines its impact on application reliability in sensitive domains such as healthcare and legal systems. Explorations of deliberate misuse or induced failures reveal the underlying statistical mechanisms, informing AI literacy and highlighting the importance of transparency and domain adaptation for trustworthy deployment.
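The claim that hallucination is inherent to probabilistic generation can be illustrated with a minimal sketch. The example below uses an entirely hypothetical next-token distribution (the tokens and probabilities are invented for illustration, not drawn from any real model): whenever a model assigns nonzero probability mass to a false continuation, sampling will emit that continuation at roughly the rate of its assigned mass.

```python
import random

# Hypothetical next-token distribution for an illustrative factual prompt.
# The model concentrates mass on the correct answer but still assigns
# nonzero probability to plausible-but-false alternatives.
next_token_probs = {
    "Canberra": 0.55,   # correct
    "Sydney": 0.30,     # plausible but false
    "Melbourne": 0.15,  # plausible but false
}

def sample_token(probs, rng):
    """Draw one token from a categorical distribution."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(0)  # fixed seed for reproducibility
draws = [sample_token(next_token_probs, rng) for _ in range(10_000)]
false_rate = sum(t != "Canberra" for t in draws) / len(draws)

# A well-calibrated sampler "hallucinates" at roughly the total mass
# the model places on false continuations (0.45 in this toy setup).
print(f"false-continuation rate: {false_rate:.2f}")
```

The sketch shows why hallucination cannot be eliminated by sampling strategy alone: so long as false continuations carry probability mass, lowering the temperature or truncating the distribution only reduces, rather than removes, the chance of emitting them.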