Academia.edu

Generative AI

1,484 papers
2,480 followers
About this topic
Generative AI refers to a subset of artificial intelligence that focuses on creating new content, such as text, images, or music, by learning patterns from existing data. It employs algorithms, particularly deep learning models, to generate outputs that mimic human-like creativity and originality.

Key research themes

1. How do generative models incorporate structural knowledge to improve content generation?

This theme explores advances in generative AI models that integrate explicit structural or symbolic representations into deep generative networks to better capture complex global properties and coherence in generated outputs, particularly for high-dimensional data such as images. Such approaches address limitations of conventional deep generative methods, which struggle with the structural regularities and spatial symmetries inherent in many data domains. Combining program synthesis or neurosymbolic constructs with neural architectures yields improvements in generation quality, completion, and interpretability.

Key finding: Proposes a two-phase generative model combining programmatic structure (e.g., 2D for-loops encoding spatial repetitions) with deep generative models to capture complex global patterns such as windows on building facades;...
Key finding: Provides an in-depth overview of GANs as a game-theoretic approach for learning implicit generative models, highlighting their ability to produce realistic high-resolution images; discusses challenges in training stability...
Key finding: Introduces the original GAN framework, wherein a generator and discriminator engage in a minimax game, enabling the generator to implicitly learn the data distribution. Demonstrates generation of high-quality images via... (a toy sketch of this adversarial training loop appears after these key findings)
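To make the adversarial setup referenced above concrete, the following is a minimal toy sketch of GAN training in PyTorch on synthetic 2-D data. It uses the non-saturating generator loss that is commonly substituted for the original minimax objective in practice; the network sizes, data distribution, and hyperparameters are invented for illustration and do not correspond to any of the models surveyed here.

import torch
import torch.nn as nn

# Toy setup: learn to imitate a shifted 2-D Gaussian; all dimensions are illustrative.
latent_dim, data_dim, batch = 8, 2, 64
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(batch, data_dim) * 0.5 + 2.0   # samples from the "true" distribution
    fake = G(torch.randn(batch, latent_dim))          # generator's implicit samples

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    opt_d.zero_grad()
    d_loss = bce(D(real), torch.ones(batch, 1)) + bce(D(fake.detach()), torch.zeros(batch, 1))
    d_loss.backward()
    opt_d.step()

    # Generator step: non-saturating variant of the minimax game,
    # maximizing log D(G(z)) rather than minimizing log(1 - D(G(z))).
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(batch, 1))
    g_loss.backward()
    opt_g.step()

In the neurosymbolic work described above, a learned or synthesized program (for example, loop structure encoding repeated facade elements) supplies the global layout, while a deep generative component of this kind fills in appearance details.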

2. What are the challenges and impacts of generative AI adoption in higher education and academic integrity?

This research theme centers on the multifaceted implications of generative AI for higher education, including shifting pedagogical roles, cultural considerations, opportunities for learning enhancement, and concerns around academic honesty. Investigations probe the attitudes of educators and students toward AI tools, the need for policy adaptations, and the nuances presented by different cultural contexts, such as international student populations and academic disciplines. Emerging scholarship also evaluates technological integration to support customized learning while managing ethical and institutional challenges.

Key finding: Explores the dual nature of generative AI in Western higher education, balancing enthusiasm for transformative potential across diverse models (e.g., ChatGPT, Claude) with concerns about academic integrity, especially for...
Key finding: Through mixed methods involving Chinese postgraduate students, reveals ambivalence toward AI usage for academic success, recognizing benefits for planning and text refinement but expressing concerns over superficial competence....
Key finding: Through focus groups with journalism and mass communication students and faculty, finds consensus that AI serves as a valuable initial aid for research and learning yet risks becoming a crutch that impedes skill acquisition....
Key finding: Analyzes how ChatGPT, as a representative large language model (LLM), has popularized generative AI applications across diverse domains but also spurred debates on limitations and expectations. Surveys major competing...
Key finding: Evaluates Google NotebookLM’s retrieval-augmented generation architecture that grounds AI responses explicitly in user-provided documents to reduce hallucinations, facilitating personalised, contextualised learning and... (a minimal sketch of the general retrieval-and-grounding pattern appears after these key findings)
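The grounding pattern behind tools of this kind can be illustrated with a short retrieval-augmented generation sketch: the user's documents are embedded, the chunks most similar to the question are retrieved, and only those chunks are placed in the prompt. The embed function below is a toy bag-of-words stand-in for a real embedding model, and the prompt wording is invented; this is a sketch of the general pattern, not NotebookLM's actual pipeline.

import numpy as np

def embed(texts, vocab):
    # Toy bag-of-words embedding; a real system would call an embedding model here.
    return np.array([[t.lower().count(w) for w in vocab] for t in texts], dtype=float)

def retrieve(question, chunks, vocab, k=2):
    # Rank document chunks by cosine similarity to the question and keep the top k.
    q = embed([question], vocab)[0]
    C = embed(chunks, vocab)
    sims = C @ q / (np.linalg.norm(C, axis=1) * np.linalg.norm(q) + 1e-9)
    return [chunks[i] for i in np.argsort(-sims)[:k]]

def grounded_prompt(question, chunks, vocab):
    # The retrieved chunks become the only context the model is told to use,
    # which is the step that ties answers to the user's own sources.
    context = "\n".join(retrieve(question, chunks, vocab))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

docs = ["Generative AI creates new text and images from learned patterns.",
        "Retrieval grounds model answers in documents supplied by the user."]
vocab = ["generative", "retrieval", "documents", "grounds", "images", "patterns"]
print(grounded_prompt("How does retrieval keep answers grounded?", docs, vocab))

Because the model sees only the retrieved passages, unsupported claims are easier to spot and trace back to, or flag as absent from, the supplied documents.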

3. How do hallucinations and failures in generative AI models inform their reliability, interpretability, and future improvement?

This line of inquiry investigates the prevalence, causes, and ramifications of hallucinations (outputs that appear plausible but are false) in large language models and generative AI. It treats hallucination as inherent to probabilistic generation while critically examining its impact on reliability in sensitive domains such as healthcare and legal systems. Studies of deliberate misuse or induced failures reveal the underlying statistical mechanisms, informing AI literacy and highlighting the importance of transparency and domain adaptation for trustworthy deployment.

Key finding: Offers a comprehensive survey detailing how hallucinations pervade LLM outputs across domains such as healthcare, law, and finance, compromising trustworthiness in critical applications. It refines taxonomies of hallucination...
Key finding: Introduces the concept of the 'Slopocene' to describe the overproduction of low-quality or hallucinated AI content, arguing that hallucinations are intrinsic to LLM generative dynamics rather than mere bugs. By intentionally... (a toy sampling example illustrating this intrinsic randomness appears after these key findings)
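As a rough illustration of why such failures are intrinsic rather than incidental, the toy example below samples a "next token" from a temperature-scaled softmax: even when the correct continuation receives the most probability mass, fluent but wrong alternatives keep non-zero probability and are drawn some fraction of the time. The candidate strings, logits, and temperature are invented for illustration and are not taken from any surveyed model.

import numpy as np

def sample_next_token(logits, temperature, rng):
    # Temperature-scaled softmax over the candidate tokens, then a random draw.
    z = np.asarray(logits, dtype=float) / max(temperature, 1e-6)
    p = np.exp(z - z.max())
    p /= p.sum()
    return rng.choice(len(p), p=p)

# Invented candidates for a factual continuation: one correct, two fluent but wrong.
candidates = ["1989", "1991", "1987"]
logits = [2.0, 1.2, 0.8]   # the correct answer merely has more mass, not all of it
rng = np.random.default_rng(0)
counts = {c: 0 for c in candidates}
for _ in range(1000):
    counts[candidates[sample_next_token(logits, temperature=1.0, rng=rng)]] += 1
print(counts)   # a substantial minority of draws pick a confident-sounding wrong answer

Lowering the temperature only sharpens the distribution toward the highest-scoring candidate, which may itself be unsupported; this is consistent with the surveys above, which emphasize grounding, domain adaptation, and transparency rather than treating hallucination as a removable bug.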
