Key research themes
1. How can the architecture of fully connected layers be optimized for effective transfer learning in CNNs?
This theme explores methods to automatically learn and tune the structure and hyperparameters of fully connected (FC) layers within convolutional neural networks (CNNs) during transfer learning. It addresses the challenges of architectural design and overfitting, particularly when adapting pre-trained models to target tasks with limited data or a domain shift from the source data.
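A minimal, dependency-free sketch of the idea behind this theme — searching over FC-head configurations while the pre-trained backbone stays fixed — might look like the following. The search space (widths, depths, dropout rates) and the scoring function are illustrative placeholders, not taken from any surveyed paper; in practice `validation_score` would train the candidate FC head on features from the frozen backbone and return validation accuracy:

```python
import itertools

# Hypothetical search space for the FC head of a pre-trained CNN
# (values are illustrative, not from the surveyed literature).
widths = [256, 512]      # units per hidden FC layer
depths = [1, 2]          # number of hidden FC layers
dropouts = [0.25, 0.5]   # dropout rate, to counter overfitting on small target sets

def validation_score(width, depth, dropout):
    # Stand-in for training the candidate FC head on the frozen
    # backbone's features and measuring validation accuracy.
    # Here a deterministic toy proxy so the example is runnable.
    return 0.8 + 0.01 * depth - 0.0001 * abs(width - 384) - 0.05 * abs(dropout - 0.4)

# Exhaustive search over the small grid; NAS-style methods replace this
# brute-force loop with a learned or evolutionary search strategy.
best = max(itertools.product(widths, depths, dropouts),
           key=lambda cfg: validation_score(*cfg))
print(best)  # best (width, depth, dropout) configuration under the toy score
```

The same loop structure carries over when the grid is replaced by a smarter search (Bayesian optimization, evolutionary search, or differentiable NAS); only the candidate generator and the evaluation budget change.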
2. What are the limitations of standard transfer learning models when applied to specialized domains such as 3D medical imaging, and how can custom models or advanced NAS (Neural Architecture Search) methods address these gaps?
This research area focuses on the challenges of applying standard 2D transfer learning architectures, pre-trained on natural images, to complex, high-dimensional medical imaging tasks, especially volumetric data. It examines the performance differences between direct transfer, custom-tailored 3D CNN architectures, and novel neural architecture transfer methods that efficiently adapt both topology and weights across tasks with limited data.
3. How do transfer learning concepts integrate with domain-specific applications in engineering, signal processing, and computer graphics, and what are the methodological innovations enabling these applications?
This theme investigates the adaptation and integration of transfer learning methodologies beyond conventional image classification, focusing on application-driven innovations in geotechnical engineering, video coding, seismic imaging, face recognition, and 4D shape mapping. Key insights include techniques for mitigating dataset scarcity, adapting architectures to new modalities, and PDE-based mapping for complex task requirements.