Key research themes
1. How can the Divide-and-Conquer pattern be generalized and implemented efficiently for parallel programming on multicore systems?
Divide-and-Conquer (DaC) is a fundamental programming paradigm that naturally exposes parallelism by recursively dividing a problem into subproblems, solving them independently, and combining the results. Research in this theme focuses on formalizing the DaC pattern in parallel contexts, providing high-level programming abstractions and template implementations for multicore architectures, and exploring optimization and execution models that exploit available concurrency efficiently. This theme is important because DaC algorithms underpin many real-world applications, yet parallelizing them efficiently remains a challenge for non-expert programmers.
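The generalized pattern described above can be sketched as a higher-order template: the programmer supplies only the problem-specific pieces (base-case test, base solver, divide, combine), while the template owns the recursion and task spawning. The sketch below is a hypothetical illustration, not any particular framework's API; it uses Python threads with a depth cut-off, whereas real DaC skeletons would use processes or native tasks for CPU-bound work.

```python
# Hypothetical DaC template: the recursion/parallelization structure is
# generic; is_base, solve_base, divide, and combine are the only
# problem-specific parameters. Threads are used here for structural
# illustration only (Python's GIL limits CPU-bound speedup).
from concurrent.futures import ThreadPoolExecutor

def parallel_dac(problem, is_base, solve_base, divide, combine, depth):
    if is_base(problem):
        return solve_base(problem)
    subs = divide(problem)
    if depth <= 0:
        # Below the parallelism cut-off: recurse sequentially.
        return combine([parallel_dac(s, is_base, solve_base, divide, combine, 0)
                        for s in subs])
    # Above the cut-off: solve subproblems in parallel tasks.
    with ThreadPoolExecutor(max_workers=len(subs)) as pool:
        futs = [pool.submit(parallel_dac, s, is_base, solve_base,
                            divide, combine, depth - 1) for s in subs]
        return combine([f.result() for f in futs])

def merge(parts):
    # Combine step for mergesort: merge two sorted sublists.
    left, right = parts
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

# Instantiating the template yields a parallel mergesort.
sorted_xs = parallel_dac(
    [5, 3, 8, 1, 9, 2],
    is_base=lambda xs: len(xs) <= 1,
    solve_base=lambda xs: xs,
    divide=lambda xs: (xs[:len(xs) // 2], xs[len(xs) // 2:]),
    combine=merge,
    depth=2,
)
```

The depth cut-off illustrates a common optimization in DaC skeletons: spawn parallel tasks only near the top of the recursion tree, where subproblems are large enough to amortize task-creation overhead.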
2. What role do parallel design patterns play in enabling high-level abstractions for complex parallel applications and how do they compare in performance and programmability?
Parallel design patterns encapsulate recurring parallel programming idioms as reusable, composable, high-level abstractions that aid programmers in developing parallel applications. Research in this theme explores how pattern-based frameworks can model complex real-world applications, assesses their programmability, and compares their performance against native or pragma-based parallel implementations. This is crucial because it addresses the challenge of balancing programming productivity with performance on multicore architectures.
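To make the programmability claim concrete, consider the task-farm (worker-pool) pattern, one of the most common parallel design patterns. In the hypothetical sketch below, the `Farm` class is an illustrative abstraction, not a real framework's API: the programmer supplies only a worker function, while thread creation, scheduling, and result ordering stay hidden inside the pattern.

```python
# Hypothetical task-farm pattern: the user writes sequential worker code;
# the pattern encapsulates all parallel machinery.
from concurrent.futures import ThreadPoolExecutor

class Farm:
    def __init__(self, worker, n_workers=4):
        self.worker = worker          # sequential, user-supplied logic
        self.n_workers = n_workers    # degree of parallelism, a pattern parameter

    def __call__(self, tasks):
        # The pattern owns worker management and preserves input order,
        # so the caller never touches threads or synchronization.
        with ThreadPoolExecutor(max_workers=self.n_workers) as pool:
            return list(pool.map(self.worker, tasks))

# Usage: a farm that squares numbers across three workers.
square_farm = Farm(lambda x: x * x, n_workers=3)
results = square_farm(range(6))   # → [0, 1, 4, 9, 16, 25]
```

Compared with a hand-written threaded version, the pattern-based code exposes a single tuning knob (`n_workers`), which is exactly the kind of trade-off between programmability and performance that this research theme evaluates.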
3. How do new programming models and languages based on parallel design patterns improve scalability, safety, and productivity in multicore and distributed systems?
New parallel programming languages and models aim to shift away from sequential-by-default designs toward type systems and concurrency models that inherently support parallelism, scalability, safety, and ease of use. Research investigates actor-based concurrency, social patterns for multi-agent systems, component integration patterns, and coordinated concurrent activities. These models leverage parallel design patterns to build abstractions that address common pitfalls such as race conditions, deadlocks, integration complexity, and poor code modularity. Understanding and refining these paradigms is essential for managing the increasing parallelism in modern hardware.
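The actor-based concurrency mentioned above can be illustrated with a minimal sketch. This is a hypothetical toy actor, not the API of any actor language or library: its state is confined to one thread and is mutated only in response to messages drawn from a mailbox, so no lock on the counter is ever needed — the structural property by which actor models rule out data races.

```python
# Hypothetical minimal actor: private state plus a message loop.
# All mutation happens on the actor's own thread, in mailbox order.
import queue
import threading

class CounterActor:
    def __init__(self):
        self.mailbox = queue.Queue()
        self._count = 0   # private state: touched only by the actor thread
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def _run(self):
        # Sequential message loop: processing one message at a time
        # is what makes explicit locking unnecessary.
        while True:
            msg, reply = self.mailbox.get()
            if msg == "inc":
                self._count += 1
            elif msg == "get":
                reply.put(self._count)
            elif msg == "stop":
                return

    def send(self, msg):
        # Fire-and-forget message.
        self.mailbox.put((msg, None))

    def ask(self, msg):
        # Request/reply message: block until the actor answers.
        reply = queue.Queue()
        self.mailbox.put((msg, reply))
        return reply.get()

# Usage: many sends from the caller, yet no race on the counter,
# because the mailbox serializes all updates.
actor = CounterActor()
for _ in range(100):
    actor.send("inc")
count = actor.ask("get")   # → 100: all prior "inc" messages precede "get"
actor.send("stop")
```

The request/reply pattern in `ask` shows why deadlocks remain a design concern even in actor systems: a cycle of actors blocking on each other's replies can still stall, which is one motivation for the coordination patterns this theme studies.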