Key research themes
1. How can fairness and equity be integrated into scheduling algorithms to improve multi-day and multi-client resource allocation?
This research theme focuses on extending traditional scheduling frameworks by embedding equity and fairness considerations, particularly when resources must be allocated over multiple time periods or among multiple clients. The issue matters in real-world settings where consistent service or job-completion guarantees across different users or days improve satisfaction and measured fairness; embedding such guarantees, however, introduces algorithmic and complexity challenges for offline scheduling with fairness constraints. A simplified sketch of one such fairness notion follows.
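As a concrete illustration of one fairness notion that such frameworks embed, the sketch below applies max-min (water-filling) fairness to a deliberately simplified single-day model: a fixed capacity is shared among clients with known demands. The model, function name, and data are illustrative assumptions, not taken from any particular paper in this theme.

```python
# Minimal sketch of max-min (water-filling) fairness for a simplified model:
# one day of fixed capacity shared among clients with per-day demands.
# All names and numbers are illustrative assumptions.

def max_min_fair_allocation(capacity: float, demands: dict[str, float]) -> dict[str, float]:
    """Repeatedly grant every unsatisfied client an equal share of the
    remaining capacity, capped at that client's own demand."""
    allocation = {c: 0.0 for c in demands}
    remaining = capacity
    unsatisfied = {c for c, d in demands.items() if d > 0}
    while remaining > 1e-9 and unsatisfied:
        share = remaining / len(unsatisfied)
        for c in list(unsatisfied):
            grant = min(share, demands[c] - allocation[c])
            allocation[c] += grant
            remaining -= grant
            if allocation[c] >= demands[c] - 1e-9:
                unsatisfied.remove(c)
    return allocation


if __name__ == "__main__":
    # Capacity 10 split among three clients: A is fully served (2.0),
    # B and C each receive roughly 4.0 of their larger demands.
    print(max_min_fair_allocation(10.0, {"A": 2.0, "B": 5.0, "C": 9.0}))
```

Extending this to multi-day settings is where the complexity questions raised above arise, for example when allocations must also be consistent across days.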
2. What are the algorithmic and theoretical frameworks for scheduling multiprocessor tasks involving simultaneous execution and precedence constraints?
This theme addresses scheduling problems in which tasks require several processors simultaneously (gang scheduling) and are subject to precedence or incompatibility constraints. Such problems are modeled through graph-theoretic frameworks, notably mixed graph coloring, where tasks are vertices, precedence relations are directed arcs, conflicts are undirected edges, and colors correspond to time slots. These models let complexity results and approximation strategies from graph coloring carry over to the design of efficient scheduling algorithms for parallel tasks with synchronization requirements, as the sketch below illustrates.
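The sketch below shows the mixed graph coloring model in its simplest form: unit-time tasks are vertices, a directed arc (u, v) requires u's color (time slot) to be strictly smaller than v's, and an undirected edge requires distinct colors. The instance and function name are assumptions made purely for illustration.

```python
# Illustrative check of a mixed graph colouring: arcs encode precedence
# (strictly increasing colours), undirected edges encode conflicts
# (distinct colours). Graph data below is a made-up example.

def is_valid_mixed_coloring(colors, arcs, edges):
    """Validate a colour (time-slot) assignment against both constraint types."""
    for u, v in arcs:            # precedence: u must be scheduled before v
        if colors[u] >= colors[v]:
            return False
    for u, v in edges:           # conflict: u and v cannot share a slot
        if colors[u] == colors[v]:
            return False
    return True


if __name__ == "__main__":
    arcs = [("a", "b"), ("b", "c")]   # a before b before c
    edges = [("a", "c")]              # a and c additionally conflict
    print(is_valid_mixed_coloring({"a": 1, "b": 2, "c": 3}, arcs, edges))  # True
    print(is_valid_mixed_coloring({"a": 1, "b": 1, "c": 2}, arcs, edges))  # False: a, b violate precedence
```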
3. How can gang scheduling be optimized and adapted in contemporary multiprocessor and parallel computing environments, including clusters and real-time systems?
This theme explores applications, improvements, and adaptations of gang scheduling strategies for managing parallel jobs efficiently on multicore processors, clusters, and real-time systems. Emphasis is on algorithmic innovations that reduce scheduling overhead, exploit cache and multicore architectures, manage energy and fairness trade-offs, and handle periodic and rigid parallel tasks. The studies contribute practical scheduling frameworks that improve parallel workload throughput, response times, and resource utilization in large-scale and time-constrained computing environments; a minimal sketch of the underlying rigid-gang packing view follows.
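As a minimal illustration of the rigid-gang viewpoint behind many of these strategies, the sketch below packs jobs that each need a fixed number of processors simultaneously into shared time slices using first-fit, in the spirit of the classic Ousterhout-matrix formulation. The job data, parameters, and function name are illustrative assumptions, not drawn from any particular system.

```python
# Minimal sketch of time-sliced gang scheduling: each rigid job needs all
# of its processors within the same time slot, so jobs are packed first-fit
# (largest first) into slots of width num_processors. Illustrative only.

def pack_gangs_first_fit(jobs, num_processors):
    """jobs: dict name -> processors required (each <= num_processors).
    Returns a list of time slots, each a list of jobs that run together."""
    slots = []                              # each slot: [used_processors, [job names]]
    for name, need in sorted(jobs.items(), key=lambda kv: -kv[1]):
        for slot in slots:
            if slot[0] + need <= num_processors:
                slot[0] += need
                slot[1].append(name)
                break
        else:                               # no existing slot fits; open a new time slice
            slots.append([need, [name]])
    return [names for _, names in slots]


if __name__ == "__main__":
    jobs = {"fft": 4, "solver": 3, "render": 2, "stats": 1}
    # With 4 processors: fft fills one slice, solver+stats share a slice,
    # render takes a third -> [['fft'], ['solver', 'stats'], ['render']]
    print(pack_gangs_first_fit(jobs, num_processors=4))
```

Reducing the number of slices (and hence context-switch overhead) while respecting fairness and timing constraints is the kind of trade-off the studies in this theme address.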