Dynamic Scheduling Algorithm for Heterogeneous Multiprocessors
Many kinds of computation can be parallelized in theory, yet optimal parallelization is exceedingly difficult in practice, and notoriously so in heterogeneous multiprocessor systems.
In such systems, general-purpose processors typically allocate many tasks to special-purpose processors. The burning question is: how do we optimize both space and time, the two fundamental and disparate computing resources, in a system built from fundamentally different processors?
At a high level, we must maximize use of the program stores (space) of the special-purpose processors while minimizing the time spent managing and running programs. Many conflicting constraints arise.
For example, the program store of a special-purpose processor may hold more than one program simultaneously, yet such a processor can execute only one program at a time. The program store can be rewritten, but doing so is time-consuming, so program-switching must be minimized. What if all processors are busy? How should thread queues be managed? And so on.
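To make these trade-offs concrete, here is a toy sketch of one plausible policy: assign each incoming task to the processor whose program store already holds the needed program, falling back to the least-busy processor and paying a store-rewrite penalty otherwise. This is our own illustration with made-up names and costs, not the patented algorithm.

```python
# Hypothetical sketch of switch-aware task assignment; SWITCH_COST, RUN_COST,
# and the FIFO eviction policy are illustrative assumptions.

SWITCH_COST = 10  # time units to rewrite a program store (assumed)
RUN_COST = 1      # time units to run one task (assumed)

class Processor:
    def __init__(self, store_capacity):
        self.store = []          # programs currently loaded
        self.capacity = store_capacity
        self.busy_until = 0      # simulated clock time when processor frees up

def assign(task_program, processors, now):
    """Pick a processor, minimizing switch cost first, then waiting time."""
    def cost(p):
        wait = max(p.busy_until - now, 0)
        switch = 0 if task_program in p.store else SWITCH_COST
        return (switch, wait)
    best = min(processors, key=cost)
    start = max(best.busy_until, now)
    if task_program not in best.store:
        if len(best.store) >= best.capacity:
            best.store.pop(0)    # evict the oldest program (FIFO, assumed)
        best.store.append(task_program)
        start += SWITCH_COST     # pay the store-rewrite penalty
    best.busy_until = start + RUN_COST
    return best, best.busy_until

procs = [Processor(store_capacity=2) for _ in range(2)]
finish_times = [assign(prog, procs, now=0)[1] for prog in ["A", "B", "A", "A"]]
print(finish_times)  # → [11, 11, 12, 13]
```

Note how the repeated "A" tasks finish quickly once a processor holds program A: avoiding the rewrite dominates the cost, which is exactly why switch-minimization matters.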
We invented a novel dynamic scheduling algorithm that optimally allocates resources under such disparate constraints.
The patent on our algorithm is cited by over 75 other patents from leading computing companies, including IBM (45 citing patents), Nvidia (10), Intel, Amazon, HP, Qualcomm, Olympus, Siemens, Samsung, Toshiba, and Sony.
Read the patent to learn more: Method and system for allocation of special purpose computing resources in a multiprocessor system.
Static Scheduling Algorithm
We created a set of algorithms that compute optimized static, repeating schedules for sets of interlocking jobs (synchronous work graphs).
Our algorithms answer questions like: How do we achieve a reasonable turnaround time with the minimum number of processors? How do we achieve the fastest possible turnaround time (hint: more processors don't always help)?
Our scheduling algorithms apply to computing, and also generalise to high-volume industrial operations such as machine shops, service floors, fast-food chains, and medical testing laboratories. In all these cases, a fixed set of resources must typically process a defined set of inputs into many kinds of predefined output.
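The "more processors don't always help" effect can be seen with even a simple greedy list scheduler on a small job graph. The sketch below is our own illustration, not the algorithms described above; the graph and durations are invented for the example.

```python
# Illustrative greedy list scheduling of a small work graph: at each step,
# run any job whose dependencies have finished on the soonest-free processor.

def list_schedule(durations, deps, num_procs):
    """Return the makespan of a greedy schedule of a DAG on num_procs processors."""
    done_at = {}                     # job -> finish time
    proc_free = [0] * num_procs      # next free time of each processor
    remaining = set(durations)
    while remaining:
        # jobs whose dependencies have all finished
        ready = [j for j in remaining if all(d in done_at for d in deps.get(j, []))]
        job = min(ready)             # deterministic tie-break
        dep_ready = max((done_at[d] for d in deps.get(job, [])), default=0)
        p = min(range(num_procs), key=lambda i: proc_free[i])
        start = max(proc_free[p], dep_ready)
        done_at[job] = start + durations[job]
        proc_free[p] = done_at[job]
        remaining.remove(job)
    return max(done_at.values())

durations = {"a": 2, "b": 2, "c": 2, "d": 3}
deps = {"d": ["a", "b", "c"]}        # d must wait for a, b, and c
print([list_schedule(durations, deps, p) for p in (1, 2, 3, 4)])
# → [9, 7, 5, 5]
```

The makespan stops improving at three processors: once a, b, and c run in parallel, the chain through d fixes a floor of 5 time units, so a fourth processor sits idle.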
Analysis of Dynamic Scheduling
We created new mathematical theory that proves performance bounds on the dynamic scheduling of complex, dependent jobs.
Informally, the bounds state that any ‘sane’ scheduling algorithm can near-optimally schedule jobs created a certain way. This means an engineer's attention should go less to which scheduling algorithm to use, and more to how to create jobs in that specific “certain way”.
The theory has implications across cluster/cloud scheduling, multi-core/GPU scheduling, and superscalar/microinstruction scheduling. Furthermore, the mathematical proofs are surprisingly general: we find that our result also generalises to other dynamic scheduling methodologies, including project management methodologies such as Agile and Scrum.
Read the papers to learn more: Performance of Work Conserving Schedulers and Scheduling of Some Synchronous Dataflow Graphs.