Thursday, April 30, 2009

NIMD - A new parallel compute formalism

We have been working with configurations for applications running on hybrid, heterogeneous compute clusters. Ours started out as a plain-vanilla Rocks Cluster using CUDA Rolls.

The challenge in developing massively parallel applications centers on how data and tasks are partitioned. These partitioning decisions are closely coupled with the message channels within and between the cluster's components. We at Watt's Advanced Research Projects have found that an adaptive approach to these computational contexts works best for us. We have developed an "intelligent distributor" that discovers the cluster's context - its schedules, priorities, resources, utilizations, and configurations - and uses evolutionary neural networks to "reconfigure" the compute fabric, making efficient and effective use of that context.
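As a rough sketch of the idea only: the toy code below scores each node's observed context with a tiny neural policy and evolves that policy by simple mutation toward better load balance. The class names, context features, and the simplified (1+1)-style evolution loop are illustrative assumptions for this post, not our production distributor.

```python
# Hypothetical sketch of an "intelligent distributor": sample cluster context,
# score candidate task->node placements with a small neural policy, and evolve
# the policy by mutation. Names and the scoring model are illustrative only.
import random
import math
from dataclasses import dataclass

@dataclass
class NodeContext:
    """Observed state of one cluster node (hypothetical fields)."""
    utilization: float   # 0.0 (idle) .. 1.0 (saturated)
    queue_depth: int     # tasks already scheduled on the node
    gpu_count: int       # CUDA-capable devices available

class NeuralPolicy:
    """Tiny feedforward scorer: context features -> placement score."""
    def __init__(self, n_inputs=3, n_hidden=4):
        self.w1 = [[random.gauss(0, 1) for _ in range(n_inputs)] for _ in range(n_hidden)]
        self.w2 = [random.gauss(0, 1) for _ in range(n_hidden)]

    def score(self, node: NodeContext) -> float:
        x = [node.utilization, node.queue_depth / 10.0, node.gpu_count / 4.0]
        h = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in self.w1]
        return sum(w * hi for w, hi in zip(self.w2, h))

    def mutated(self, sigma=0.1) -> "NeuralPolicy":
        child = NeuralPolicy()
        child.w1 = [[w + random.gauss(0, sigma) for w in row] for row in self.w1]
        child.w2 = [w + random.gauss(0, sigma) for w in self.w2]
        return child

def assign(policy, nodes):
    """Pick the node whose context the policy scores highest."""
    return max(range(len(nodes)), key=lambda i: policy.score(nodes[i]))

def evolve(nodes, generations=50):
    """Simple (1+1) evolution: keep a mutant if it balances load at least as well."""
    def imbalance(policy):
        loads = [n.queue_depth for n in nodes]
        loads[assign(policy, nodes)] += 1
        return max(loads) - min(loads)

    best = NeuralPolicy()
    for _ in range(generations):
        cand = best.mutated()
        if imbalance(cand) <= imbalance(best):
            best = cand
    return best

if __name__ == "__main__":
    cluster = [NodeContext(0.9, 8, 2), NodeContext(0.2, 1, 4), NodeContext(0.5, 3, 2)]
    policy = evolve(cluster)
    print("dispatch next task to node", assign(policy, cluster))
```

In practice the context vector and fitness measure would come from the cluster's scheduler and monitoring layer rather than a hand-written load-spread heuristic; the sketch only shows the shape of the adaptation loop.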

We have termed this compute fabric NIMD (networked instruction, multiple data). It differs from traditional MIMD in that the architecture is non-hierarchical and, more specifically, can be recurrent. The additional complexity is not a problem; it gives the compute fabric an ensemble approach to the solution space. Thinking back, even hierarchical parallelism schemes are non-deterministic to some degree; we simply seem to make better use of that fact.
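To make the contrast concrete, here is a small sketch that models a compute fabric as a directed graph of stages: a hierarchical MIMD-style decomposition stays acyclic (tree-like), while a NIMD-style fabric lets a channel feed back into an earlier stage. The stage names and the cycle check are an assumed way of modeling the idea, not part of the formalism itself.

```python
# Illustrative model only: a fabric is a directed graph of compute stages whose
# message channels may form feedback loops (recurrence), unlike a strictly
# hierarchical decomposition, which is acyclic.
from collections import defaultdict

class Fabric:
    def __init__(self):
        self.channels = defaultdict(list)  # stage -> downstream stages

    def connect(self, src, dst):
        self.channels[src].append(dst)

    def is_recurrent(self):
        """True if any message channel feeds back into an earlier stage (a cycle)."""
        WHITE, GREY, BLACK = 0, 1, 2
        color = defaultdict(int)

        def visit(stage):
            color[stage] = GREY
            for nxt in self.channels[stage]:
                if color[nxt] == GREY:          # back edge -> feedback loop
                    return True
                if color[nxt] == WHITE and visit(nxt):
                    return True
            color[stage] = BLACK
            return False

        return any(color[s] == WHITE and visit(s) for s in list(self.channels))

# Hierarchical MIMD-style decomposition: a tree of stages, no feedback.
tree = Fabric()
tree.connect("root", "split_a")
tree.connect("root", "split_b")
tree.connect("split_a", "leaf")

# NIMD-style fabric: a downstream stage feeds results back to the distributor.
nimd = Fabric()
nimd.connect("distributor", "gpu_stage")
nimd.connect("gpu_stage", "refine")
nimd.connect("refine", "distributor")

print(tree.is_recurrent())  # False -> purely hierarchical
print(nimd.is_recurrent())  # True  -> recurrent, non-hierarchical
```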
