Monday, May 11, 2009

HyperNEAT for CUDA

It's time for me to give back to the NeuroEvolution of Augmenting Topologies (NEAT) community. I am starting to port strict HyperNEAT to a stand-alone CUDA implementation. For those who don't know, CUDA is NVIDIA's Compute Unified Device Architecture.

Unless I hear otherwise, it will be a simple implementation for one multi-core CPU (say two on-die cores) and one CUDA-capable GPU (say an 8800 GTX or a GTX 295). Nothing too fancy, just enough to give insight into how I would implement HyperNEAT on CUDA. The way I'm currently doing this may be novel - I'm using a factory class to generate the actual CUDA code, and I should probably offer my experiment development GUI to help others understand this "magic smoke" a little better.
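To make the factory-class idea concrete, here is a rough sketch of the kind of thing I mean. The class and function names (KernelFactory, emitLayerKernel) are placeholders for illustration, not the actual WattsNEAT code; the emitted source would be written to a .cu file and compiled with nvcc before the experiment runs.

// Illustrative sketch only: a factory that emits CUDA kernel source for one
// substrate layer-to-layer activation pass. Names are hypothetical.
#include <sstream>
#include <string>

class KernelFactory {
public:
    // Emit a kernel computing out[j] = tanh(sum_i w[j*nIn+i] * in[i]) for one
    // layer pair, with the substrate sizes baked in as compile-time constants.
    std::string emitLayerKernel(const std::string& name, int nIn, int nOut) const {
        std::ostringstream src;
        src << "extern \"C\" __global__ void " << name << "(\n"
            << "    const float* in, const float* w, float* out)\n"
            << "{\n"
            << "    int j = blockIdx.x * blockDim.x + threadIdx.x;\n"
            << "    if (j >= " << nOut << ") return;\n"
            << "    float sum = 0.0f;\n"
            << "    for (int i = 0; i < " << nIn << "; ++i)\n"
            << "        sum += w[j * " << nIn << " + i] * in[i];\n"
            << "    out[j] = tanhf(sum);  // substrate activation\n"
            << "}\n";
        return src.str();
    }
};

The point is that the substrate dimensions get baked into the generated kernel, and the weight array it consumes is the one HyperNEAT produces by querying the CPPN at each pair of substrate coordinates.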

Watt is a new species?

As evolutionary algorithms evolve (apparently a quasi-Lamarckian form of evolution, where what you learn by using evolutionary algorithms ends up affecting the evolutionary algorithm itself), they look less and less like their ancestors. They soon become a new species, not just the next generation of their parents.

(BTW, AIP's Inside Science News Service has a Mother's Day article on epigenetics that is interesting - but way beyond my pay grade.)

I just did a quick comparison of WattsNEAT with Jason Gauci's version of HyperNEAT. Did these two exemplars come from the same solar system, or even galaxy? Could have fooled me.

Here is the only observable common trait: they both go through cycles of complexification and pruning. The way they go through these cycles has transmogrified into something unrecognizable, unless you consider alleles a form of CPPN (which IMO they are).

I wonder if recursion, which is a scaled form of recurrence, converts source code into its fractal representation and back while you're not looking, or if it's just entropy, crossover, and mutation. I ask because the crossover and mutation functions really look different now.

Saturday, May 9, 2009

SwarmFest 2009, June 28 - 30 in Santa Fe

SwarmFest is the annual international conference for Agent-Based Modeling and Simulation - IMO, a necessary skill set for understanding complexity in systems. I just received notification that I have been accepted to present at this year's conference, and I am excited.

Why is this important to me?

My presentation details recent findings I've made regarding the use of a form of Swarm in which the schedule and rules for the agents are not hard-coded in the experiment. In our case, agents "discover" the rules and patterns based upon whatever pre-existing structure and connections they find. That pre-existing part of the solution space is what we call a context. The discovery mechanism that searches the context for solutions is a topology and weight evolving artificial neural network called NEAT, originally developed by Ken Stanley and Risto Miikkulainen at the University of Texas. (NEAT is actually the great-grandfather; WattsNEAT is our N-th generation implementation of HyperNEAT.)
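As a purely illustrative sketch of what "discovering the rules" means here (the Network type below is a stand-in reduced to a single weighted sum so the example is self-contained; the real WattsNEAT phenotype is an arbitrary evolved topology):

// Illustrative only: an agent whose per-tick rule comes from an evolved
// network rather than from hand-written code.
#include <cmath>
#include <cstddef>
#include <vector>

struct Network {
    std::vector<float> weights;                       // one weight per context input
    float activate(const std::vector<float>& in) const {
        float sum = 0.0f;
        for (std::size_t i = 0; i < in.size() && i < weights.size(); ++i)
            sum += weights[i] * in[i];
        return std::tanh(sum);
    }
};

enum Action { Idle, Connect, Prune };

// Each tick the agent senses its local context (whatever pre-existing structure
// and connections it finds), feeds that through the evolved network, and the
// output becomes its rule for this step.
Action stepAgent(const Network& net, const std::vector<float>& localContext) {
    float out = net.activate(localContext);
    if (out >  0.5f) return Connect;   // strengthen or create a link
    if (out < -0.5f) return Prune;     // remove a link
    return Idle;
}

The essential difference from a conventional Swarm experiment is that nothing in stepAgent is a hand-written rule; the mapping from context to action is whatever the evolutionary search settles on.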

Now for the cool part. In order to configure the compute fabric for the many experiment contexts we might create in which to evolve solutions, we use WattsNEAT to evolve configurations of the compute fabric itself. This is our solution to the problem of partitioning data and structures for massively parallel computation.
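In spirit (and only in spirit - this is a hedged sketch, not the actual WattsNEAT code), it amounts to treating the fabric as a substrate: query the evolved CPPN at pairs of (task, node) coordinates and use the output as a placement affinity. The cppnQuery function below is a hypothetical hook, reduced to a toy distance function so the sketch is self-contained.

#include <cstddef>
#include <utility>
#include <vector>

// Hypothetical stand-in for evaluating the evolved CPPN at a 4-D point; the
// real version would walk the evolved network.
float cppnQuery(float tx, float ty, float nx, float ny) {
    float dx = tx - nx, dy = ty - ny;
    return 1.0f / (1.0f + dx * dx + dy * dy);     // closer => higher affinity
}

// Assign each task (e.g. a partition of the experiment's data) to the compute
// node with the highest evolved affinity - the fabric configuration is
// discovered rather than hard-coded.
std::vector<std::size_t> assignTasks(const std::vector<std::pair<float,float>>& tasks,
                                     const std::vector<std::pair<float,float>>& nodes) {
    std::vector<std::size_t> placement(tasks.size(), 0);
    for (std::size_t t = 0; t < tasks.size(); ++t) {
        float best = -1e30f;
        for (std::size_t n = 0; n < nodes.size(); ++n) {
            float a = cppnQuery(tasks[t].first, tasks[t].second,
                                nodes[n].first, nodes[n].second);
            if (a > best) { best = a; placement[t] = n; }
        }
    }
    return placement;
}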

If you have followed along so far: we have just used WattsNEAT to configure the very compute fabric in which WattsNEAT itself can run effectively and efficiently at massively parallel scale. For those of you who have used Torque or SGE rolls on a Rocks cluster, the resulting advantage is an intelligent distribution and scheduling agent that configures the compute fabric according to the context (the schedules, priorities, and component configurations it discovers at the time).

This is not as static as it may initially appear.

I'm moving to get the specifics (of which there are many) written down and filed, at least, as a provisional patent. After that, we will decide the bifurcation strategy between proprietary paths and open source.