Tuesday, November 16, 2010

Recursion

Recursion Extends Code Usefulness Rather Seriously In Our Neuron
Smoke that!

After running quite a few iterations of the net, it became apparent that very little useful linking went on. From time to time a long chain would emerge, only to be dismantled a few hundred iterations later. For the most part, neurons were simply never active and never linked to the source or the drain.

The problem seemed to be that randomly linking neurons is simply not effective enough. Even with a high number of neurons, once the critical path between source and drain is broken, it's rare to see it re-established.

Moreover, because of the pathway-rewards and active decay, once the source-drain path is broken, no pathways are rewarded, because none provide output.

So it seems a modification is in order: neurons must be added to an existing pathway.

This means that a basic net consists of at least one input, directly connected to at least one output (neurons with callbacks registered). A net can then have any number of free, unlinked neurons at startup. These will link randomly, with the stipulation that they must link in such a way that they form part of a path between the source-drain pair.

So, a neuron may intercede between two connected neurons, effectively extending the path. Or it may bridge entire sections of an existing path, forming a fork. In more complicated instances such a bridged connection may join two existing paths.

Consider:

src->a->b->c->drain

Connect x and get

src->a->x->b->c->drain
or
src->a->b->c->drain and src->a->x->drain

where x now bridges the b->c section of the original path
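
To make the wiring a bit more concrete, here is a minimal sketch of how the links could be repointed in the two cases above. This is illustration only, not the actual Neuron class: the tiny Node type, RANK and the helper names are stand-ins for _links[NEURON_RANK] and whatever linking code the net ends up using.

const int RANK = 4;

struct Node {
    Node* links[RANK];
    Node() { for (int i = 0; i < RANK; i++) links[i] = 0; }
};

// Extend: x intercedes between two directly connected nodes a and b,
// turning a->b into a->x->b.
void extendBetween(Node& a, Node& x, Node& b) {
    for (int i = 0; i < RANK; i++)
        if (a.links[i] == &b) a.links[i] = &x;  // repoint a's slot at x
    x.links[0] = &b;                            // x now forwards to b
}

// Bridge: x gives a a second route to some later node on the path
// (the drain, say), leaving the original section intact.
void bridgeTo(Node& a, Node& x, Node& target) {
    for (int i = 0; i < RANK; i++) {
        if (a.links[i] == 0) { a.links[i] = &x; break; }  // first free slot
    }
    x.links[0] = &target;
}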

All that is well and good, but we will need some help in linking neurons up in a sensible way. To do this, we add two helpers to the Neuron class: indexToDrain and distanceToDrain, detailed below.

The basic gist is that a neuron needs to know which of its neighbours ultimately leads to a drain. In these functions, a further stipulation is that it must be the shortest route to a drain. This should keep the nets as tight as they can be. Long neuron chains/paths are still possible, with interlinking.

We use recursion to resolve this problem, so that each neuron just needs to know whether it is a drain and, if not, whether its direct neighbours are.

There is also an added guard against doubling back on the path, so a neuron will not consider its caller (by definition a neighbour) when considering routes to a drain.

neuron.h:

...
public:
    int indexToDrain();
    int distanceToDrain(Neuron* from);
...

neuron.cpp:


int Neuron::indexToDrain() {
    int index = ERR_NOT_FOUND;
    if (callback != 0) {
        // this neuron is itself a drain: report the out-of-range index
        // NEURON_RANK rather than a link index
        return NEURON_RANK;
    } else {
        int distance = 0;
        for (int i = 0; i < NEURON_RANK; i++) {
            if (_links[i] != 0) {
                int link_distance = 1 + _links[i]->distanceToDrain(this);
                // keep the link with the shortest positive route to a drain;
                // if distance is still 0 here, we are not a drain and we have
                // not found a drain either (yet)
                if ((link_distance > 0) && ((link_distance < distance) || (distance == 0))) {
                    distance = link_distance;
                    index = i;
                }
            }
        }
        if (distance == 0) {
            return ERR_NOT_FOUND;
        }
    }
    return index;
}


int Neuron::distanceToDrain(Neuron* from) {
    int distance = 0;
    if (callback != 0) {
        // this neuron is a drain
        return THIS_NODE;
    } else {
        for (int i = 0; i < NEURON_RANK; i++) {
            // skip empty slots and the caller, so we never double back
            if (_links[i] != 0 && _links[i] != from) {
                int link_distance = 1 + _links[i]->distanceToDrain(this);
                // keep the shortest positive route; the link_distance > 0 guard
                // (as in indexToDrain) discards neighbours that could not reach
                // a drain. If distance is still 0 here, we are not a drain and
                // we have not found a drain either (yet)
                if ((link_distance > 0) && ((link_distance < distance) || (distance == 0))) {
                    distance = link_distance;
                }
            }
        }
    }
    if (distance == 0) { // we never found a drain
        return ERR_NOT_FOUND;
    }
    return distance;
}
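
As a rough illustration of the kind of check involved, a standalone toy version of the same shortest-route-with-caller-exclusion idea can be exercised like this. This is not the real Neuron class or its test code: the simplified Node type is an assumption, and 0 / -1 stand in for THIS_NODE and ERR_NOT_FOUND.

#include <cassert>
#include <vector>

struct Node {
    bool is_drain;
    std::vector<Node*> links;
    Node() : is_drain(false) {}

    // Shortest distance to a drain, never doubling back through 'from';
    // returns -1 if no drain is reachable.
    int distanceToDrain(Node* from) {
        if (is_drain) return 0;
        int best = -1;
        for (int i = 0; i < (int)links.size(); i++) {
            if (links[i] == 0 || links[i] == from) continue;
            int d = links[i]->distanceToDrain(this);
            if (d >= 0 && (best < 0 || d + 1 < best)) best = d + 1;
        }
        return best;
    }
};

int main() {
    Node src, a, b, c, x, drain;
    drain.is_drain = true;
    src.links.push_back(&a);
    a.links.push_back(&b);
    a.links.push_back(&x);        // x bridges the b->c section
    b.links.push_back(&c);
    c.links.push_back(&drain);
    x.links.push_back(&drain);

    assert(src.distanceToDrain(0) == 3);  // the a->x->drain bridge wins
    assert(b.distanceToDrain(&a) == 2);   // b->c->drain, ignoring caller a
    return 0;
}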




With these in hand, and tested of course, I can now make the net do proper linking to maintain src->drn pathways...
