Neural Network Learning Tastes

By robin | December 26, 2005

A little project I’m working on at the moment: creating a neural network that will (maybe) learn the aesthetic tastes of a person.

There are two parts to this. The first creates tree-like shapes (what Richard Dawkins calls biomorphs in his book The Blind Watchmaker). Here is an online demo of these biomorphs. My version is going to allow for significantly more complex biomorphs.
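
To make the idea concrete, here is a rough Python sketch of what a biomorph genotype and its recursive development into a tree shape might look like. The specific genes (depth, angle, length, decay) and their ranges are placeholders made up for illustration, not the actual design:

    import math
    import random

    def random_genotype():
        """A made-up gene set: a few numbers that control the recursive tree."""
        return {
            "depth":  random.randint(3, 8),        # levels of branching
            "angle":  random.uniform(0.1, 1.2),    # branch spread in radians
            "length": random.uniform(10.0, 40.0),  # trunk length
            "decay":  random.uniform(0.5, 0.9),    # how much shorter each level gets
        }

    def develop(genes, x=0.0, y=0.0, heading=math.pi / 2, depth=None, segments=None):
        """Decode a genotype into a list of line segments (the phenotype)."""
        if segments is None:
            segments = []
        if depth is None:
            depth = genes["depth"]
        if depth == 0:
            return segments
        length = genes["length"] * genes["decay"] ** (genes["depth"] - depth)
        nx = x + length * math.cos(heading)
        ny = y + length * math.sin(heading)
        segments.append(((x, y), (nx, ny)))
        # Two child branches splayed left and right of the parent twig.
        develop(genes, nx, ny, heading + genes["angle"], depth - 1, segments)
        develop(genes, nx, ny, heading - genes["angle"], depth - 1, segments)
        return segments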

When interacting with the program, the user selects the biomorph they like, and that one is used as the progenitor of the next batch. Mutation is then applied to it, and the set of mutated versions is presented to the user so they can select the next ‘best’ one.
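
A generation step could then be as simple as mutating copies of the chosen parent. The mutation rates and litter size below are arbitrary, and ask_user_to_pick just stands in for whatever the UI ends up doing:

    import random

    def mutate(genes, rate=0.3):
        """Return a slightly perturbed copy of a genotype."""
        child = dict(genes)
        if random.random() < rate:
            child["depth"] = max(1, child["depth"] + random.choice([-1, 1]))
        for key in ("angle", "length", "decay"):
            if random.random() < rate:
                child[key] *= random.uniform(0.8, 1.2)
        return child

    def next_generation(parent, litter_size=9):
        """Breed a batch of mutants from the user's chosen biomorph."""
        return [mutate(parent) for _ in range(litter_size)]

    # One round of the interactive loop:
    #   batch = next_generation(chosen)
    #   chosen = batch[ask_user_to_pick(batch)]   # ask_user_to_pick is the UI's job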

The other part of this program is the neural network. It will be trained on the user’s input and selections, and after a while it will be allowed to run on its own, selecting what it thinks is the best based on its training. After running for a while (several dozen generations or so), the results will be displayed.
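
As a very rough sketch of that part, here is a tiny one-hidden-layer network that scores a genotype and picks from a batch. The input encoding, the layer size, and the (omitted) training step are all guesses rather than the actual design:

    import math
    import random

    class TasteNet:
        """A one-hidden-layer scorer. Training (nudging the weights towards the
        user's picks and away from the rejects) is left out of this sketch."""

        def __init__(self, n_inputs, n_hidden=8):
            self.w1 = [[random.gauss(0, 0.1) for _ in range(n_inputs)]
                       for _ in range(n_hidden)]
            self.w2 = [random.gauss(0, 0.1) for _ in range(n_hidden)]

        def score(self, x):
            hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in self.w1]
            out = sum(w * h for w, h in zip(self.w2, hidden))
            return 1.0 / (1.0 + math.exp(-out))   # how much the net "likes" this input

    def encode(genes):
        """Simplest possible input: the raw gene values."""
        return [genes["depth"], genes["angle"], genes["length"], genes["decay"]]

    def pick(net, batch):
        """Once trained, let the network choose in place of the user."""
        return max(batch, key=lambda genes: net.score(encode(genes)))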

The point of this (aside from the fun of implementing it) is to see how well the neural network can pick up the ‘styles’ that the user was going for, and thus to see if it can end up knowing what people like.

Here are the current details on the design of this. The design is being updated over time. Comments/suggestions/whatever are welcome.

4 thoughts on “Neural Network Learning Tastes”

  1. tikitu

    Might also be interesting to give the NN some more vision-inspired input — widest span hor/ver, average darkness, ratio of hor/ver twigs, etc. It seems odd to me to have both the NN and the biomorphs operating on exactly the same genetic representation (as I presume is the idea?).

    Or I suppose you could give the *phenome* as input (as an image), but then you probably have to do a whole lot more serious vision-style preprocessing.

  2. robin Post author

    I have yet to decide on the input to the neural network, and I might try several. I’ll probably start with the decoded genotype representation, but I would like to get something more phenotype-based. However, the fact that giving a neural network unprocessed image data isn’t too effective (or I imagine it isn’t, anyway; I may try it) makes that harder. Ideally I’d convert the representation into some form of suitable input. I’m just not sure yet what that is.

  3. tikitu

    Yeah, I agree that straight image data is probably not going to do much. But I wondered about just supplementing the genotype with some extra preprocessed image features. You could even look at the NN weights afterwards and see which was the better predictor, genotype or (feature-extracted) phenotype.

  4. robin Post author

    My main issue there is what data to add. Most of the things I came up with were things that could be easily calculated from the genotype by the NN anyway (average length, etc.), although I suppose things like density could help. The problem then becomes that anything I add becomes much easier for it to select for, so I could be introducing a bias towards it only recognising the user’s preferences based on the feature-extracted data, rather than deriving its own features. On the other hand, deriving its own features is hard work for a NN (and in the case of one with one hidden layer, it may well be impossible for some things, although I don’t think people are picky enough to match a hard function).
