Thursday, April 13, 2006

being bayesian about DNA: encoding hyperparameters?

During one's lifetime, one must learn to accomplish many different tasks. How different are all of these learning problems? As of 2006, researchers have broken the field of artificial intelligence into subfields that study particular learning problems such as vision, path planning, and manipulation. These are all tasks at which humans excel without any hardship. Should researchers be studying these problems independently? One can imagine that the human intellect consists of many modules responsible for learning how to do all of the magnificent things that we do. One can then be Bayesian about this learning architecture and relate these learning modules hierarchically via some type of prior. Perhaps researchers should be studying Machine Learning architectures that allow a system to rapidly learn to solve novel problems once it has solved other (similar?) problems.
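
To make this concrete, here is a minimal sketch (the names, numbers, and Gaussian setup are my own illustration, not anything established): each 'task' is just estimating a scalar, every task's parameter is drawn from a shared prior, and the hyperparameters of that prior, fit on old tasks, let a brand-new task be learned from a single observation.

```python
# Toy hierarchical model: task parameters theta_t ~ N(mu, tau^2),
# observations y ~ N(theta_t, noise^2). (mu, tau) are the shared
# hyperparameters. All values here are illustrative.
import numpy as np

rng = np.random.default_rng(0)

mu_true, tau_true, noise = 2.0, 0.5, 1.0   # nature's hyperparameters
n_tasks, n_obs = 20, 5

# Each old task draws its parameter from the shared prior, then noisy data.
theta = rng.normal(mu_true, tau_true, n_tasks)
data = rng.normal(theta[:, None], noise, (n_tasks, n_obs))

# Empirical-Bayes style: estimate the hyperparameters from the old tasks...
task_means = data.mean(axis=1)
mu_hat = task_means.mean()
tau2_hat = max(task_means.var() - noise**2 / n_obs, 1e-6)

# ...then a *new* task seen only once is learned by shrinking its single
# observation toward the shared prior, instead of trusting it alone.
theta_new = rng.normal(mu_true, tau_true)
y_new = rng.normal(theta_new, noise)
w = tau2_hat / (tau2_hat + noise**2)       # posterior weight on the data
theta_new_hat = w * y_new + (1 - w) * mu_hat
print(f"one-shot estimate {theta_new_hat:.2f} vs truth {theta_new:.2f}")
```

The point of the sketch is the shrinkage weight w: the more the old tasks reveal about the shared prior, the less data a new task needs.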

Throughout one's life, the learning modules will be at work and over time reach some 'state.' (One can think of this 'state' as an assignment of values to some nodes in a Bayesian Hierarchical Model.) However, this state is a function of one's experiences and isn't anything that can be passed on from one generation to another. We all know that one cannot pass down what one has learned via reproduction. Then what are we passing down from one generation to the next?

The reason the state of a human's brain cannot be passed down is that it simply won't compress into anything small enough to fit inside a cell. However, the parameters of the prior over all of the learning problems that a person solves throughout their life form a significantly smaller quantity, one that can be compressed down to the level of a cell. One can view DNA as a capsule that contains these hyperparameters. Once passed down from one generation to the next, these hyperparameters would shape what the new brain is likely to learn; the state itself, however, will have to be filled in again from experience in the real world.
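
A toy rendering of this asymmetry, with purely invented numbers: the inheritable 'genome' below is two floats (the parameters of a prior), while the 'brain state' grown from it is a million numbers that must be relearned from experience in every generation.

```python
# DNA-as-hyperparameter-capsule, as a toy. Only the prior parameters are
# small enough to "fit in the cell"; the state is regrown each lifetime.
import numpy as np

rng = np.random.default_rng(1)

def grow_brain(genome, experiences):
    # The newborn state is a draw from the inherited prior...
    state = rng.normal(genome["mean"], genome["std"], size=1_000_000)
    # ...and is then reshaped by a lifetime of experience (crudely, noise).
    for _ in range(experiences):
        state += 0.01 * rng.standard_normal(state.size)
    return state

genome = {"mean": 0.0, "std": 0.1}      # what reproduction passes down
parent_state = grow_brain(genome, 50)   # what it cannot pass down
child_state = grow_brain(genome, 50)    # same prior, different experiences

print(f"genome: {len(genome)} numbers; brain state: {parent_state.size} numbers")
```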

Since evolution is governed by a high degree of stochasticity, one can view nature as performing a gradient-free search through the space of all hyperparameters. How does nature evaluate the performance of a given hyperparameter setting? Each human (an instantiation of those hyperparameters) works up to a 'state,' and his/her survival and reproduction contribute to the score of that hyperparameter setting.
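
A minimal sketch of that search, with everything invented for illustration: each genome is a single hyperparameter (the std of the prior a 'brain' is initialized from), a lifetime is a short learning run, fitness is how well the learned state performs, and selection plus mutation do the searching, with no gradients in hyperparameter space.

```python
# Gradient-free (evolutionary) search over one hyperparameter. The task,
# learner, and fitness function are all toy stand-ins.
import numpy as np

rng = np.random.default_rng(2)
target = rng.standard_normal(10)           # the "task" to be learned

def lifetime_fitness(prior_std, steps=5, lr=0.05):
    # Instantiate a "brain" from the inherited prior, then learn briefly
    # from experience; the hyperparameter only sets the starting point.
    w = rng.normal(0.0, prior_std, target.size)
    for _ in range(steps):
        w -= lr * 2 * (w - target)         # gradient step on squared error
    return -np.sum((w - target) ** 2)      # higher is better (and noisy,
                                           # like a real lifetime)

population = list(rng.uniform(0.01, 5.0, 20))   # initial hyperparameter pool
for generation in range(30):
    scored = sorted(population, key=lifetime_fitness, reverse=True)
    survivors = scored[:10]
    # Survivors "reproduce" with mutation; no gradients in hyperparameter space.
    children = [max(s + 0.3 * rng.standard_normal(), 1e-3) for s in survivors]
    population = survivors + children

print("evolved prior std:", max(population, key=lifetime_fitness))
```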

3 comments:

  1. Anonymous, 10:58 PM

    Either you've taken an introductory optimization course (in which case you bear a striking resemblance to the guy with the pony tail in the bar in _Good Will Hunting_; perhaps you should at least give your prof/book some credit now that you're regurgitating her/it word for word (without anyone asking, I might add)) or you really need to.

    Additionally, if you're interested in these topics, you should start reading some of the literature which addresses most of this (before blogging like an expert). As learning organisms learn new things that are useful for long periods of time, there is increasing selective pressure for those things to become instinctive. Yes, the genes have an impact on the structures that end up driving the learning process, but the learning process itself also has an impact on the genes that survive. It's nature via nurture, not versus.

    Human beings have more instincts than other organisms, not fewer. It's a mistake to focus too much on the learning process. It's far more interesting to focus on things that are inherently human -- structures that were once learned but now have become highly heritable. In fact, the learning structures in the brain have increased the complexity of genetically inherited traits, not decreased them. If there are information constraints in the nucleus of a cell, we're hardly pressing against them. Natural selection is constantly churning learning into instinct.

    Anyway, the tone of your post doesn't seem like you're interested in this at all. You're just interested in how knowing it makes other people feel about you. Nice job, pony tail boy.

  2. Whoa. Relax.

    I'm interested in Machine Learning (particularly Bayesian Hierarchical Models) and not in evolution; thus I always focus on the learning process.

    What is an introductory optimization course anyways? I'm no evolution expert (and not claiming to be), but I don't regurgitate anything any prof said (unless I specifically say something like "___ said ___ ").

    If you don't like what I have to say, perhaps you shouldn't be reading what I have to say. But I say what I want on my blog. End of story.

  3. Anonymous, 4:14 PM

    I find flaws with this person's arguments... I will post my response later.
