A Bayesian Detour 8

After this process was sufficiently well developed, you might look at your newly evolved scientist. It's quite possible you wouldn't understand the purpose of practically any of the new code. But what this agent would have developed is a set of newer, better priors for starting out in this simulated game-style world. They would have evolved with priors that tended to work in their previous situation.

This should seem very familiar.

Ah, but the problem with these evolved priors is that when the little simulated scientist is taken out of their first simulated world and put into version 3.4, they are likely to die very quickly. They would have been taken out of the environment in which they had evolved and dropped into a new world they were not entirely prepared to deal with. Like an ape on the savannah taken to the big city. The choices this ape makes would not necessarily be the best choices for its new environment. It might have trouble understanding the environmental pressures that its own existence puts on its world. The climate is big, and the ape's brain is small.

Evolved priors are biased to the environment in which they evolved.

Another technique is for you, the engineer trained in mathematics, to develop a more general method by which your agent can learn about the world when taking their first steps in any possible game environment. And in order to do this, you need both a deep and instinctive understanding of the difference between the territory and the map that tries to depict that territory, and an understanding of the mathematical necessity of having the agent's beliefs update as efficiently as possible regardless of the situation they're embedded in. You want them to figure out the physics of their world -- and the geology, and the economics, and the sociology -- regardless of what their world is.

You recognize that the world, whichever one the agent is dropped into, exists on some objective level. You don't know the rules, but you know that there are rules. And you want some method for efficiently learning what those rules are. It is, of course, impossible for you to be successful for literally every world your agent might find itself in. Nevertheless. You want to create a "first step out of the gate" that will work best in as general a way as possible, and in as efficient a way as possible, for as many types of worlds as you can manage.

You want to give this agent the best universal prior they can have.

In the world of pure theory, there are obviously computational problems. But the nice thing about having those computational problems is that they represent a genuine ideal toward which you can aspire. It's not possible for you to program a little scientist that can work perfectly. But it is possible for you to think very carefully about the ideal, and then work toward that ideal. And this is, inherently, a mathematical question.
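For the mathematically curious, the usual formalization of this uncomputable ideal (not something I've named so far, so take it as my gloss) is Solomonoff's universal prior: weight every program that could be generating your observations by its length, so that simpler explanations start out more probable. A rough sketch, where U is a universal prefix machine and each program is penalized by its length in bits:

```latex
% Sketch of Solomonoff's universal prior, the usual formalization of the
% uncomputable ideal described above. The sum runs over every program p whose
% output on the universal prefix machine U begins with the observed string x;
% shorter programs (simpler explanations) get exponentially more weight.
\[
  M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)}
\]
```

That sum over all possible programs is exactly where the computational problems live: you can never actually run it, but you can treat it as the target you are approximating.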

How do you approximate the ideal as closely as possible?

Well, you're going to use the most efficient theorem for updating information whenever it's available, and when it's not, you're going to do your best to approximate the efficiency of that ideal theorem with hard work and a lot of mathing. You've got a little scientist you're trying to program. You want this scientist to understand the world. And you want to give this scientist the tools that it might need.
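The "most efficient theorem" here is, of course, Bayes' theorem. Here is a minimal sketch of what that update looks like in the simplest possible case, a little scientist revising its beliefs about a coin's bias one flip at a time; the grid of hypotheses, the uniform starting prior, and the particular flips are illustrative assumptions, not anything prescribed above.

```python
# Minimal sketch of a Bayesian update over a discrete grid of hypotheses.
# The hypotheses (possible coin biases), the uniform starting prior, and the
# observed flips are illustrative assumptions.

def bayes_update(prior, likelihoods):
    """Posterior is proportional to prior times likelihood, renormalized."""
    unnormalized = [p * l for p, l in zip(prior, likelihoods)]
    total = sum(unnormalized)
    return [u / total for u in unnormalized]

# Hypotheses: the coin's probability of coming up heads is one of these values.
hypotheses = [i / 10 for i in range(11)]              # 0.0, 0.1, ..., 1.0
posterior = [1 / len(hypotheses)] * len(hypotheses)   # uniform "no opinion" prior

# Observe a sequence of flips and update after each one.
for flip in "HHTH":
    likelihoods = [h if flip == "H" else 1 - h for h in hypotheses]
    posterior = bayes_update(posterior, likelihoods)

for h, p in zip(hypotheses, posterior):
    print(f"P(bias = {h:.1f} | data) = {p:.3f}")
```

Each flip shifts probability mass toward the hypotheses that predicted it; that reshuffling, repeated indefinitely, is the whole of the update.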

I have said before, more than once, that there are many tools available and we should choose the tool that best suits the problem at hand. Sometimes that's one thing, and sometimes another.

But the existence of multiple tools does not change the inherently mathematical idea of efficient updating.

Obviously, I'm not just talking about programming a little scientist into a game simulation in order to see how well it does. (Although I genuinely think that might be a fun game.) I'm talking about programming ourselves. This is not an evolutionary problem. At core, it's an engineering problem: any evolutionary process will approximate an optimization algorithm only for its current environment, without being able to handle changes well. We need to do better than that.

But the evolutionary example does answer the deeper question.

Where do our own human priors come from? It's pretty obvious, now. They come from our instincts, from our experiences, from our gut feeling, from our common sense, from our careful deliberations. Our priors come from the morass inside of us. We know things that we don't even know we know, and that knowledge comes out when we have proper incentive to think about problems clearly.

For many, many problems in this world, each of us has some genuine kernel of knowledge that we don't even know we have. Our guess is wrong? Our prior is fucked up? Well, sure it is. We have all had different experiences. Our minds are anchored in different ways. Some of us guess too high, and others guess too low. But there is still information, genuine knowledge, inside our skulls. Even the people who refuse to recognize it as such still have that information inside of them. It's mixed with error, sure, and we very often cannot, by ourselves, separate the error from the knowledge. But that doesn't mean the information does not exist.

Priors do not have to be "arbitrary", even when we feel like we have no opinion or knowledge or interest in the matter at hand.

I could've written that sentence 6000 words ago, but I'm not sure you would have believed it. I had not yet provided the evidence for it. But the evidence is right there. Sometimes we are forced to make a guess about a topic we're ignorant of, and it feels like that guess is nothing but ignorance, with literally no information behind it, but that is quite demonstrably untrue. It is shown to be untrue by every successful example of the Miracle of Aggregation.
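If you want to see that Miracle in miniature, here is a toy simulation; the true value, the noise scale, and the number of guessers are made-up numbers chosen purely for illustration. Each guesser is individually far off, some high and some low, but the independent errors cancel when you average.

```python
# Toy simulation of the Miracle of Aggregation: individually noisy guesses,
# some too high and some too low, average out to something near the truth.
# The true value, noise scale, and crowd size are illustrative assumptions.
import random

random.seed(0)
true_value = 100.0
num_people = 1000

# Each person's guess = the truth plus their own independent, idiosyncratic error.
guesses = [true_value + random.gauss(0, 30) for _ in range(num_people)]

crowd_estimate = sum(guesses) / len(guesses)
avg_individual_error = sum(abs(g - true_value) for g in guesses) / len(guesses)

print(f"average individual error: {avg_individual_error:.1f}")
print(f"crowd error: {abs(crowd_estimate - true_value):.1f}")
```

The typical individual is off by a couple dozen; the crowd's error shrinks roughly like the noise divided by the square root of the crowd size, which for a thousand guessers is about one.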

Now, obviously our personal, unconscious, internally cultivated, evolved priors are often bollocks. The Wisdom of Crowds does not always work. We can be biased collectively to make large group mistakes. Happens all the time.

But this is exactly why we need a strong theory of efficient updating based on new information, so that we can better recognize the times when humans suffer from severe cognitive bias. Many famous psychologists work on this very problem; THINKING, FAST AND SLOW by Daniel Kahneman is a fantastic book about it. By learning to recognize those biases, especially the cases where our errors are all correlated with each other and we become much more likely to make collective mistakes, we can develop better instincts about how to rely on our instincts.
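To see what that collective failure looks like in the same toy setup, add a shared bias that pulls every guess in the same direction; the +25 is, once again, just an illustrative number standing in for a correlated cognitive bias. The independent noise still averages away. The shared error does not.

```python
# Same toy crowd as before, but now every guess shares a common bias.
# The +25 shared bias is an illustrative stand-in for a correlated cognitive
# bias; only the independent noise cancels when the guesses are averaged.
import random

random.seed(0)
true_value = 100.0
num_people = 1000
shared_bias = 25.0   # everyone is pulled in the same direction

guesses = [true_value + shared_bias + random.gauss(0, 30) for _ in range(num_people)]
crowd_estimate = sum(guesses) / len(guesses)

# The crowd confidently converges on the wrong answer.
print(f"crowd estimate: {crowd_estimate:.1f} (true value: {true_value})")
```

No amount of aggregation removes the shared component; only better updating does.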
