A Bayesian Detour 6

We're getting closer to understanding this WTF result.

We can do some mathematical reshuffling. Each of us individually had only our own guess, and that guess was some mixture of knowledge and falsity, of information and misinformation, and because it was all inside our own heads, we could not personally suss out the difference. "I don't know anything about butchered meat."

But from Galton's perspective, that is not actually the case. Anyone with actual access to the truth can dig deeper into our numbers and interpret them as:

Guess = Truth + Error

Every guess can be written in this form. Individually, we are not able to do this. We can't tease through the storm of neurons in our head to separate the wheat from the chaff, and so many of us are very quick to assume it's all chaff when it's regarding a topic we do not well understand. This is, quite often, extremely sensible. Our ignorance as non-cattle farmers is obviously very great. (Really, I'm only good at, like, three things on this planet. I just happen to be very good at those three things.) From our personal perspective, we don't feel like we have anything to contribute.

But from Galton's perspective? Obviously we did have something to contribute. From the perspective of the statistician, after the fact, it looks very much like every single person in the crowd had some nugget of truth inside their heads. Sure, from their perspective, this information was tainted with errors. But it still existed. The nugget of truth was right there in their heads. And in order to extract that pure gold nugget of real information, all Galton had to do was average our guesses together so that our individual errors canceled each other out.
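To spell that out in the same shorthand as above: if each guess is Truth + Error, then averaging across the whole crowd gives

Average of Guesses = Truth + Average of Errors

and as long as our errors don't all lean in the same direction, that second term shrinks toward zero as the crowd gets bigger. The truth is what's left over.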

I know that's not the way it looks from the inside. Introspection does not provide us with the truth. We are truly ignorant about practically everything. But we are not infinitely ignorant. And that is the answer that resolves the WTF.

A lot of economic models have agents who receive private signals that look something like:

S_{i} = X + \epsilon_{i}

The S there is the private signal. It's all that the individual agent has access to. But the signal is made up of the truth value X plus some error term. The expected value of the error is zero, and each individual i's error is uncorrelated with anyone else's error. My initial intellectual reaction to this sort of setup for a private signal was, to be perfectly honest, "What the fuck is this bullshit?"
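To see why that setup isn't bullshit, here's a tiny simulation -- just a sketch with numbers I made up, not anything from a real model. Give a crowd of agents private signals built exactly this way, truth plus zero-mean noise, and compare any one signal to the average:

import random

TRUE_WEIGHT = 1198              # the X: pretend this is the ox's true dressed weight
CROWD_SIZE = 800                # made-up crowd size
NOISE = 100                     # standard deviation of each individual's error

# Each private signal S_i = X + epsilon_i, with epsilon_i zero-mean and independent.
signals = [TRUE_WEIGHT + random.gauss(0, NOISE) for _ in range(CROWD_SIZE)]

print("one individual's signal:", round(signals[0]))
print("crowd average:          ", round(sum(signals) / len(signals)))

Run it a few times. Any single signal bounces around by a hundred pounds or more; the average barely budges from the truth. That's the whole trick Galton was exploiting, and the whole reason the private-signal setup isn't as dumb as it first looked to me.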

But Galton, man. The ox-guessing competition.

Grad school in modern econ (in my personal experience) does not get into any depth about the "philosophical" reasons for why we do things. It's all math, all the time. Learn the models, solve the models, try to write papers and get a job. That's it. Which to my mind is unfortunate. I had to read things like Surowiecki's book on my own to try to puzzle out the motivation for this sort of stuff.

Because I'm standing in the perspective of that agent here. The agent gets some random signal from inside their own head. "I don't know anything about butchered meat." Our personal perspective is not Galton's perspective, the one from which each person can be seen as carrying a real nugget of truth inside their head. We have to take a broader perspective for that.

The Miracle of Aggregation does not always work well, of course.

But I'm not trying to make a case for it always working well. I'm trying to point out how amazing it is when it works at all. It works, when it does, because our ignorance -- as vast and humbling as that ignorance can be -- is quite often still mixed in with some nugget of truth, even if we can't personally recognize or appreciate that nugget ourselves. It's something we possess, but cannot see without the assistance of others.

XXXX

SHIFTING GEARS

I'm not done yet. I have one more large idea to express, and I need one more crazy thought experiment to get it across.

Here it is.

Imagine one person creates an artificial world, something like a computer RPG, but not actually a game with a game interface. It's a simulation that runs on its own. The world has its own little game physics, with its little game people going about their lives, and maybe some dark forests with monsters, a few hidden treasures. It's a game world that exists for its own sake.

And you are invited not to play this game (because it's not a game) but rather to create a character to explore this world. You are to program an agent who walks around and tries to figure out the rules of the world. Your job is to program a scientist who explores the world to learn more about it.

The problem: you know almost nothing about what the world will be when you start.

Sure, you're told that your agent can walk north, south, east, west; pick up swords; kill monsters; write down notes; stay at the inn. Whatever. You have some basic guidelines for your character's actions, and your task is to create a little scientist to figure out how the rules of this world actually work.

This artificial scientist is NOT a probe. It will not be returning with reams of data for you to personally comb through in order to make determinations on your own. This agent will draw their own conclusions based on the programming you give them. You don't know how the world works. You can't set up your agent perfectly. Dude might die in the first "day" of simulation if the world is programmed to be ridiculously harsh and unforgiving.

You need to program in some sense of curiosity, and a willingness to learn. But more than that, you need your agent to begin their journey with some basic idea of how best to navigate the world, and then, hopefully, they will learn more and more about the best way to survive as they explore and learn. Your agent will start with some prior understanding, as provided by you, and then, hopefully, improve and update on that understanding as they go on their adventures.
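If you want the barest sketch of what "start with a prior, update as you go" could look like, here's a toy version. Everything in it -- the dark forest, the attack probability, the Beta prior -- is my own invented example, not anything from an actual simulation:

# One tiny belief the agent tracks: how dangerous is the dark forest?
# Beta(1, 1) prior: one imaginary attack, one imaginary safe trip, i.e. total agnosticism.
attacks, safe_trips = 1.0, 1.0

# Invented outcomes for four expeditions: was the agent attacked?
for attacked in [False, False, True, False]:
    if attacked:
        attacks += 1
    else:
        safe_trips += 1
    estimate = attacks / (attacks + safe_trips)   # posterior mean of the attack probability
    print(f"estimated danger after this trip: {estimate:.2f}")

The point isn't the specific numbers. The point is the shape of the thing: the agent starts with a belief you handed it, and every expedition nudges that belief, so by the time it has seen a lot of the world, your initial guess matters less and less.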
