Take Silly Examples Seriously

The previous post discussed Gun to the Head problems. But whenever silly hypotheticals come up, there are always (always…) people who try to question the nature of the example.

A psychopath walks up to you with a cattle gun, friendo, and asks you to call the flip of a coin. Get it wrong and you die. So what do you do? You might reason that if a person is crazy enough to make your life depend on such a thing, then why should you trust that you’ll survive if you get the answer right? If this guy is crazy in one way, he might be crazy in other related ways. Maybe it’d be smarter to try to jump him and wrestle the cattle gun away, or to distract him by saying the Goodyear blimp is floating behind him, or to keep questioning his philosophical motivations rather than answer his challenge, or even to question whether you’re dreaming, because that’s more likely than the strange movie scenario that’s just been described.

These kinds of objections always (always…) come up.

All of these tangents are excuses to avoid thinking about the underlying structure of the situation. The silly hypothetical? It’s just a skin, a superficial outer surface. To focus on the skin is to focus on the superficial while avoiding that deeper structure. I don’t know what’s important to you. But presumably there is something you consider essential to get right.

The point of silly examples is not specifically to discuss weird shit for its own sake. The point is to develop the rules of plausibility, with emphasis on the fact that we’re learning these rules because we actually give a damn about something real. I don’t know what that is, from your perspective, and even if I did, it wouldn’t necessarily match what’s important to the other non-you people who might read this. So it’s going to be general policy here that I’m not going to answer the nitpicky objections that always (always…) come up in these sorts of discussions. I use hypotheticals that try to be interesting in an evocative way, but if you don’t like them, you can substitute an analogue that’s relevant to you. I want to illuminate the rules of plausibility, so that we can develop and apply those rules to future questions that are genuinely important. The actual skin of the example isn’t relevant.

Take the silly hypothetical seriously. Which means: take the structure of the question that’s being asked seriously.

If you don’t like the particular form my ridiculous examples take, then build your own inside your own head, and translate the example into something that has more relevance for you. But at the very least, engage the structure of the problem as if the result were important. That underlying structure should “feel” real and important, so that we can recognize not just what the rules of plausibility are, but why they are the way they are, and why they matter so much.


I don’t know what’s important to you.

It’s hard to give examples of the importance of the rules of plausible reasoning without subjecting yourself to empty criticism, or even ridicule.

If I use examples that I personally care about, then those examples are going to be steeped in years of economics training. So what do I do then? Back up and teach three semesters of economic theory in order to get other people caught up on the problem? And only then start to discuss the rules of plausible reasoning, once we’re all agreed that the examples I’m using relate to genuinely important economic problems that we as a society face?

What if my training has led me to conclude something that is counterintuitive, contrary to what most non-economists would think? Are we going to be able to discuss plausible reasoning at all, or will the discussion get mired in disputes over controversial points of economic theory, rather than in the broader topic of logic and probability and statistics? Shouldn’t we discuss the logic of statistical belief first, and only after we’ve got a handle on that start discussing important issues of policy?

I don’t know how to thread that needle. I don’t know how to talk about what’s important to me without knowing whether it’s also important to you. So here, I’m just going to stay very general, very light on detail, and go back to the basic idea: There are problems in this universe where it is crucially important to get the right answer.

Is that true for you? Are there problems in society — or even in your own life — that are legitimately so important that it’s actually worth investigating how to think more carefully about solving them? If that’s not true for you, then… okay. That’s fine. That’s cool. I guess I don’t know why you’re reading this? I don’t really see the reason for you to study probability or stats if it’s not for any specific purpose. But still, glad to have you here! Thanks for reading. But this discussion is geared more toward those people who agree with me that there are genuine problems in our world whose outcomes we are not entirely certain of, but whose answers we nevertheless desperately want to get right instead of wrong. This is for you! Let’s learn how to handle the big masses of data that the modern world throws at us, let’s learn some probability, some statistics, and let’s be part of the bigger conversation that’s going on in the world.

All of this means, fundamentally, taking the rules of plausibility seriously. But again, I don’t know what is important to you. I can’t use the examples that are most relevant to your life and your concerns, because I don’t know them; and even if I did, what’s important to you wouldn’t necessarily be important to the other non-you people who might possibly read this. So my solution? I’m going to use silly examples.

I’m going to use strong, evocative, silly Gun to the Head examples. Sometimes literally. For instance, we can go back to the conversation I discussed in my opening post, with the person who talked about his lack of interest in applying probabilistic reasoning to problems to which he was personally indifferent. This is a direct quote from that discussion:

For example, why couldn’t it be that there were three uncertain propositions A, B, and C of which I had no reason to find A more or less plausible than its negation ~A, nor than B, nor than C, though A, B, and C are exclusive and mutually exhaustive? Why couldn’t I have total ignorance about a situation?

This is an easy thing to say when there is nothing at stake. But let’s get silly, in order to get serious.

There is a revolver with three chambers. Proposition A is that a bullet is in the first chamber, Proposition B is that a bullet is in the second chamber, and Proposition C is that a bullet is in the third chamber. The propositions are mutually exclusive: there is only one bullet. The propositions are exhaustive: there definitely is one bullet in one of the chambers.

We can already see that we are not in a state of “total ignorance”, despite what was so hastily claimed. There are three chambers. That’s information. The bullet is definitely in one of the chambers. That’s information. The bullet can’t be in two chambers at the same time. That’s information. All of this was given as part of the setup to this problem, after which the state of knowledge was then inexplicably described as “total ignorance”. No. No no no no no. Total ignorance would mean not knowing there is a gun, not knowing there is a bullet, not knowing we were in danger. Total ignorance means no fear. You can’t be scared of what you don’t know exists. That’s not this situation. Not even close. We have very relevant information here.

If we had “no reason to find A more or less plausible than its negation ~A”, then we would be indifferent between facing a single pull of the trigger right now with the barrel pointed against our forehead, and advancing the cylinder exactly one chamber and then firing twice. That is literally what it means to have no reason to find A more or less plausible than ~A. Proposition ~A is that the bullet is not in the first chamber, which necessarily means it is in either the second or the third chamber.
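If you’d rather see that indifference claim collapse in numbers than in words, here’s a quick Monte Carlo sketch. It’s my own illustration, not anything from the original discussion: place one bullet uniformly at random among three chambers, and compare the death rates of the two options.

```python
# Illustrative sketch (mine, not from the quoted discussion):
# one bullet uniformly placed among three chambers.
import random

TRIALS = 100_000
deaths_single_pull = 0  # option 1: one pull on chamber 1 (you die iff A is true)
deaths_double_pull = 0  # option 2: skip chamber 1, fire on chambers 2 and 3 (you die iff ~A is true)

for _ in range(TRIALS):
    bullet = random.randint(1, 3)  # chamber holding the bullet, chosen uniformly
    if bullet == 1:
        deaths_single_pull += 1    # A: the bullet was in the first chamber
    else:
        deaths_double_pull += 1    # ~A: the bullet was in the second or third chamber

print(f"P(die | one pull on chamber 1):     {deaths_single_pull / TRIALS:.3f}")  # ~1/3
print(f"P(die | two pulls on chambers 2-3): {deaths_double_pull / TRIALS:.3f}")  # ~2/3
```

Genuine indifference between A and ~A would require those two numbers to come out equal. They don’t, and nobody with a barrel against their forehead would bet as if they did.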

What’s particularly frustrating for me about these kinds of discussions is how OBVIOUS this is.

The person who wrote the above does not actually agree with the words he wrote. He does not think those words are true, and would not apply this sort of reasoning to any genuine problem that he found gun-to-the-head important. He had just never thought carefully about this kind of problem before. The strange lapse in reasoning happened only because nothing was at stake. But how do I communicate that effectively? I don’t actually know what real-world problems other people find important. I do know, however, that if I turned into an Evil Probability Maestro and showed up in the dark of night at his house with a six-chambered revolver (I’m evil, but not evil enough to stick with only three chambers), there is no chance in hell that he would claim he had “no reason to find A more or less plausible than its negation ~A”, when proposition A is that the bullet is in the current chamber. Proposition A means facing only one pull of the trigger. Proposition ~A means facing five pulls of the trigger. The decision makes itself, when you know what the stakes are.
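The same arithmetic works at any cylinder size, and you don’t even need the simulation. Here’s an exact version; the function is mine, purely for illustration:

```python
# Illustrative sketch (not from the original posts): exact death odds
# for one bullet placed uniformly among n chambers.
from fractions import Fraction

def death_probabilities(n_chambers: int) -> tuple[Fraction, Fraction]:
    """Returns (P(die facing only the current chamber),
                P(die facing all the other chambers))."""
    p_a = Fraction(1, n_chambers)  # A: the bullet is in the current chamber
    return p_a, 1 - p_a            # ~A: the bullet is in one of the other chambers

print(death_probabilities(3))  # (Fraction(1, 3), Fraction(2, 3)): the revolver above
print(death_probabilities(6))  # (Fraction(1, 6), Fraction(5, 6)): the Maestro's house call
```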

I don’t know what’s important to you.

But I do know that you could personally have the same difficulties with kiddie stuff like the problem above if you don’t raise the stakes, inside your own mind, in order to consider shit that actually matters. Engage problems that get their claws into your fleshy scalp. If you can’t manage that, then you’re going to be stuck saying, and worse, maybe even believing, dumb stuff like that quote above. Be rigorous now. Study this now. Get good at plausible reasoning before the stakes are high. Practice now, not later. There are important problems to solve, remember? We all agree with that, right? Then let’s build the tools together to work on those problems, and better still, to have a conversation about them. We want to communicate and discuss these problems effectively, so that we can work on them together.

I’m writing this precisely because I want to get better myself. Reviewing it sharpens it for me, too. You can never practice this too much if you want to get good at it.

So let’s keep practicing together.


There are problems in this universe where it is crucially important to get the right answer.

I would not, originally, have considered starting (or restarting) a blog with this as the slogan. This sounds like a platitude, a truism so banal and boring and uncontroversial that it’s not worth any explicit statement, just like saying “basic logic works” does not seem to merit being said out loud.

And yet.

I was having an online conversation once about probability theory. The discussion was with a person who claimed to have thought heavily about probability, not just the mathematics but the underlying philosophical issues. I was trying to explain the importance of the qualitative “rules of plausibility”, although I wouldn’t have phrased it that way at the time. And the person I was talking with made the comment — paraphrasing here — that he didn’t see the importance of these qualitative rules of plausibility. After all, he reasoned, what if you apply these rules of plausibility to a situation that doesn’t matter? Why would they matter then?

After some bewildered consideration of that comment, I tried to write a reasoned response. I acknowledged that there would be no point expending effort in ranking relative plausibilities if the outcome of the ranking were not important. However, I then suggested the somewhat different task of applying the rules of probability to a situation that he actually found important, a situation where he did not know the absolute truth of the matter, but had to make a decision under some level of uncertainty. (The world is big. We are small. There are many things we do not know.) I suggested that rigorous thinking about probabilities was desirable in exactly those cases where it would be critical to him, even essential, that he get the answer right, even while ultimately ignorant of what would happen.

This same person, who had previously claimed deep consideration of these issues, declined to respond to this point, or to any other point I made. Conversation over. I’m still not sure what to make of that. But that conversation, among others, is the source of my current emphasis. I understand now, finally, that this is a point that must actually be made explicitly. And emphasized. Repeatedly, if necessary.

There are problems in the world that are actually important. It’s worth thinking about how to improve our chances to solve those problems.

If anybody has an objection to that, well then, I don’t really know what to say except… I disagree. Likewise, if I’m discussing the importance of the various “rules of plausible reasoning”, I would like to discuss the importance of these rules within the context of an important problem, one where we genuinely care about getting the answer right, especially when a mistake would put human suffering on the line.

The world is big. We are small. We cannot be absolutely certain that we are doing the right thing, and DESPITE ALL THAT, there are still problems that are so overwhelmingly important that we really, really, really, really want to get the answer right. This is true in the political world. It’s true in the sciences, as well. The inventors of the H-bomb were certainly smart enough to create the H-bomb, but (as the saying goes, though I can’t find its source at the moment) they were definitely not smart enough to not make the H-bomb. So how smart were they? Wouldn’t it be a better world if everyone clever enough to make doomsday devices were also rational enough to NOT make them? There are problems in this universe where it is crucially important to get the right answer. I would say the inventors of thermonuclear explosives didn’t manage that. They solved the physics problem, and thereby spectacularly failed to solve the human problem.

And of course, it’s not just decisions on this scale that we need to get right. It’s also true that there are decisions that we need to make in our own personal lives, maybe not the world-shattering decisions of nuclear physicists, but nevertheless crucial for our own and our family’s happiness. It’s important to get those decisions right, too.

So now, I’m stating that as an axiom up front. There is shit that we want to get right. How to improve our chances of getting this stuff right is worth thinking about.
