Saturday, February 28, 2015

One (Very) Wealthy Person Could Reform Campaign Finance Today

It now costs over a billion dollars to make a credible run for President of the United States. How much more than a billion, we don't know, because so much secret money is given to groups that unofficially shill for a particular candidate. Which brings us to my favorite knock-knock joke, which I think came from Stephen Colbert:

"Knock, knock!"
"Who's there?"
"Unlimited campaign contributions from corporations and unions"
"Unlimited campaign contributions from corporations and unions who?"
"I don't have to tell you who."

It's a big problem. When a candidate receives large donations from wealthy individuals, corporations, and unions, it's a form of legalized bribery. Everyone knows this. Furthermore, the vast amounts of money sloshing around campaign war chests make it impossible for a third political party to emerge, because any such party is caught in a chicken-and-egg problem. In order to attract big donations, the party would need to be credible. But in order to be credible, the party needs vast sums of money. So we have two bribed political parties who are incapable of passing meaningful campaign finance reform because they were put into office by the very people who benefit from the current, highly corrupt system. A game theorist would call this an "equilibrium". No change to the system will come about without an abrupt and severe shock to the entire system.

One might assume that an external shock large enough to disrupt the campaign finance/bribery system would have to be so gigantic that only the federal government could accomplish it. Billions upon billions of dollars have imparted a lot of inertia to the current system. Of course, a coordinated effort by millions of ordinary voters and taxpayers could accomplish it, too. But we now live in a society in which these ordinary people are so overwhelmed by economic demands on their time that no such grass-roots movement is likely to ever happen. In fact, I'm quite confident that it will never happen unless there's a massive, systemic economic or political collapse. This isn't out of the question, but it's not something we should hope for.

But there is a way out. And it would take only one (very) wealthy individual to make it happen. Such an individual would have to be a billionaire -- in fact, a multi-billionaire (although once you've made your first billion, the second billion is a lot easier). According to estimates, there are about five hundred American billionaires. A small, well-coordinated group of extremely wealthy non-billionaires could do it, too. But for simplicity, I'll just talk about one (very) wealthy person.

In order to understand how to make this happen, we have to get clear on one simple fact, which is this: It doesn't matter, at all, how much money a political candidate has. What matters is how much money a political candidate has, relative to their opposition. This is because political campaign fund-raising is a classic "arms race" scenario. Just like in a literal arms race, advantages are always relative to the other side. If one side wages war by throwing rocks, and the other side has bows and arrows, then the latter has an unassailable advantage. If both sides have machine guns, then neither side has an advantage. The absolute strength of your weapons doesn't matter at all -- what matters is how your weapons compare to the other side's weapons.

This is where it would be possible for a (very) wealthy individual to disrupt the system. Individual donors, as well as corporations and unions, only give money to political candidates in order to give that candidate a relative advantage over the other side. This is why, when candidates appeal for donations, they emphasize how much money their opponent is raising; and they always portray themselves as falling behind in raising money. Individual donors give money in order to either level the playing field, or give their own candidate a relative advantage.

Now imagine that your favorite candidate is asking you for money. But imagine that you have an evil twin who is guaranteed to give an exactly equal amount to the opposing candidate. So if you're a Republican, your evil twin will match your donations, dollar for dollar, by giving to the Democrats. Would you give money? Of course not! Your money would be wasted because it wouldn't provide a relative advantage to your preferred candidate.

Now imagine that you and your evil twin both have an evil uncle. Your evil uncle will match either of your contributions in the following way. At the end of each day, your evil uncle will estimate how much money you've given to the Republicans, and how much money your twin has given to the Democrats. If there's a difference, then your evil uncle donates the difference to the side that received less money. In this way, your evil uncle guarantees that any contributions you or your twin give are, in effect, impotent. Neither of your contributions ends up making the slightest difference to your preferred candidates. You'd be an absolute fool to continue donating money. You'd keep your money in your pocket.
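The uncle's day-end routine is mechanical enough to sketch in a few lines of code. (This is a toy illustration of mine, assuming the daily totals are known exactly.)

```python
def uncle_equalize(rep_donations, dem_donations):
    """At the end of each day, the evil uncle tops up whichever side
    received less, so both campaigns end the day with equal new money.
    Returns the uncle's total outlay over the campaign."""
    outlay = 0
    for rep, dem in zip(rep_donations, dem_donations):
        # The uncle pays only the daily difference, never the full amounts.
        outlay += abs(rep - dem)
    return outlay

# Example: three days of donations (in dollars) to each side.
print(uncle_equalize([500, 200, 0], [100, 200, 50]))  # 450
```

Notice that over these three days the twins gave $1,050 in total, but the uncle paid out only $450 -- the sum of the daily differences. That's the point developed below: equalizing is far cheaper than matching.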

Let's think about what your uncle would have to do in order to pull this off. First, your uncle would have to be wealthier than you. If your uncle had only ten dollars to his name, you might donate twenty, overwhelm his bank account, and then continue giving more money.

Second, your uncle would have to be convincing. You'd have to really believe his threat. Most likely, you'd test him by giving some significant amount of money to your candidate and then you'd watch to see if he really gave the same amount to the opposing side. If he did carry out his threat, you'd probably seriously reconsider giving any additional money. You might "test" him again, maybe with more money. So your uncle would have to be resolute in his determination.

How much wealth does your evil uncle have to possess in order to prevent you and your twin from giving money to political candidates? At first glance, you might think that if you and your twin were each to give a few thousand dollars to your respective candidates, then he'd need enough money to match both sets of contributions. But that's not the case at all. Even in the worst-case scenario, he'd need only the difference between the two -- remember that your uncle is only trying to equalize the total contributions. A much smaller amount of money is necessary in order to do that.

But that's the worst-case scenario. If your uncle were to do this from the very beginning of the campaign season, he'd probably change your behavior pretty quickly. After you and your twin have "tested" your uncle and determined that he'll carry out his threat, you'll stop wasting your money. Unless you're both stupid, you and your twin will stop giving money to these campaigns very quickly. In the end, your uncle will have to have a lot of money in his bank account to make his threat credible, but he won't end up spending very much of it.

Now suppose that you and your twin are irritated by your uncle's meddling behavior and you want to put an end to it. How do you do that? The short answer is that you can't. Here's why.

The only way to end his meddling is by making it prohibitively expensive for him. The only way to do that is to create a large imbalance in the amounts that you and your twin are donating. So you have to agree that one of your candidates will be given a lot of money until your uncle's bank account runs out. Now, let's assume that somehow, you've actually scraped together enough money to make this possible. But now you face the question of selecting which candidate will get this money, and you can't possibly agree to that, because neither of you will permit the opposing candidate to have an advantage. For instance, if you're the Republican, would you agree that the Democrat should be given so much money that your uncle goes bankrupt? Of course not! And the same holds true if you're the Democrat.

Probably, what you'd actually do, if you were fanatical enough, would be to launder your money into the coffers of your candidate so that your uncle wouldn't find out about it. Let's assume that you could do this legally. It wouldn't matter, because the candidate's campaign has to spend the money in order for the donation to make a difference, and your uncle could easily estimate how much money is being spent. Of course, your uncle would never be able -- even under the best circumstances -- to be 100% accurate in his estimates, but that doesn't matter. He'd probably overestimate sometimes to the benefit of one candidate, and underestimate other times to the benefit of the other. The net effect would be close enough to zero that it wouldn't make a difference.

The fact of the matter is that your evil uncle has you checkmated. The only way to undermine him is to coordinate your efforts; but you can't do that because you're on opposite sides. You'd spend your money on something else. Maybe even something worthwhile.

All we need is for one (very) wealthy person (or a small group of somewhat less wealthy individuals) to be our collective evil uncle. The final element of the plan is that our uncle would also pledge not to match any money obtained by public financing. After a relatively short, but tumultuous time, there would only be public financing left. Or perhaps our uncle could pledge not to match small donations of less than some pre-specified amount. Either way, our American system of legalized bribery would end. Both parties would end up strengthened because they could spend their efforts actually doing strange things such as communicating with voters; elected officials wouldn't have to spend their time on the phone begging for money; and there would be a real chance of a third political party emerging.

Wednesday, February 25, 2015

W.V. Snodgrass: "History and Development of Artificial Minds"

By a method which I am not at liberty to reveal, I have obtained a copy of Professor W.V. Snodgrass's seminal study, "History and Development of Artificial Minds: Conceptual and Technical Issues, Volume I", which will be written mainly in the year 2130, and released to the public in late 2131 (publishing as an institution not existing by that date). It's a fascinating book, not only because the author has not yet been born, but because it is filled with valuable insight into a topic that's frequently misunderstood in such fundamental ways.

After a good deal of soul-searching and some level of experimentation, I've decided to print excerpts of this work. It's not the first work I've obtained from the future, and so the question of whether to release information that I've obtained in this way has come up before. I've performed a few experiments by releasing a little bit of this information and observing the effects. For example, I "predicted" the nomination of Mitt Romney and the outcome of the last US presidential election well in advance, and observed that this prediction (which was the view of only a tiny minority of political "experts" at the time) had no discernible effect on any subsequent events. I've also made many other "predictions" (which aren't really predictions if they're made retrospectively from the standpoint of a writer from the future), which have all come true. But I have never observed any effect from these public pronouncements.

One might reasonably ask why I'd bother to share these insights at all, if they really fall on deaf ears so consistently. To that, I can only say that the urge to shout this information feels much like the urge to gossip. Being in possession of a really scandalous secret creates an almost irresistible desire to tell it to someone; but that desire isn't the desire that someone else should know it. Rather it's the desire to simply say it out loud to someone; and this is a desire that would be satisfied even if you knew that your listener would forget it completely in only a few minutes. Such is the way with gossip. And these communiques from the future comprise the best gossip one could possibly imagine. All of which is to say, "Pay attention or not. I don't really care. Sharing information is not what blogs are about, anyway."

So here it is. I'll jump around quite a bit in the presentation of this material (mostly because Snodgrass tends to get mired in technical detail), but I will try to keep some continuity in the general thread. I should also apologize in advance, because the author's style is extremely pompous and more than a little irritating. I guess academic writing won't improve much between now and 2130.

(From W.V. Snodgrass: History and Development of Artificial Minds: Conceptual and Technical Issues, Volume I)

Early Conceptual Origins
The idea of artificial minds or oracles has been with humanity for thousands of years. Many historians of science place the first modern conception of artificial intelligence with Gottfried Leibniz, who envisioned a perfect system of calculation that would calculate the correct answer to any question, empirical or a priori. Although Leibniz's vision shares some resemblance to some conceptions of artificial intelligence, it is important to note that Leibniz did not envision a machine, but rather a mathematico-logical system that would be deployed by human beings...

...It was not until the early part of the twentieth century that the idea of artificial intelligence started to develop a precise formulation, in the sense that it could lead to the development of working hypotheses about the capabilities and limitations of intelligent machines, as well as a working hypothesis concerning their implementation details. With the development of Kurt Godel's Incompleteness Theorem, the idea was already in place that an upper bound could be placed on the computational complexity of certain tasks. This is fundamental to the idea of artificial intelligence because it gives the lie to the notion that certain classes of computations of which humans are capable may require infinite computational resources. For example, try to place yourself in the mind of an early skeptic of artificial intelligence living in the United States in 1920. Although it sounds terribly quaint today to speak in these terms, it was probably the majority opinion that even playing a finite game such as chess (an example to which we shall return later) could not be simulated by any finite set of computations whatsoever. Human beings' inclination to pass aesthetic judgment on nearly anything -- including the play of a chess master -- encouraged the view that something non-computational must be going on in the minds of competent human chess players. And if the question were to occur to someone of this view, they would have to say that, a fortiori, an artificial computing machine could not produce a winning chess strategy.

Godel's work, unbeknownst to him of course, gave the correct form of a response to such skeptical -- indeed, mystical -- worldviews. The right way to show that a task could be automated is to show that there is an upper bound to the number of primitive calculations necessary for its solution. Godel's discovery of the primitive recursive function (which he called simply "recursive") gave the blueprint for such arguments, because in the course of establishing the incompleteness (which he called "undecidability") of the Russell-Whitehead system of Principia Mathematica it was necessary to prove the existence of these upper bounds. Although writers of the late twentieth century tended to focus on Godel's proof of the diagonalization theorem (a generalization of Cantor's argument, applied to what we now know as "fixed points") as the foundation of certain ideas in artificial intelligence, it would be more accurate to credit his discovery of primitive recursive functions. A good example of this mistaken focus on diagonalization can be found in the late-twentieth-century work by Hofstadter, whose treatment of the topic takes a decidedly mystical and counterproductive turn when he asserts...

Other logico-mathematical discoveries of the early twentieth century also paved the way for what eventually became an influential concept of artificial intelligence. Starting with the intuitionist mathematicians -- especially L.E.J. Brouwer -- and most influentially with Alan Turing, it became common to posit an "ideal mathematician" who reasoned flawlessly according to a certain predetermined set of inference rules. This "ideal mathematician" would discover new mathematical and logical theorems, adding to its store of knowledge monotonically over time. This concept, although somewhat loose initially, suggested a logical semantics that was eventually deployed in early computer database design when the exponentially increasing demands on computer storage became apparent.

Models of the "idealized mathematician" became increasingly rigorous, and as the capabilities of transistor-based computers reached a threshold, it finally became possible to rigorously model these ideas on equipment that was widely available...

(Here I'll skip ahead to a later chapter in which Snodgrass discusses some wrong turns taken by AI researchers in the late twentieth century.)

False Dichotomies
Starting in the fourth quarter of the twentieth century, it became common in popular culture to present nightmare scenarios of artificial intelligence run amok. For a time, it was plausible to guess that these scenarios might trigger a popular backlash against artificial intelligence research. However, the extreme concentrations of wealth that were underway in that time period precluded any such "revolt" from taking hold...

Interestingly, the "experts" of this formative period would attempt to quell any unfounded (and some well-founded) concerns by drawing a pair of distinctions in artificial intelligence research programmes. Both distinctions have long since been recognized as incoherent, as was pointed out by...

The first dichotomy is between "wide" and "narrow" intelligence. As the names imply, this distinction was concerned with the scope of artificial intelligence. "Wide" intelligence was to refer to the capacity of an intelligent being to reason about extremely disparate domains, whereas "narrow" intelligence was to be limited to the solution of specific problem-types within a single well-defined domain. The prototypical example of the "narrow" type was a computer chess program, which, even by the end of the twentieth century, was capable of defeating the most able human opponents, but was incapable of anything else.

Now we recognize that this conception of the "narrow" intelligence is an anthropocentrism of the worst kind. We, that is human beings, divide the world into categories and types; accordingly, an intelligence is of the "narrow" type if it deals with only a single one of those categories, such as the category "chess game". But our set of concepts is arbitrary to a very high degree, as was recognized by analytic philosophers as early as Wittgenstein. This fact makes the distinction between "wide" and "narrow" incoherent. A chess game, for example, may be divided into separate phases, each of which may be considered a different game. We could, if we pleased, consider the opening phase of a game of chess to be its own game. Or we could consider chess and checkers to be the same game, typically played by humans in two phases. Thus, a chess-playing intelligence is "narrow" if we conceptualize chess as occupying a single domain; but it could be "wide" if we reconceptualize chess accordingly. The distinction between "wide" and "narrow" is exactly as arbitrary as our conceptual framework...

Research in the first half of the twenty-first century also cast away the distinction between "hard" and "soft" artificial intelligence, which is our second dichotomy. "Hard" artificial intelligence referred to the research programme of modeling an intelligence after the human mind, by (as was a common term in this period) "back-engineering" the brain. "Soft" artificial intelligence was not concerned with duplicating the workings of the human mind, but was committed only to problem-solving by whatever methods were most available, such as...

Ironically, it was the so-called "deep learning" paradigm that permanently wrecked the "hard" v. "soft" distinction. Recall (see chapter 2: "Pseudo-neuronics") that early research into neural network techniques plateaued when the limits of back-propagation and its variants were reached. In hindsight, those limitations should have been apparent from the beginning, but they were not recognized for three decades. Back-propagation, we recall, was based upon initializing a simulation of interconnected neurons and stochastically readjusting their connection weights by an error-correction mechanism. Whenever the output of the network failed to meet specified criteria, those weights would be adjusted from back to front -- that is, starting with the neurons that were the last ones before the output was generated, and moving incrementally toward the "front" of the network, eventually reaching those neurons which had direct access to input. But just as with a large hierarchical organization of humans, it becomes increasingly difficult to assign blame as one moves through the many levels of the hierarchy. This intuitive idea has an exact mathematical formulation in the many-to-one mapping between output errors and the weight adjustments derived from the partial derivatives that were used...
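(For readers in 2015: the "back to front" adjustment Snodgrass describes is ordinary back-propagation via the chain rule. Here's a minimal sketch of the technique as it exists today, training a tiny network on XOR. The network size, learning rate, and iteration count are my own arbitrary choices, not anything from the book.)

```python
import numpy as np

# A minimal two-layer network trained by back-propagation: errors are
# computed at the output and pushed backward toward the input, layer by layer.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # XOR inputs
y = np.array([[0], [1], [1], [0]], dtype=float)               # XOR targets

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)  # input -> hidden weights
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)  # hidden -> output weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def loss():
    h = sigmoid(X @ W1 + b1)
    return float(np.mean((sigmoid(h @ W2 + b2) - y) ** 2))

before = loss()
for _ in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: compute the output-layer error first ("back")...
    d_out = (out - y) * out * (1 - out)
    # ...then propagate it to the hidden layer ("front").
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient step on the connection weights.
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

print(before, loss())  # the error shrinks as training proceeds
```

The "blame assignment" problem Snodgrass mentions is visible in the `d_h` line: the hidden layer's error is only an indirect function of the output error, and that indirection compounds with every additional layer.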

The answer to the back-propagation problem came from the neural network's inspiration: neuroanatomy. It had long been known that the various regions of the brain could be "reassigned", as it were, to tasks for which those regions were not "designed" by evolution. For example, the vision centers of the brain could be rewired to process sound following a debilitating accident or invasive brain surgery. This being the case, it was recognized by the early twenty-first century that there must be, at work in the human brain, a single learning algorithm that generalizes across highly disparate domains. A detailed study of these learning algorithms soon followed, resulting in breakthroughs which obviated any need of the traditional back-propagation techniques...

When the limits of these so-called "deep learning" strategies became apparent, another breakthrough soon followed. In the early development of computing machines, it was traditional folk wisdom to say that computers were good at tasks for which humans were ill-equipped, such as performing large numbers of calculations; but that conversely, computers were not good at tasks at which humans excelled. These included recognizing handwriting, faces, sounds, speech, and so on. In short, anything for which the input was imprecise and had a high variance. Deep learning changed this by enabling the machines of that era, for the first time, to complete those tasks as well. Limits appeared when one attempted to turn the techniques toward problems with imprecise or "fuzzy" input data that were not easily solved by human beings. Examples of these tasks were easy to come by, once the researchers of that era recognized the possibility. For example, humans are very poor at recognizing causal relationships in highly complex systems with feedback mechanisms. This class of problems certainly fit into the category of problems with imprecise input data, but the "deep learning" methodologies of the time failed here. This became an important research problem by the year 2021, and although certain advances in Bayesian networks were made, those advances were fundamentally unsatisfactory because they did not extend the "deep learning" techniques of the previous decade.

Eventually, it was recognized that those "deep learning" algorithms suffered from yet another anthropocentrism, namely, that they were based on the general learning algorithms implemented in the human mind. As with the distinction between "wide" and "narrow" intelligence, a so-called "general" learning algorithm is in fact only concerned with the specific needs of the animal in which it evolved. Recognition of vague inputs is important to Homo sapiens, but attribution of causation in complex networks is not. No wonder, then, that these deep learning algorithms systematically failed in domains having no evolutionary correlate in human pre-history.

Thus began the systematic research into alternate learning algorithms, which began in earnest in the middle part of the third decade of the twenty-first century...

(I now skip ahead a couple of chapters to a brief discussion reprising the history of computer chess.)

Chess Redux
An illustrative event occurred in 2030, when a group of young researchers began to reexamine the problem of computer chess -- a problem which had long since gone "out of style", since chess had been solved (not in the strict mathematical sense of "solved", but in the more pragmatic sense of being able to construct machines to play chess at a level of competence orders of magnitude greater than any human).

Looking at the chess problem tabula rasa, so to speak, these researchers noticed how similar computer-generated chess moves were to human-generated ones. For example, chess-playing computers always played opening patterns that had long been recognized as "optimal" by human players. They changed strategy qualitatively as the game transitioned between opening, middle, and end-game phases. And so on.

However, there is no a priori reason to believe that humans would have chanced upon the optimal opening strategies, for example. Rather, it seemed much more likely that the human conception of "optimal chess play" was heavily influenced by heuristics that resulted from accidental features of the human mind. Putting the point another way, it was noticed that chess computers played chess just like human beings, only much more so. But why should this be the case? Why shouldn't computers play chess qualitatively -- as opposed to merely quantitatively -- differently?

An explanation for the human-like behavior of chess-playing computers is apparent to anyone. Those computers were designed by human beings, and any so-called "machine learning" that was allowed to take place used as its corpus many thousands of chess games played by human beings. Thus, the strategies available to the computer had been canalized into a narrow subset of the space of all available strategies.

And so it turned out that a tabula rasa approach to computer chess yielded a machine that could not only outplay all human opponents, but would do so in a manner that was utterly inhuman. The moves of the machine seemed random, incoherent, inconsistent. But they inevitably yielded a win for the computer. Adding insult to injury, this new breed of chess computers was able to soundly trounce any of the previous generation of machines -- largely because it had a much wider space of available strategies from which to choose, not being fettered by the anthropocentrism that had limited its predecessors...

(I may copy more excerpts from this remarkable book later.)

Sunday, February 22, 2015

A Puzzle About the Limits of Our Minds

It's a very familiar fact that the moon appears much larger when it's on the horizon. This is often called the "moon illusion". The moon isn't really larger, of course. We can measure the actual size of the moon and its actual distance from the earth, and thereby transcend our own faulty cognitive capabilities. So despite the fact that our brains are configured to make this mistake, we can reason our way to the right answer.
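To make the "measure and transcend" point concrete: using round published figures for the moon's radius and mean distance (numbers I'm supplying here, not anything controversial), its angular size works out the same whether it's on the horizon or overhead:

```python
import math

# Approximate published figures: the moon's radius and its mean
# distance from the earth, both in kilometers.
MOON_RADIUS_KM = 1737.4
MEAN_DISTANCE_KM = 384_400

def angular_diameter_deg(distance_km):
    """Angular diameter of the moon, in degrees, at a given distance."""
    return math.degrees(2 * math.atan(MOON_RADIUS_KM / distance_km))

# The moon's distance doesn't change between moonrise and midnight
# (not appreciably), so neither does its angular size: about half a degree.
print(round(angular_diameter_deg(MEAN_DISTANCE_KM), 2))  # 0.52
```

If anything, the horizon moon is a hair *smaller*, since it sits roughly one earth-radius farther from the observer than the overhead moon.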

There are lots of other cases just like the moon illusion, where we cannot prevent ourselves from misinterpreting the world, but we can still use our rational faculties to recognize that fact. The McGurk effect is a great example. It happens when audio of someone saying the sound "ba" is paired with video of a mouth pronouncing the sound "fa". We simply cannot prevent ourselves from hearing the sound as "fa" because the visual information overrides the auditory information. If you haven't seen this in action, this video from the BBC is worth watching:
Our brains are wired to misperceive the sound when it's paired with the wrong images. In fact, no matter how many times you watch this video, and no matter how well you understand that it's an illusion, you will still hear the sound "fa".

You could easily imagine an intelligent being who has some pretty good powers of rational thinking, but lacks the reasoning power that it takes to recognize the moon illusion or the McGurk effect. Maybe that sort of creature would be pretty smart, but would lack a certain introspective capacity that we have. Perhaps they could do some science, build complicated machinery, and play a decent game of chess, but simply lack the cognitive function that's required for understanding those illusions. I'm not imagining a creature that just hasn't figured them out yet; I'm imagining a creature that fundamentally lacks the power to ever understand them.

Let's consider a little fable in order to make this point more specific. Imagine a planet called "Kerbal", which is inhabited by intelligent beings who like to do science. They've got their own moon, and have started their own version of the Apollo program. The Kerbals, like us, are subject to the moon illusion. But despite their relatively advanced science, they haven't figured out that the moon illusion is an illusion. They think that the moon is actually larger at certain times of the day. And so, reasonably enough, they plan their moon mission accordingly. They make sure to land their ship on the moon in the morning, when the moon is at its maximum size.

When the ship lands on the moon, the Kerbal astronauts are puzzled to discover that they can't detect the changing size of the moon. Shocking! When they return to their planet, a major scientific undertaking is launched to explain why the astronauts couldn't detect the changing size of the moon while they were there. A thousand years go by, and the Kerbals have colonized other planets and advanced their science in many profound ways. But they never understand the moon illusion because their brains are just fundamentally incapable of making that specific leap.

Is this scenario possible? Is it possible for a creature that's approximately as intelligent as a human being to have a sort of "cognitive blind spot" that prevents that creature from ever understanding the true nature of a specific illusion?

If you think the answer is "no", then you're taking a very substantive position and making a lot of difficult assumptions. Presumably you think that there is a set of minimal cognitive capacities that enable an intelligent being to "self-correct" for any misleading or faulty perceptual function whatsoever. Perhaps when a being understands some basic logic, can do a sufficient amount of math, and has sufficiently good senses, there's a method available to them (whether they've actually used it or not) that would allow them to recognize any and all illusions. Let's call this view "Cognitive Universalism" because one's cognitive capacities could universally recognize any illusion whatsoever.

If you believe in cognitive universalism, then you also have to believe that there's a threshold an intelligent being must reach, above which they can recognize all possible perceptual illusions. Let's assume that human beings are above that threshold (an assumption that I personally don't share) and that mice are below it. Where is the threshold? What capacities must one have in order to reach it? Frankly, I think this is an unbelievably difficult question. I also think it's the single most important question about intelligence and technology, and also that it's been missed by philosophers, cognitive scientists, psychologists, brain researchers, and logicians. Perhaps it's been missed by everyone.

Here's why I think it's so important: There are many different kinds of illusions, and we don't always call them "illusions". Sometimes, we call them "common sense". To borrow an example from Bertrand Russell, common sense tells me that my desk is solid, but science tells me that it's actually a vast number of invisible particles violently interacting with each other at tremendous energies. Common sense tells me that the world has three spatial dimensions, but I know it's much more likely that the actual number is quite a bit higher, and that I just can't perceive the others. Common sense tells me that time passes at the same rate for everyone, but Einstein tells me otherwise; and I believe him. There's no important difference between the moon illusion and my perception that my desk is solid. They are both perceptual illusions. The only difference is that it takes a lot more science to recognize some of them.

It is not an exaggeration to say that literally everything we perceive is an illusion. We just happen to be in the habit of using the word "illusion" to refer to some ways our senses mislead us and not others. But there's no principled way to draw a distinction between our common-sense experience (which we know from science is wrong) and perceptual illusions.

This is why cognitive universalism is so important. It is a conjecture about the limits of science. If you believe in cognitive universalism, you believe that for a sufficiently intelligent being (maybe humans, but maybe not) there is no principled limit to what can be understood by science. If you don't believe in cognitive universalism, then you believe that for any intelligence, no matter how powerful, it is possible for it to be utterly confounded by some feature of the world, just as the Kerbals were confounded by the moon illusion.

In order to be optimistic about humanity's prospects for understanding the universe, you have to be a cognitive universalist, and you have to believe that humans have met the necessary threshold of intelligence. Personally, I've never seen any argument for either of these views. In fact, I can't even imagine what such an argument would look like.

Saturday, February 21, 2015

Play "Clash of Clans" Like a Professional Game Theorist

I'll admit it -- I like playing Clash of Clans. It's a very clever game with a lot of variety. I'll also admit that one of the other reasons I enjoy Clash of Clans is that I used to study game theory quite seriously (my PhD dissertation was on game theory, and I've written several academic articles and book chapters on the subject).

When a game theorist uses the word "game", they mean something very specific. Two or more people are playing a "game" (in the technical sense) when they each have a choice of actions, and their choices affect each others' outcomes. This means that if you and I are playing a game, my best choice of action will depend, in part, on what you do -- and vice-versa. So games often involve trying to anticipate what the other person will do, knowing that they are also trying to anticipate what you'll do.

This technical sense of "game" includes actual games like poker or chess, as well as Clash of Clans. It also includes a lot of other situations that we don't ordinarily call "games", such as negotiating the price for something.

One way of thinking about a game is that you're always trying to do two things. First, there's something you're trying to maximize or minimize. For instance, if you're buying a car and you're negotiating a price with a car dealership, you're trying to minimize the price. And of course, the seller is trying to maximize it. When you play chess, you're trying to maximize the number of squares of the board your pieces control. If you're running a business, you're interested in maximizing profit. In poker, you're maximizing your money.

The second thing you're trying to do is to minimize the amount of information your opponent has. Ideally, your opponent is totally in the dark, and has to make choices randomly. There's a sort of paradox involved in this -- if you always act rationally, then your opponent will learn a lot by watching your behavior. So you have to appear as if you're behaving randomly. If you're negotiating the price of a car, for example, you don't want to give away that you've got lots of money by dressing in expensive clothes. And you should also seem a little eccentric, as if you could make an irrational choice at any moment. If you go to the dealership spouting information about the trade-in value of every car, the typical resale value, the wholesale price of the car, and so on, then your opponent (the dealership) will have been given an awful lot of information about how you behave. But if you act like you're uninformed, you'll have an informational advantage.

Understanding the role of information in game strategy is one of the things that separates expert poker players from amateurs. Amateurs think that it's bad to be caught bluffing. Experts know that you want to get caught bluffing occasionally. That way, your opponent knows that they can't guess the strength of your cards by watching how much you bet. In other words, if you occasionally get caught bluffing, your opponent will learn less by watching your bets.
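The arithmetic behind this is just Bayes' theorem. Here's a minimal sketch with invented probabilities (a 30% chance of a strong hand and a 20% bluffing rate are illustrative numbers, not real poker strategy) showing how occasional bluffing makes a big bet less informative:

```python
# How much does an opponent learn from seeing you bet big?
# All probabilities here are invented for illustration.

def posterior_strong(p_strong, bluff_rate):
    """P(strong hand | big bet), assuming you always bet big with a strong
    hand and bluff (bet big with a weak hand) at the given rate."""
    p_big_bet = p_strong + (1 - p_strong) * bluff_rate
    return p_strong / p_big_bet

# Never bluff: a big bet gives your hand away completely.
print(posterior_strong(0.30, 0.00))  # 1.0 -- opponent is certain
# Bluff 20% of the time: a big bet is far less informative.
print(posterior_strong(0.30, 0.20))  # ~0.68 -- opponent stays uncertain
```

The more often you're occasionally caught bluffing, the closer that posterior falls toward your opponent's prior guess, which is exactly the informational advantage the expert poker player is after.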

So let's get back to Clash of Clans. We've got to answer two questions:
  1. What are we trying to maximize (or minimize)?
  2. How do we avoid giving our opponent any information?
I'm going to make a couple of assumptions here. The first is that we're interested in how to arrange our village "base". That's where these issues really come into the game. The second assumption I'll make is that we're interested mainly in preventing someone from getting any stars when they attack us.

The answer to (1) is not as obvious as it seems. The way most people seem to think about this is that they're trying to minimize the amount of resources an attacker can take, and/or minimize the number of buildings their opponent can destroy. So they stick their gold and elixir behind a wall; and they put their more valuable resources (e.g. dark elixir) behind their strongest walls. On a similar note, some people will try to minimize the number of buildings their opponents can destroy by putting as many buildings as possible behind walls, or at least under the protection of a defensive building, like an archer tower or a mortar.

In general, the reason why that's a bad way to think about village layout is that it makes it easier for your opponent to destroy your defensive buildings. Then it doesn't matter if your resources are behind a wall or not because an attacker will be able to break through.

We've got to be a little more subtle about what we're trying to maximize. Here's the right way to think about it:
When you're designing your village layout, you're trying to maximize the proportion of time your defensive buildings will spend firing on the attacker. In an ideal layout, every defensive building would spend every second of an attack firing on the attacker.
The way to do this is to quickly bring the attacking troops within firing range of as many defensive buildings as possible, and keep them there as long as possible. Protecting your defensive buildings is more important than protecting your resources. So your defensive buildings should be behind walls, but your other buildings should not (except the town hall, of course).

Here's the layout of a professional game theorist:
You'll notice a few things about this layout. The first is that no resources (other than a couple of dark elixir drills) are behind walls. They're totally exposed. This means that the attacking troops will go straight for them (except for giants and hog riders, who always prefer to attack defensive buildings). But as soon as troops are at those resources, they're typically under fire by at least five or six defensive buildings. There is no place near any resources where an enemy troop is under fire by fewer than five defensive buildings. The town hall is similarly well-protected. In order to make this happen, you have to pay a lot of attention to the range of each defense. For example, the wizard towers have the shortest range, so I put those closest to the clusters of resources.

So much for what to maximize, and how to maximize it. Question (2) is about how to prevent your attacker from gaining information. The only thing your opponent doesn't immediately know about your base is the locations of your traps -- bombs, spring traps, teslas, and so on. So we're trying to hide the locations of our traps.

It's surprising to me how many people place their traps in obvious locations, and/or place them symmetrically around their village. These villages are easy to destroy because you can drop a few inexpensive troops, determine the locations of the traps, and then plan accordingly.

Making your village layout symmetrical is generally not a good idea, but it does accomplish one thing -- it makes it impossible for your attacker to prefer a specific location for dropping troops. In other words, it makes your attacker's behavior more random, which is a good thing. But that symmetry should not extend to traps. Remember the expert poker player -- you want to allow your opponent to take some of your money occasionally so that you remain unpredictable. So you should leave some of your resources unprotected by traps. Others should be highly protected with several different traps. This leads us to our second piece of advice:
In order to minimize your opponent's information about the locations of your traps, you should lay out your village (more or less) symmetrically; but you should place your traps randomly around your resources.
If you follow this advice when designing your village layout, you'll be surprised how many attacks your villagers successfully repel. Certainly, there's no magical way to defend against every attack -- you can always get sandbagged by a much more powerful army. But ever since I started applying my game-theoretical knowledge, my villagers have been repelling attacks that used to decimate them. They're very appreciative, and now I'm happy to report that every child in my village is taught some elementary game theory from a very young age.
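The randomization advice above can be sketched in a few lines of code. This is only an illustration of the idea -- the grid coordinates, trap names, and candidate spots are all hypothetical, not taken from the game:

```python
import random

# Candidate spots near your resources (hypothetical grid coordinates).
candidate_spots = [(3, 4), (5, 9), (8, 2), (10, 7), (12, 5), (6, 11)]
traps = ["bomb", "spring trap", "giant bomb"]

# Assign each trap a distinct random spot; the leftover spots stay empty,
# so there's no fixed pattern for an attacker to learn.
placement = dict(zip(traps, random.sample(candidate_spots, len(traps))))
for trap, spot in placement.items():
    print(f"{trap} -> {spot}")
```

Because some resources end up heavily trapped and others not at all, an attacker who scouts one corner of your base with a few cheap troops learns almost nothing about the rest.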

Monday, February 9, 2015

Why Being Delusional is a Full-Time Job

Here's what I mean by being "deluded" (whether it's deliberate or not). You are deluded if you require a high standard of evidence for one kind of belief, and a low standard for another. For example, it's easy to find cigarette smokers who won't quit smoking because it hasn't been "proven" that smoking causes cancer. Of course, there are mountains of data showing beyond any reasonable doubt whatsoever that smoking causes cancer. But these smokers have such an absurdly high standard of evidence that not even this amount of evidence suffices to show anything.

These smokers are deluded because that standard of evidence doesn't apply to anything else in their lives. We are quite willing to bet our lives on evidence that's not nearly as compelling as the case against smoking; and we bet our lives like this all day long. If this deluded smoker takes prescription medication, for example, it's very likely that the evidence for its safety and efficacy is far less convincing than the evidence that smoking causes cancer. If they get in a car, they have less evidence that the car won't explode. And so on.

So how does one argue with our hypothetical deluded smoker? The only way is to get a little more abstract than we usually are, by asking, "Why do you require so much more evidence to believe that smoking causes cancer than you require for anything else?". We might point out that if they consistently lived their lives by making decisions only when there is perfect, incontrovertible, and totally complete evidence, they'd never be able to get out of bed in the morning. And it's perfectly clear why such a smoker would be so stubborn about the evidence against smoking: it's because the smoker doesn't want to quit smoking. Most likely, it's because quitting smoking is unpleasant and frightening. And nothing affects our judgment more than fear (and I say this as an ex-smoker -- quitting smoking is damn scary).

Here's my claim, which may sound crazy at first. But I hope to make it more plausible, and even obvious:
In order not to be deluded, you must apply exactly the same standard of evidence to every single belief you have.
This will strike some people as a bit crazy because it certainly seems like people require more evidence for important decisions and less for unimportant decisions. But this is completely compatible with my italicized claim. Here's why.

Suppose Alice is thinking of having a dangerous medical procedure, and there's some serious risk involved. She'd be well-advised to acquire as much information and evidence as she can. She should get a second opinion from another doctor, ask about the likelihood and severity of complications, and so on. Now suppose Bob is thinking of ordering the chicken instead of the fish at a local restaurant. It would be nutty for him to gather the same amount of information as Alice. So isn't this a case where it's perfectly reasonable to have different standards of evidence?

No, it isn't. At the end of their information-gathering, Alice will believe (say) that there's a 95% chance that she's making the right decision. Bob might believe that there's only a 50% chance that his decision is right.  It's okay for Bob to go ahead and act on the basis of his decision because there's so little risk involved; the worst-case scenario is probably that he may wish he had ordered the fish instead. But Alice's worst-case scenario is far worse. So she had better make sure she's as confident as possible in her decision. There's an important difference here between how much evidence you need in order to act, and how much evidence you need in order to believe. When acting, the amount of confidence you should have will vary depending on the risks involved. But believing isn't like that. Believing is based on the evidence, not on the outcome of your decisions.
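One way to make the distinction concrete is a simple expected-value sketch. The payoffs below are invented, illustrative numbers, but they show how a single decision rule -- act when the expected payoff is positive -- demands 95% confidence from Alice while letting Bob act at 50%:

```python
# Same rule for everyone: act iff the expected payoff is positive.
# The gains and losses below are invented, illustrative stakes.

def worth_acting(p_right, gain_if_right, loss_if_wrong):
    return p_right * gain_if_right - (1 - p_right) * loss_if_wrong > 0

# Bob's dinner: trivial stakes, so 50% confidence is enough to act.
print(worth_acting(0.50, gain_if_right=2, loss_if_wrong=1))     # True
# Alice's procedure: a severe downside, so 50% is nowhere near enough...
print(worth_acting(0.50, gain_if_right=10, loss_if_wrong=100))  # False
# ...but 95% clears the bar.
print(worth_acting(0.95, gain_if_right=10, loss_if_wrong=100))  # True
```

Notice that the beliefs (the probabilities) are fixed by the evidence in both cases; only the threshold for acting moves with the stakes.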

It's a subtle distinction between confidence in believing and confidence in acting. It's perfectly rational to require different standards of evidence for justifying your actions. But it isn't rational to require different standards of evidence for your beliefs. Think about Bob and Alice again. If Bob were to say, "I'm 95% sure that the chicken is better than the fish", you'd expect him to have some evidence. Maybe the chef has a reputation for making incredibly delicious chicken, but not fish. And perhaps he has a dozen friends who swear by the wonderful chicken, but all say that the fish is lousy. But if Bob maintained his 95% confidence that the chicken is better than the fish, and he didn't have any evidence at all, you'd think he was sort of crazy. But it would not be crazy for him to go ahead and order the chicken, anyway.

All of this is to say that one's beliefs should respond to evidence in the same way, regardless of what the belief is about. Someone whose beliefs answer to one standard of evidence in one context and a different standard in another is, in my sense of the word, "deluded".

My experience as a professor taught me a lot about this subject, in at least two ways. The first is that it's well-known that university professors tend to rate themselves as above-average teachers. That is, if you ask university professors how their teaching skills compare with their colleagues', they will overwhelmingly report that they are far better than average. In fact, over 90% will rate themselves as being better teachers than their colleagues. You have to wonder what evidence they could possibly have for this self-confidence. In the overwhelming majority of cases, the answer is simple -- they have no evidence at all. And at least some of these professors are reasonably intelligent people who don't usually have beliefs that are totally unfounded. This is why, whenever I was asked if I was a good teacher (which I was asked a surprisingly large number of times), I'd always say, "I'm probably about average." If you ask me how good-looking I am or how likable I am, or how well I get along with other people, I'll tell you the same thing: "probably about average". We are all probably about average in most respects.

The second thing I learned about delusional beliefs came from my students. As a philosophy professor, I'd occasionally teach a course called "Introduction to Philosophy". In it, I'd introduce the students to various arguments for and against the existence of God (by which we mean a Judeo-Christian conception of God). Always, without exception, at least one student would object to the subject by saying this:

Student: "Belief in God isn't based on arguments and evidence. It's based on faith. You can't argue for the existence of God. You either have faith or you don't." 

The response to this common claim is pretty standard; anyone who teaches philosophy on a regular basis will know it. The subsequent dialogue goes like this:

Professor: "So let me get this straight. You may or may not have evidence that God exists. But the reason you believe in God is because you have faith, not because you have evidence."

Student: "Yes."

Professor: "But in other areas, such as choosing a college major, or deciding whether to take a medication, you look for evidence and evaluate arguments, right?"

Students always say "yes" to this, because it's obviously true. And then the next question practically asks itself:

Professor: "Okay, but why do you have different standards? Why is faith alright for religion, but not for choosing a major or taking medications?"

This is the highlight of the conversation. What is revealed is that the student must fall into one of two categories:
  1. It's possible that the student just sticks with the "faith" explanation, and that's the end of the matter. They believe in God because they have faith, and they have faith that faith is the appropriate attitude toward religious belief, and so on. Every answer to any question about their belief system is the same: "Faith".
  2. It's also possible that the student has a reply other than faith to the question of why faith is appropriate for religious belief, but not for other questions. For example, they believe that faith is appropriate because the Bible says this, or because a religious teacher said so, and so on.
I've personally never heard a student give the first reply. In my experience, the answer is always number (2).

Now, you may think that I'm setting up this example so that I can smugly conclude that these students are deluded. But the opposite is true. If they give number (2) as their answer, then they are not deluded. Here's why. Whatever reason they have for using "faith" to believe in God is actually a reason for believing in God without faith. For instance, if you believe that the Bible provides a reason to have faith, then presumably you believe that the Bible contains truths. In other words, if something is in the Bible, then you've got evidence, just as I have evidence for a statement about physics if that statement appears in a physics textbook. So the student is wrong -- they do not believe in God because of faith; they believe in God because they believe that the Bible provides evidence that God exists. In my experience, it always turns out that the students who claim to have "faith" in God actually have evidence of God's existence. That is not being delusional.

What we learn from this is that for the vast, overwhelming majority of people, "faith" (whatever that is) is not an alternative to reason and evidence. There is, in fact, much less faith than people think. "Faith" is just a word that's used to cover up rationality and reason.

This is why I think it's a mistake for the so-called "new atheists" (Dennett, Dawkins, etc.) to be so hard on people who have religious beliefs. It's simply wrong to accuse the vast majority of religious people of being "blinded" by "faith", or to accuse them of irrationality. But ironically, the reason it's a mistake is because most religious people don't have "faith" in the sense of having beliefs that aren't founded on reason. They may say that they do, but on closer examination, they've got plenty of reasons for believing in God. Now, you might think that those reasons aren't compelling -- but that's different from accusing them of irrationality. God knows, there are plenty of atheists who disbelieve in God for reasons that are less compelling than those held by some believers in God.

The more interesting group of people consists of those who claim that they're deluding themselves on purpose. This includes a small number of people who believe in God, but it also includes a lot of people who say that they have to believe unfounded things about themselves if they are to function. For example, I've heard people say that they force themselves to believe (based on no evidence at all) that they're smarter, more knowledgeable, or more charismatic than they know they actually are. The motivation is (or so they believe) that if you're more confident, you're more likely to succeed. And if you can get yourself to believe that you're exceptionally skilled, you'll be more confident.

This may seem like a good strategy, if you can pull it off (personally, I can't imagine being able to do this on purpose). But a problem arises because, like the student who says that they believe in God on faith, there's no way to separate the domains that require evidence from those that don't. And as a result, the delusional beliefs can spread through one's mind like a virus.

We can make this concrete by a simple example. Suppose I can somehow get myself to believe that I'm the smartest, most knowledgeable person in the room when I'm presenting some recommendations at a business meeting. Let's grant that as a result, I become more confident, and I do an excellent job in my presentation. During the subsequent discussion, let's say that Carol (who is at least as intelligent and knowledgeable as I am) raises a potential problem with my recommendations. How am I supposed to evaluate her objection?

From a purely objective perspective, I should take her concerns very seriously and treat her as someone who is equally well-qualified to evaluate the data. All else being equal, there's an even chance that she's right and I'm wrong. But this isn't the case if I'm more intelligent and knowledgeable than she is. All else being equal, a more intelligent and knowledgeable person is more likely to be correct.

So what do I do? Either I stick to my guns by consistently treating myself as if I were better-qualified than Carol, or I go back to a more objective perspective and reevaluate my proposal. If I do the former, I'll likely use some irrational criteria to evaluate my own work; to do otherwise would be to undermine the belief that I'm better than Carol. In this way the irrational belief that I've deliberately acquired can "spill over", so to speak, into domains that I really ought to be evaluating rationally.

I think this is a general phenomenon. In the workplace, imagine I believe that I'm a fair-minded person who treats everyone equally, but I don't have evidence of this. I merely believe it because it's so uncomfortable to believe otherwise. Confronted with evidence that I've (perhaps unwittingly) taken part in discriminatory behavior, I can't consistently evaluate that evidence rationally. If I'm a scientist who also happens to be a young-earth creationist, I won't be able to consistently evaluate scientific evidence rationally. It's easy to come up with lots of examples like this. This is why being delusional is not a part-time job. Being delusional is a full-time occupation.

Wednesday, February 4, 2015

Jumping Off The Ivory Tower: One Year Later

It's been a little over a year since I quit my tenured position at the University of Missouri-Columbia, where I was an Associate Professor in the Department of Philosophy. Since then, I've been a software engineer for a Chicago-based tech startup. I'd never done any programming or engineering work professionally, although I've been programming since I was in the fourth grade. For the most part, people I knew were very supportive of this decision, although many thought I was crazy for giving up tenure. A few people thought it was a terrible idea; some thought I was being really stupid or naive.

I had many reasons for leaving academia: frustration with the direction the university was headed; highly unethical behavior by my colleagues and the administration; disincentives for interdisciplinary work; and wages that were going down in real dollars over the course of years. Those are just some highlights. But the most important reason was that I felt that I was stagnating in my academic job. I was isolated, there was no support for the kind of research I was doing, and I wasn't learning anything from the people I worked with. I didn't want to find myself still there thirty years later, with a fossilized brain, no money, and a lot of regrets.

Now that it's been a year (thirteen months, actually), I can say something about how it's worked out so far. Here's the short version: It's been great, and I'm really glad I made the leap.

Perhaps the best way to summarize the past year, as well as the benefits and drawbacks to leaving academia, is to make a list of all the reasons this was supposed to have been a bad idea.

What if you lose your job? The biggest advantage to having tenure, of course, is that you're very unlikely to lose your job. Contrary to popular opinion, however, tenure isn't an absolute guarantee of employment; there are plenty of ways to fire tenured faculty. If there are economic emergencies, tenure can be pierced. If the faculty member is judged to be incompetent, they can be fired. But my favorite way to fire a faculty member is to use a little loophole that allows administrators to dissolve entire departments. In at least some universities, the administration can get rid of entire departments without cause, even if that means firing tenured faculty. So they create a new department and put all the faculty they don't like into it. Then they trash the whole department and throw the faculty onto the street. I've personally seen this happen; although it's unusual, it does happen.

Nonetheless, being a software engineer is not nearly as secure a job as being a tenured professor. But if you lose your job as a software engineer, you can do something you can't easily do as a professor: you can get a new job. I get contacted at least two or three times a week by recruiters who are trying to hire software engineers for various companies all over the country. This is not uncommon. If I lose my job, I'll get another one. Just like people in the real world. Only faculty at universities think it's an absolute disaster to lose one's job, and that's because faculty positions are so hard to replace.

You won't have the same amount of free time you used to have. This one is absolutely laughable. I worked much longer hours as a faculty member than I do now, even though I still put in more than a typical eight-hour day at my current job. You do get more flexibility as a faculty member, which is certainly an advantage. But I'm not working longer hours than I used to.

It's also nice to not have to live with the dishonesty of an academic work schedule. Like most faculty, I was paid for nine months' work, and got the summers off. But like most faculty, I worked full-time all summer, every summer, doing research, advising students, and so on. I just wasn't paid for it. Now, I work full-time all year, but I actually get paid for all my work, not just three-quarters of it.

You won't be able to do the research you enjoy. It is true that there are research problems I'd like to work on, but have very little time for now that I'm in the private sector. But luckily, I happen to work for a company that's doing some very interesting stuff; some of it is surprisingly close to problems from analytic philosophy. This isn't the place to get into it (and some of it has to do with intellectual property that I shouldn't be discussing anyway), but a good portion of my professional effort is directed at some very interesting and challenging problems -- and several of these problems are of a type that nobody else in the world is addressing.

The question is how much of my time is spent doing this interesting work as opposed to work that's less interesting. I don't know how to put a number on that proportion, but it certainly feels to me like it's at least as high as the amount of time I got as a professor. As much as faculty like to delude themselves into believing that they're doing fascinating research all day long, that's just not true in general. It certainly wasn't true in my case; and I had a pretty decent position in a department with a PhD program at a flagship state university.

The problems you had with university culture are just as bad (or worse) in the private sector. I'm sure there are universities that are much more pleasant than my old one, and there are plenty of companies that are miserable to work in. And no business is perfect, including the one I work for. But when I compare the culture of my current employer with the culture of my old university and department, the conclusion is unbelievably clear. My current work environment is vastly better. A major reason why I needed to leave my old university was because unethical and illegal behavior was so rampant. I am under the impression that the scope of this behavior was probably worse than average, but it seemed like everyone I knew -- no matter what university they were affiliated with -- had some demoralizing story. For example, it's no secret that academic philosophy, in particular, has a major problem with discrimination against women.

Of course, only an idiot would say that academia has a monopoly on unethical, illegal behavior. And I did not leave academia believing that the private sector was run entirely by angels. But my present work environment does not have anything like the problems I encountered every single day when I was a professor.

And I can actually prove that it's better. Here's the proof: I haven't left my job. Like I said, software engineers are in extremely high demand, and any of my colleagues could easily find work elsewhere. This is not true of faculty. The academic job market is so bad that unless you're a superstar, it's extremely difficult to find a new position -- even if you're just making a lateral move. So faculty tend to stay in their jobs even if the job is awful. This is not true in my small corner of the private sector.

Of course, I wasn't right about everything. I made several errors in judgment with this decision. Here are some highlights.

I thought I'd be making a little more money. But in fact, I'm making a lot more money. The reason is interesting, though. As a professor at a state university, I was a state employee. Faculty, like all state employees, are subject to state budget cuts. Of course, they don't like to actually cut salaries outright. But they do cut benefits, and they do other things such as requiring greater "employee contributions" toward benefits. When I started my current job, the guy who hired me let me know that as a startup, their health benefits would not be good. Imagine my shock when I discovered that they were vastly better than what I had been receiving as a tenured professor. When the state needs to raise revenue, and they can't call something a "tax", they like to extract money from state workers. My former university administrators like to brag about how they weren't cutting salaries; but they were cutting benefits and taking increasing percentages out of our paychecks to pay for them. It takes about a tenth of a second to realize that this is identical to a cut in pay or an increase in taxes. In real terms, my salary as an Associate Professor was about ten percent less than my starting salary as an Assistant Professor.
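The arithmetic here is worth spelling out. The figures below are invented round numbers, not my actual salary or benefits, but they show how a rising nominal salary can still amount to a real pay cut once benefit contributions and inflation are counted:

```python
# All figures are invented, illustrative numbers.

def real_takehome(nominal, benefit_contribution, cumulative_inflation):
    """Take-home pay after benefit contributions, in starting-year dollars."""
    return (nominal - benefit_contribution) / (1 + cumulative_inflation)

start = real_takehome(60_000, benefit_contribution=2_000, cumulative_inflation=0.00)
later = real_takehome(65_000, benefit_contribution=5_000, cumulative_inflation=0.15)
print(f"real pay changed by {later / start - 1:+.1%}")  # about -10%
```

A $5,000 nominal raise, eaten by a $3,000 increase in "employee contributions" and 15% cumulative inflation, leaves you roughly ten percent poorer in real terms -- which is the shape of what happened to my academic salary.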

I thought I'd learn a lot. But in fact, I'm learning a huge amount every single day. I knew that, because I hadn't ever worked as a programmer or engineer before, my skills would not be up to snuff initially. But I dramatically underestimated how much I'd have to learn. Being a software engineer requires skills I didn't know even existed. I was unprepared for this. But fortunately, because I was up-front during my interviews about the range of my (in)experience, my colleagues knew that I would have to get on a steep learning curve. So I've had the opportunity to pick up a lot of skills from some very smart people. I've learned more in the past year than in the previous ten; and the vast majority of what I've learned is very interesting stuff.

So, do I have any regrets? Absolutely. I don't regret going into academia because it was a good experience, I learned a lot, and I really enjoyed working with my students. The skills I picked up as an academic philosopher have been quite valuable in my life in the private sector. But I do regret not taking this leap a few years earlier.

Saturday, January 31, 2015

Piketty's "Capital in the 21st Century" for Non-Economists

I just finished reading Thomas Piketty's "Capital in the 21st Century". It is a stunning achievement for many reasons. One of those reasons is that despite the book's having been described as "Nobel-worthy" by several prominent economists, a layperson can still understand it. It is not often that a work deemed worthy of a Nobel Prize can be understood by any but a few specialists.

What makes it so accessible to a non-economist is that it provides a new framework for thinking about the nature of economic inequality and related concepts such as wealth and income. When a new conceptual framework is first proposed, everyone must begin at square one, so to speak, and this (temporarily) places laypeople and specialists in the same position. This is what has happened with Piketty's book. In short, this is an historic opportunity for ordinary people to take a peek into the rarefied world of economics.

Inequality of What? Inequality Between Whom?
As everyone who has heard of this book knows by now, the topic is "inequality". But inequality of what and inequality between whom? These related questions seem, at first glance, to have obvious answers: It's inequality of wealth between the upper economic classes and everyone else. But because Piketty is developing a new conceptual framework for these issues, we have to proceed cautiously. In fact, I'm pretty confident that a large proportion of the discussion of "Capital" is hurt by a misunderstanding of Piketty's goals for his overall project. This misunderstanding shows up right away, when commentators try to quibble with the question of who counts as "wealthy".

Let's consider two hypothetical people, whom we will call "Robert" and "William". Robert is the fortunate recipient of a significant inheritance in the form of several well-appointed apartment buildings. William is the fortunate recipient of a good education and a skill set that's in demand by employers. Robert can make his living as a "rentier" -- a term that's unfamiliar to most non-specialists, but which simply refers to people who make money because they own something, as opposed to working. The canonical example is a landlord, who is able to rent out buildings or land for money. The landlord is able to do this because he owns something, not because he works or produces anything. Of course, he may also work (e.g. to maintain the buildings or land), and he may have come to own the things he does because he's worked very hard in the past. But at this point in time, Robert is able to make money because he owns something, and that makes him a rentier. The money he receives is called, appropriately enough, a "rent".

William is in a different position. He's acquired a skill set and an education, and now he is able to trade his time and effort for money. In a metaphorical sense, he "owns" his own talent, but that's not relevant here. William has to work for his money; he gets paid because he works, not because he owns something. William is a "worker".

The difference between Robert the rentier and William the worker isn't necessarily that one makes more money than the other. William's skills may enable him to enjoy a salary that's much larger than Robert's rents. But for concreteness, let's say that they spend about the same amount of money to maintain their lifestyle (i.e. their consumption is about the same). Robert is able to save a percentage of his rents and reinvest them in order to eventually purchase new apartment buildings. William is able to advance in his career, acquire new skills, and increase his income. Piketty's question is, "Who is more likely to do better in the long run: the rentier or the worker?".

There's no simple answer to that question. There is no law that forces one to do better than the other. William could enjoy an astronomical salary eventually, and Robert might see real estate prices crash. Or William might stagnate in his career while Robert's property skyrockets in value. People often say that Piketty is putting forward "laws" of inequality, but he is doing no such thing. That would be dumb, and Piketty is not dumb.

Instead, the answer to the question of whether William will do better than Robert or vice-versa depends on two numbers. The first is the rate of return on capital -- when Robert reinvests his money, at what percentage rate will it grow? Rentiers, on average, can't get more than the typical rate of return on capital. The second is the growth rate of the economy. Workers on average can't expect a salary increase greater than the growth rate of the economy. Piketty refers to the first number as r and the second as g. If r > g, then Robert the rentier will do better than William the worker. If g > r, then the opposite will be true. And there is no ironclad law that r > g or that g > r.

Piketty's massive work documents in excruciating detail that throughout history, across every country with enough data to measure, r > g the vast majority of the time. Rentiers do better than workers on average -- a lot better. Thanks to compound interest, rentiers and workers diverge in their economic status very quickly, even if r is only a tiny bit greater than g. And the tendency to diverge is self-reinforcing in two distinct ways. If Robert the rentier pulls ahead of William, he will be earning a rate of return on an ever-increasing stock of capital. So the difference in the amount of money each possesses will increase. But also, Robert's rate of return -- that is, the value of r itself -- will increase, too. If Robert's fortune is sufficiently large, then he will be able to take advantage of more exotic kinds of investments that have a higher rate of return. So not only will Robert and William diverge, but they will diverge faster and faster over time.
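The divergence from compounding is easy to see in a toy simulation. This sketch is mine, not Piketty's; the rates r = 5% and g = 1.5% are assumptions chosen for illustration (roughly the shape of the long-run historical pattern, but the only point here is how quickly a small gap compounds):

```python
# Illustrative only: toy model of the r vs g divergence.
# Assumed rates (not figures from the book): r = 5% return on
# capital, g = 1.5% economic growth, both compounding annually.
r, g = 0.05, 0.015

robert = 1.0   # rentier's capital stock (normalized units)
william = 1.0  # worker's annual income (normalized units)

for year in range(50):
    robert *= 1 + r    # capital grows at the rate of return
    william *= 1 + g   # wages grow with the economy

print(f"After 50 years: rentier = {robert:.2f}, worker = {william:.2f}")
# The gap widens geometrically, as ((1 + r) / (1 + g)) ** years.
```

Even this simple version understates the effect, since it holds r fixed; as the paragraph above notes, a large enough fortune can access investments with a higher r, which accelerates the divergence further.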

We all know this phenomenon. Piketty makes a lot of references to literature, and so I'll do the same by making a reference to the not-quite equally erudite film, "Anchorman 2: The Legend Continues", starring Will Ferrell. There is a character who is a very wealthy investor and describes himself as a self-made man. He inherited 300 million dollars, and he brags to everyone that in the decades that followed, he was able to increase that fortune to 305 million dollars. This is a joke that everyone gets. A wealthy rentier with 300 million dollars could easily increase that fortune by a far, far greater amount with virtually no effort whatsoever. The fact that such a joke can be made in a film like "Anchorman 2" shows how pervasive the divergence between rentiers and workers is, and that this divergence is common knowledge.
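To see just why the joke lands, here's a back-of-the-envelope calculation (my own; the 4% rate and 30-year span are assumptions, not figures from the film or the book):

```python
# Back-of-the-envelope check on the Anchorman 2 joke.
# Assumed numbers (mine, not the film's): what does $300 million
# become at a modest 4% annual return over three decades?
principal = 300e6
years = 30
modest_rate = 0.04  # well below what large fortunes typically earn

fortune = principal * (1 + modest_rate) ** years
gain = fortune - principal
print(f"${fortune / 1e6:,.0f}M total, a gain of ${gain / 1e6:,.0f}M")
# → roughly $973M total, a gain of about $673M -- not $5M.
```

Even at a rate any passive investor could get, the character's fortune should have more than tripled; a gain of 5 million on 300 million is the punchline.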

So now we can return to the question of defining what counts as "wealth". The answer is pretty simple, even if there are subtleties in applying it. Wealth is anything that provides a rent to the owner. The inequality Piketty is concerned with is the inequality between those who get rents and those who work. Of course, lots of people fall in between those two extremes, but it's still a meaningful distinction.

Reading "Capital" as an American
About a quarter of the people who visit this blog are not Americans, so I'll clarify a few things to begin this section. I happen to be an American, born in 1972, which makes me a member of the so-called "Generation-X". My parents are "Baby Boomers", who grew up in the decades following World War II.

Like other members of my generation, I grew up learning that America was a highly upwardly-mobile society. The mere fact that you were born into a particular socio-economic class did not mean that you had to stay there. You could get ahead if you did the right things. "The right things" were simple to understand -- you had to go to college to get an education, and you had to work hard. You could get an entry-level professional position after college and rise through the ranks of the American workforce. Eventually, with enough hard work, you could do very well economically over the long run.

But is that true? That depends on the relationship between r and g. If g > r, then the growth rate of the economy (and therefore, of average wages) increases at a faster rate than the return on investment. Workers, especially skilled workers, will do best over the long run -- even better than people who are born wealthy. But on the other hand, if r > g, the opposite will be true. In such a world, the best way to get ahead is to be ahead already. The wealthy will become wealthier more quickly than workers' salaries can increase. The image that my generation grew up with was of an America in which g > r.

I mention my and my parents' generations because the Baby Boomers are unique in American history. During the years following World War II, the economy grew very quickly -- so quickly, in fact, that for a time, g was in fact greater than r. The people whose fortunes increased most rapidly were the workers, not the rentiers. So it should be no wonder that my generation was inculcated with the view that hard work is so well-rewarded; for our parents' generation, this was true. However, we now live in a world where r > g once again, as it was in the years leading up to the two world wars.

The interesting question here is "why?". Why did g outstrip r for the Baby Boomers? The romantic explanation is very common among Americans -- it's because of the hard work and highly moral character of the so-called "Greatest Generation", which grew up during the Depression and later defeated fascism in World War II. That generation led the world for a time into an era of prosperity for honest, hard-working Americans. This romantic vision has a corollary -- if we could somehow get back to the values that made the Greatest Generation so great, we could enjoy the kind of prosperity and upward mobility that the Baby Boomers enjoyed.

Unfortunately, the romantic explanation turns out to be false. In fact, Piketty buries this idealistic worldview under mountains of hard, quantifiable data. The Greatest Generation may have been great, but that era of upward mobility wasn't because of their wonderful character or hard work. It was because the calamity of World War II (and also World War I) had three effects that temporarily reduced the value of r, making the return on capital much lower than it would otherwise have been. First, the wars destroyed huge amounts of capital directly. There's nothing like a "Flying Fortress" (coincidentally, like the one my grandfather flew in) to destroy lots of capital. Second, they diverted capital into unproductive uses. When capital is spent on the military during wartime, it isn't reinvested. And third, the governments' need for revenue led to steeply progressive (in fact, confiscatory) taxes that wiped out a large share of the wealth of the most wealthy Americans.

Furthermore, following World War II, the huge population increase (known as the Baby Boomers) stimulated the economy, as did spending on rebuilding Europe. The decades following World War II were therefore unique in history. Those factors temporarily caused g to be greater than r, making that era highly unusual in that the best way to improve your socio-economic status was by working.

This is why I, as an American member of Generation-X, was so struck by this discussion. I happen also to have been a professor at large state universities for about a decade. Like other faculty, I've noticed a dramatic shift in attitude among my students. When I went to college, I believed that I could get ahead and do better economically than my parents, and that this opportunity was due to the availability of a college education. Nowadays, students largely seem not to have this attitude. They seem to think that they'll be lucky if they tread water. College merely increases their chances of doing so, but they don't think that hard work will allow them to get ahead. I wish I could say that their cynicism was unfounded. But unfortunately, it's justified. Piketty shows -- again, with mountains of data -- that workers are having a harder time merely staying afloat, while the wealthy continue to amass larger and larger fortunes.

It's common to read so-called "experts" who allege that Piketty is against economic inequality. This is baffling to me, because Piketty says over and over again in the book that he is not against economic inequality per se, and that some level of inequality is necessary. Nothing he says contradicts this. The only way to read Piketty as being unequivocally against inequality is by not reading his book at all.

Indeed, Piketty's actual views on inequality are quite moderate. He states, repeatedly, that:
  • inequality is necessary to provide an incentive for people to work hard and innovate;
  • there is no mathematical formula that reveals what level of inequality is harmful;
  • only a democratic discussion among a well-informed citizenry should determine economic policy;
  • such a discussion must be fueled by both data and our moral judgments; and
  • at some threshold (which will be different for different societies) a severe enough level of inequality will lead to social instability, and this is something to be avoided.
He sees his book as providing two main services. First, it provides the hard data required to be informed about economic inequality. Second, it places the issue of inequality in front of the public. To be sure, he does have views about how to address the problem of vast levels of inequality, which come down to instituting an international progressive tax on capital, and building institutions that make financial transactions more transparent (which would have the effect of limiting the use of tax havens by large corporations and very wealthy individuals). People who are arguing in good faith will no doubt disagree on the first proposal. Personally, I don't see how anyone can seriously object to increasing transparency in international financial transactions by large corporations and the fabulously wealthy individuals who avoid paying taxes by stashing their money in Switzerland, the Cayman Islands, Ireland, or any of the other places where capital goes when it wants to keep a low profile.

But only a relatively small section of this massive book is dedicated to Piketty's positive proposals for addressing economic inequality. One gets the impression that his main purpose in putting forward these ideas is to get a conversation started, not to say the last word on the topic.