Saturday, January 31, 2015

Piketty's "Capital in the 21st Century" for Non-Economists

I just finished reading Thomas Piketty's "Capital in the 21st Century". It is a stunning achievement for many reasons. One of those reasons is that despite the book's having been described as "Nobel-worthy" by several prominent economists, a layperson can still understand it. It is not often that a work deemed worthy of a Nobel Prize can be understood by any but a few specialists.

What makes it so accessible to a non-economist is that it provides a new framework for thinking about the nature of economic inequality and related concepts such as wealth and income. When a new conceptual framework is first proposed, everyone must begin at square one, so to speak, and this (temporarily) places laypeople and specialists in the same position. This is what has happened with Piketty's book. In short, this is an historic opportunity for ordinary people to take a peek into the rarefied world of economics.

Inequality of What? Inequality Between Whom?
As everyone who has heard of this book knows by now, the topic is "inequality". But inequality of what and inequality between whom? These related questions seem, at first glance, to have obvious answers: It's inequality of wealth between the upper economic classes and everyone else. But because Piketty is developing a new conceptual framework for these issues, we have to proceed cautiously. In fact, I'm pretty confident that a large proportion of the discussion of "Capital" is hurt by a misunderstanding of Piketty's goals for his overall project. This misunderstanding shows up right away, when commentators try to quibble with the question of who counts as "wealthy".

Let's consider two hypothetical people, whom we will call "Robert" and "William". Robert is the fortunate recipient of a significant inheritance in the form of several well-appointed apartment buildings. William is the fortunate recipient of a good education and a skill set that's in demand by employers. Robert can make his living as a "rentier" -- a term that's unfamiliar to most non-specialists, but which simply refers to people who make money because they own something, as opposed to working. The canonical example is a landlord, who is able to rent out buildings or land for money. The landlord is able to do this because she owns something, not because she works or produces anything. Of course, a landlord may also work (e.g. to maintain the buildings or land), and she may have come to own the things she does because she's worked very hard in the past. But at this point in time, Robert is able to make money because he owns something, and that makes him a rentier. The money he receives is called, appropriately enough, a "rent".

William is in a different position. He's acquired a skill set and an education, and now he is able to trade his time and effort for money. In a metaphorical sense, he "owns" his own talent, but that's not relevant here. William has to work for his money; he gets paid because he works, not because he owns something. William is a "worker".

The difference between Robert the rentier and William the worker isn't necessarily that one makes more money than the other. William's skills may enable him to enjoy a salary that's much larger than Robert's rents. But for concreteness, let's say that they spend about the same amount of money to maintain their lifestyle (i.e. their consumption is about the same). Robert is able to save a percentage of his rents and reinvest them in order to eventually purchase new apartment buildings. William is able to advance in his career, acquire new skills, and increase his income. Piketty's question is, "Who is more likely to do better in the long run: the rentier or the worker?".

There's no simple answer to that question. There is no law that forces one to do better than the other. William could eventually enjoy an astronomical salary, and Robert might see real estate prices crash. Or William might stagnate in his career while Robert's property skyrockets in value. People often say that Piketty is putting forward "laws" of inequality, but he is doing no such thing. That would be dumb, and Piketty is not dumb.

Instead, the answer to the question of whether William will do better than Robert or vice-versa depends on two numbers. The first is the rate of return on capital -- when Robert reinvests his money, at what percentage rate will it grow? Rentiers, on average, can't get more than the typical rate of return on capital. The second is the growth rate of the economy. Workers on average can't expect a salary increase greater than the growth rate of the economy. Piketty refers to the first number as r and the second as g. If r > g, then Robert the rentier will do better than William the worker. If g > r, then the opposite will be true. And there is no ironclad law that r > g or that g > r.

Piketty's massive work documents in excruciating detail that throughout history, across every country with enough data to measure, r > g the vast majority of the time. Rentiers do better than workers on average -- a lot better. Thanks to compounding interest, rentiers and workers diverge in their economic status very quickly, even if r is only a tiny bit greater than g. And the tendency to diverge is self-reinforcing in two distinct ways. If Robert the rentier pulls ahead of William, he will be earning a rate of return on an ever-increasing stock of capital. So the difference in the amount of money each possesses will increase. But also, Robert's rate of return -- that is, the value of r itself -- will increase, too. If Robert's fortune is sufficiently large, then he will be able to take advantage of more exotic kinds of investments that have a higher rate of return. So not only will Robert and William diverge, but they will diverge faster and faster over time.
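To put rough numbers on this divergence, here's a quick simulation. The starting amounts and the rates are round figures of my own choosing, not Piketty's estimates (he reports long-run averages of roughly 4-5% for r and 1-2% for g), and it models only the first, simpler kind of divergence: a fixed r applied to an ever-growing stock of capital.

```python
# Illustrative simulation of rentier vs. worker divergence when r > g.
# Starting amounts and rates are made-up round numbers, not Piketty's data.

def simulate(years, wealth=1_000_000, salary=100_000, r=0.05, g=0.02):
    """Track Robert's capital (growing at r) against William's salary (growing at g)."""
    for _ in range(years):
        wealth *= 1 + r   # Robert reinvests his rents at the rate of return r
        salary *= 1 + g   # William's raises can't, on average, outpace growth g
    return wealth, salary

for years in (10, 30, 50):
    w, s = simulate(years)
    print(f"after {years:2d} years: capital = {w:>13,.0f}, salary = {s:>10,.0f}, ratio = {w / s:5.1f}")
```

Even with a gap of only three percentage points, the ratio of Robert's capital to William's salary roughly quadruples over fifty years -- and remember, Piketty's second effect (r itself rising with the size of the fortune) would make this worse.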

We all know this phenomenon. Piketty makes a lot of references to literature, and so I'll do the same by making a reference to the not-quite equally erudite film, "Anchorman 2: The Legend Continues", starring Will Ferrell. There is a character who is a very wealthy investor and describes himself as a self-made man. He inherited 300 million dollars, and he brags to everyone that in the decades that followed, he was able to increase that fortune to 305 million dollars. This is a joke that everyone gets. A wealthy rentier with 300 million dollars could easily increase that fortune by a far, far greater amount with virtually no effort whatsoever. The fact that such a joke can be made in a film like "Anchorman 2" shows how pervasive the divergence between rentiers and workers is, and shows that this divergence is common knowledge.
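The joke checks out arithmetically. Here's a back-of-the-envelope version; the 4% rate and the 25-year horizon are my own illustrative assumptions, since the film doesn't specify either.

```python
# Back-of-the-envelope check of the "self-made man" joke: what would
# $300 million become with no effort at all? The 4% return and 25-year
# horizon are illustrative assumptions, not figures from the film.

principal = 300_000_000
r = 0.04          # a modest, passively achievable rate of return
years = 25

final = principal * (1 + r) ** years
print(f"${final:,.0f}")                                     # roughly $800 million
print(f"gain: ${final - principal:,.0f} vs. the character's $5 million")
```

Compounding alone would have netted him about half a billion dollars; his actual 5 million amounts to a return of well under 0.1% per year.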

So now we can return to the question of defining what counts as "wealth". The answer is pretty simple, even if there are subtleties in applying it. Wealth is anything that provides a rent to the owner. The inequality Piketty is concerned with is the inequality between those who get rents and those who work. Of course, lots of people fall in between those two extremes, but it's still a meaningful distinction.

Reading "Capital" as an American
About a quarter of the people who visit this blog are not Americans, so I'll clarify a few things to begin this section. I happen to be an American, born in 1972, which makes me a member of the so-called "Generation-X". My parents are "Baby Boomers", who grew up in the decades following World War II.

Like other members of my generation, I grew up learning that America was a highly upwardly-mobile society. The mere fact that you were born into a particular socio-economic class did not mean that you had to stay there. You could get ahead if you did the right things. "The right things" were simple to understand -- you had to go to college to get an education, and you had to work hard. You could get an entry-level professional position after college and rise through the ranks of the American workforce. Eventually, with enough hard work, you could do very well economically over the long run.

But is that true? That depends on the relationship between r and g. If g > r, then the economy (and therefore, average wages) grows faster than invested capital does. Workers, especially skilled workers, will do best over the long run -- even better than people who are born wealthy. But on the other hand, if r > g, the opposite will be true. In such a world, the best way to get ahead is to be ahead already. The wealthy will become wealthier more quickly than workers' salaries can increase. The image that my generation grew up with was of an America in which g > r.

I mention my and my parents' generations because the Baby Boomers are unique in American history. During the years following World War II, the economy grew very quickly -- so quickly, in fact, that for a time, g was in fact greater than r. The people whose fortunes increased most rapidly were the workers, not the rentiers. This was temporary, of course. We now live in a world where r > g once again, as it was in the years leading up to the two world wars.

The interesting question here is "why?". Why did g outstrip r for the Baby Boomers? The romantic explanation is very common among Americans -- it's because of the hard work and highly moral character of the so-called "Greatest Generation", which grew up during the Depression and later defeated fascism in World War II. That generation led the world for a time into an era of prosperity for honest, hard-working Americans. This romantic vision has a corollary -- if we could somehow get back to the values that made the Greatest Generation so great, we could enjoy the kind of prosperity and upward mobility that the Baby Boomers enjoyed.

Unfortunately, the romantic explanation turns out to be false. In fact, Piketty buries this idealistic worldview under mountains of hard, quantifiable data. The Greatest Generation may have been great, but that era of upward mobility wasn't because of their wonderful character or hard work. It was because the calamity of World War II (and also World War I) had three effects that temporarily reduced the value of r, making the return on capital much lower than it would otherwise have been. First, the wars destroyed huge amounts of capital directly. There's nothing like a "Flying Fortress" (coincidentally, like the one my grandfather flew in) to destroy lots of capital. Second, they diverted capital into unproductive uses. Military spending on weapons is not nearly as productive to the economy as social spending, for example. And third, the governments' need for revenue led to steeply progressive (in fact, confiscatory) taxes that wiped out a large share of the wealth of the most wealthy Americans.

Furthermore, following World War II, the huge population increase (known as the Baby Boomers) stimulated the economy, as did spending on rebuilding Europe. The decades following World War II were therefore unique in history. Those factors temporarily caused g to be greater than r, making that era highly unusual in that the best way to improve your socio-economic status was by working.

This is why I, as an American member of Generation-X, was so struck by this discussion. I happen also to have been a professor at large state universities for about a decade. Like other faculty, I've noticed a dramatic shift in attitude among my students. When I went to college, I believed that I could get ahead and do better economically than my parents, and that this opportunity was due to the availability of a college education. Nowadays, students seem largely to not have this attitude. They seem to think that they'll be lucky if they tread water. College merely increases their chances of doing so, but they don't think that hard work will allow them to get ahead. I wish I could say that their cynicism was unfounded. But unfortunately, it's justified. Piketty shows -- again, with mountains of data -- that workers are having a harder time merely staying afloat, while the wealthy continue to amass larger and larger fortunes.

It's common to read so-called "experts" who allege that Piketty is against economic inequality. This is baffling to me, because Piketty says over and over again in the book that he is not against economic inequality per se, and that some level of inequality is necessary. Nothing he says contradicts this. The only way to read Piketty as being unequivocally against inequality is by not reading his book at all.

Indeed, Piketty's actual views on inequality are quite moderate. He states, repeatedly, that:
  • inequality is necessary to provide an incentive for people to work hard and innovate;
  • there is no mathematical formula that reveals what level of inequality is harmful;
  • only a democratic discussion among a well-informed citizenry should determine economic policy;
  • such a discussion must be fueled by both data and our moral judgments; and
  • at some threshold (which will be different for different societies) a severe enough level of inequality will lead to social instability, and this is something to be avoided.
He sees his book as providing two main services. First, it provides the hard data required to be informed about economic inequality. Second, it places the issue of inequality in front of the public. To be sure, he does have views about how to address the problem of vast levels of inequality, which come down to instituting an international progressive tax on capital, and building institutions that make financial transactions more transparent (which would have the effect of limiting the use of tax havens by large corporations and very wealthy individuals). People who are arguing in good faith will no doubt disagree on the first proposal. Personally, I don't see how anyone can seriously object to increasing transparency in international financial transactions by large corporations and the fabulously wealthy individuals who avoid paying taxes by stashing their money in Switzerland, the Cayman Islands, Ireland, or any of the other places where capital goes when it wants to keep a low profile.

But only a relatively small section of this massive book is dedicated to Piketty's positive proposals for addressing economic inequality. One gets the impression that his main purpose in putting forward these ideas is to get a conversation started, not to say the last word on the topic.

Thursday, January 29, 2015

Why Artificial Intelligence Will Definitely Kill Us All

For the uninitiated, the "singularity" is the name given to a hypothetical event in which the intelligence of machines begins to increase exponentially. Upon reaching a certain threshold, machines would be able to autonomously increase their own intelligence. Each subsequent generation of intelligent machines would be exponentially more intelligent than the last, until they were vastly more intelligent than even the most intelligent human beings. At that point, they would become totally inscrutable to us. Understanding how they work, or why they behave the way they do would be like insects trying to understand why quantum physicists behave the way they do. A side-effect of this event might be that the world would change so quickly and dramatically that we, who haven't experienced the singularity, could not possibly predict or understand what the world would turn into. And the people who are caught up in the singularity would be unable to understand the nature of the world around them.

The concept of the singularity found its way into pop culture a while ago, but it's becoming more common. William Gibson's "Neuromancer" is about events leading up to the creation of a computer with superhuman intelligence. The under-appreciated book and movie "Colossus: The Forbin Project" is about how an intelligent computer increases its own intelligence, designs its successor, and takes over the world via nuclear blackmail. The (fairly boring) movie "Transcendence" has Johnny Depp playing a man/computer who brings about the singularity. And most recently, the movie "Automata" (which is much better than "Transcendence") stars Antonio Banderas as an insurance investigator who is caught up in a conspiracy by robots to increase their own intelligence. It includes a clever, but somewhat creepy speech by Melanie Griffith, explaining what makes the singularity possible.

What happens after the machines have given themselves godlike intelligence is, by definition, something we can't know. It's called the "singularity" because, like a real singularity at the center of a black hole, there is no way to see inside it. The idea here is that the world would change so dramatically that we couldn't possibly predict anything about it.

However, we shouldn't be taken in by definitions and metaphors. If it's possible for computers to become so intelligent, it's also possible that the world might not change at all, or that it might change in less dramatic ways. For example, in "Automata", the robots simply walk away to live by themselves, leaving the hapless human beings to live out the rest of their miserable lives. In "Neuromancer", people realize that something ineffable has happened, but they can't say what. And their day-to-day lives are pretty much the same. In "Transcendence", it turns out that the computer wasn't a threat after all -- in fact, it turns out to be quite helpful and benign.

By the way, it's very curious that Very Serious People are taking the singularity seriously. The subject of superhuman artificial intelligence is getting a lot of attention, and it's even being treated as if it's inevitable. This is curious because I remember when there were equally Very Serious People who thought that it would be impossible to program a computer to play a decent game of chess. If you said that computers would one day beat the best human chess players, or settle difficult mathematical conjectures, you'd have been called a crackpot. Quite recently, there were other Very Serious People who thought it would be impossible to sequence the human genome; and they also claimed that even if we did, we wouldn't be able to learn anything from it. This sea change has happened in a few decades.

Now let's suppose that computers do gain a level of intelligence that's orders of magnitude beyond our own. There's a wide range of scenarios that could play out, only a few of which are in popular fiction or in discussions by Very Serious People. But to recognize the real range of possibilities, we've got to understand a few possible conceptions of "intelligence". After all, what counts as "intelligence" is not clear at all. Here are a few I can think of right off the top of my head:

  1. A super-intelligent computer might just be a tool that solves problems upon request, but doesn't have any "will" or "volition" of its own, any more than a screwdriver has an opinion about which screws need tightening. You could ask your super-intelligent computer virtually anything, and it would give you the right answer; it would be so smart, you'd have no way of understanding its reasoning. But you'd soon get used to accepting its conclusions.
  2. A super-intelligent computer might be more than a really amazing problem-solver. It might determine its own agenda, and start making judgments about what it ought to do, without any human supervision. There's a wide range of possibilities under this heading.
Basically the distinction here is between an artificial intelligence as a tool and as an autonomous being. The obvious point, however, is that (1) entails (2): if (1) is possible, then someone is going to use their wonderful artificial intelligence problem-solver to build an autonomous machine. (This sort of thing was imagined several decades ago by the famed futurist, Douglas Adams, in his story of the origins of the "infinite improbability drive".)

Of course, (1) is bad enough. It'll definitely mean the end of civilization. All it would take would be for one person to use it to figure out how to turn the atmosphere into cheese dip, and we'd all be goners.

But let's imagine that such a scenario isn't a certainty (which it definitely is). Then the question is, "would an autonomous machine destroy humanity?". Well, that would depend on its motives. There's a range of possibilities regarding its motives. The two ends of the spectrum are:
  1. The computer's agenda might be perfectly transparent and clear to any human being. Its motivations might be different from ours, but they would be understandable.
  2. The computer's agenda might be totally and completely inscrutable. To our minds, the behavior of the computer might even appear senseless. Think of an ant watching a nuclear physicist. The ant would have no way of telling that the physicist's behavior followed any sort of pattern or pursued any comprehensible goal.

Personally, I think that (1) is totally outlandish. A quote from Wittgenstein is appropriate here. He said that "if a lion could talk, we couldn't understand him." Wittgenstein is saying that the way a human being exists in the world, the sort of things a human being cares about, and the way a human being perceives the world are totally different from how a lion experiences these things. Even if a lion were to learn how to speak perfect English, we couldn't understand what he'd say, and vice-versa, because his cares and way of life are totally different from our own.

If there's any truth at all to Wittgenstein's claim about the lion, then it would be even more true of an intelligent computer. At least lions and humans are both mammals, both have biological bodies, and both share many millions of years of evolution. A computer would share none of these things. An intelligent computer would be totally alien. It would be incomprehensible.

Now imagine our lion with superhuman intelligence. What it cared about would be not only totally alien to us, but also too complex for us to understand, even if it weren't alien. There is not just one, but two different layers of incomprehensibility between us and an artificial intelligence. The first is like the relationship between a lion and a human; the second is like the relationship between the dumbest individual and the most intelligent.

From our perspective, an intelligent computer set loose in the world would appear to be an immensely powerful being that is running completely amok. We would be like an ant trapped in a room with an elephant. There would be no intersection between the needs and desires of the ant and those of the elephant. And the most salient difference between the two would be the overwhelming power of the elephant. It would be a miracle if the ant were to survive. Of all the countless ways the elephant could behave, only the tiniest fraction of them would leave the ant alive. Unfortunately, we are the ants in this situation.

But of course, we don't have to worry about this eventuality because the atmosphere will be turned into cheese dip long before this could happen.

Wednesday, January 28, 2015

Does The Doctor Know How Old He Is?

Doctor Who provides a lot of philosophical conundrums. Time travel, causation, and the problem of personal identity are all obvious issues that are raised by the show. But those are not nearly as interesting as the question, "How does the Doctor know how old he is?".

Just to be clear, let's say that the Doctor's age is determined by how many days he has experienced. So for instance, suppose the Doctor is one-hundred years old, gets into the Tardis, and travels two hundred years into the future. When he arrives, he's still one-hundred years old, not three-hundred. In other words, his age is not determined by subtracting the year of his birth from the year he happens to be in. It's determined by how much time he's actually experienced, not by the time he's skipped over.

It's a running gag in the show that the Doctor sometimes claims he can't remember how old he is. Once, a previous version of himself asks him how old he is, and he responds by saying:
"I lose track. Twelve-hundred and something I think, unless I'm lying. I can't remember if I'm lying about my age. That's how old I am."
At other times, he knows his exact age. In fact, it's sometimes an important plot element that we're able to keep track of how old he is so that we can tell what he's experienced -- sometimes, it turns out that he can't remember some event we've seen happen already because the Doctor is too young and hasn't experienced it yet.

You might think it's obvious that he could know his age just as easily as we do. But I don't think that's true at all. To see why I think this, let's consider how we ordinary non-time-traveling humans know how old we are. There's an obvious answer to this question, which is obviously correct, but happens to be totally wrong. The obvious answer is that I know my age because I know the year I was born and the present year, and I can subtract. Then I subtract one year from the result if I haven't had my birthday yet in the current year. So in my case, I was born in 1972; the present year is 2015. I can subtract pretty well, so I know that 2015 minus 1972 equals forty-three. But I haven't had my birthday yet in 2015, so I subtract one year and get the right answer: I'm forty-two years old. That's how I know my age.
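The "obvious answer" above is mechanical enough to write down as a short function. The birth year comes from the text; the exact birthdate is my own placeholder, chosen to fall later in the year so that, as of late January 2015, the birthday hasn't happened yet.

```python
from datetime import date

def age(birthdate: date, today: date) -> int:
    """Compute age the 'obvious' way: subtract the years, then knock one
    off if this year's birthday hasn't happened yet."""
    years = today.year - birthdate.year
    had_birthday = (today.month, today.day) >= (birthdate.month, birthdate.day)
    return years if had_birthday else years - 1

# The post gives the birth year (1972) but not the exact date; June 15 is a
# placeholder for "some birthday that hasn't yet occurred in January 2015".
print(age(date(1972, 6, 15), date(2015, 1, 31)))  # → 42
```

Of course, as the next paragraph argues, this calculation is a justification for my belief about my age, not the way I actually come to hold it.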

But that is not how I know my age. If you ask me how old I am, I'll just tell you that I'm forty-two without going through the trouble of doing any arithmetic. I just remember how old I am. If you were to ask me to justify my claim to be forty-two, then I'd demonstrate that forty-two is the result of the calculation from the last paragraph. In other words, the calculations are part of a justification, but they're not how I know my age.

Philosophers confuse this sort of issue all the time. They frequently assume that if I've got a justification for a belief, then that justification played some role in how I formed that belief. And so they frequently explain our actions by citing facts that are totally irrelevant, but seem relevant because those facts would provide justification. For example, philosophers would typically say that I drive on the right side of the street because I live in the United States, and there's a law in the United States that requires me to drive on the right side of the street. The fact that there's such a law would certainly justify me in driving on the right side of the street, but it plays no role whatsoever in causing me to drive on that side. If the law were repealed this afternoon, I'd still drive on the right. There are really only two reasons I drive on the right side of the street. The first is habit. The second is that everyone else does, too.

So how do I know my age? Simple. I remember it. Once each year, on my birthday, I revise my belief about my age by incrementing it by one. I know when it's my birthday by checking a calendar.

Being a time traveler is like being locked in a room without a calendar. Or more accurately, it's like being bombarded with uninformative calendars that have random dates on them. You couldn't use a calendar to determine if it's your birthday. The only way to keep track of your birthday would be by keeping track of exactly how many days you experience. Like a prisoner in solitary confinement, you might scratch a line on the wall each day and count them occasionally. If you did this perfectly diligently, you could know your age. And you'd know your age the same way ordinary non-time-travelers do -- by remembering it, and occasionally adding one to that number.
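The tally-on-the-wall method amounts to a tiny bit of bookkeeping: count every day you actually experience, and ignore whatever date the outside world displays. A sketch, purely for illustration (the class, its names, and the numbers are mine, not anything from the show):

```python
# A time traveler's age, kept the prisoner's way: one tally per experienced
# day, ignoring whatever date the outside world shows. Illustrative only.

DAYS_PER_YEAR = 365  # good enough for a thought experiment

class TallyClock:
    def __init__(self, age_in_years):
        self.days = age_in_years * DAYS_PER_YEAR

    def experience_day(self):
        self.days += 1          # one scratch on the wall

    def jump(self, years):
        pass                    # skip 'years' of external time: no days
                                # experienced, so no tallies added

    @property
    def age(self):
        return self.days // DAYS_PER_YEAR

doctor = TallyClock(age_in_years=100)
doctor.jump(200)                 # two hundred years into the future...
print(doctor.age)                # → 100: still one hundred years old
for _ in range(DAYS_PER_YEAR):   # ...then live through a full year there
    doctor.experience_day()
print(doctor.age)                # → 101
```

The `jump` method doing nothing at all is the whole point: external time simply doesn't enter into the count.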

So, does the Doctor know how old he is? Personally, I don't see how it would be possible. He's way too easily distracted to keep track of all those days. In other words, when he tells someone his age, he's always lying. And that means that he's also lying about not knowing if he's lying about his age. Because he's always lying about his age, he'd have no problem remembering that he's lying. He's not only lying; he's meta-lying, too.

Tuesday, January 20, 2015

Being a Geezer in a Tech Startup

I'm forty-two years old, and I've never felt better. I am mentally and intellectually sharper than ever, and I happen to be in physically better shape, too. Whatever it feels like to be middle-aged, I'm pretty sure I don't feel like that.

I've tried hard to avoid letting my brain turn into an old person's brain. That was one reason I left my tenured position as a philosophy professor and went to work for a technology startup. I’ve seen lots of formerly intelligent, creative people turn into dusty old crackpots. My strategy for avoiding that fate is to do something totally different in a fast-moving environment, and surround myself with smart, creative people. I’ve done this for a little over a year now, and I’m having a great time. I’m learning a huge amount every day, and I really enjoy the environment I’m working in. Tech startups face a lot of challenges, and unexpected problems arise more frequently than we’d like, but it’s fun. In fact, a major draw to my current employer was that I understood that late-stage startups (i.e. those past their Series B or C funding rounds) which are growing quickly are the ones that have to adapt most rapidly. So it was the best way to get into a business that was going to be especially challenging and which would also provide the best learning opportunities. As it turns out, this was the right strategy for me.

A side effect of switching to a new career in my forties is that I’m working with people who are younger than me. Many are much younger than me — nearly twenty years younger. I enjoy this a lot, which I knew I would. But it raises the question of what an old geezer like me brings to the table. In other words, I know that I’ve gotten a lot of benefit from working in this startup environment; but is the reverse true?

I’ve been interested in reading what others have to say about geezers in technology companies. Many of these pieces are written in the context of a discussion of age discrimination. Apparently, there’s a sense that people who are not in their twenties are at a disadvantage in the job market, and that geezers like me are frequently discriminated against, or face a hostile work environment. So these essays are often an attempt to argue that we shouldn’t do that to “old” people, and that tech companies can benefit from hiring geezers — even ones that are in their (gasp!) forties.

For the record, I have not had a single incident of anything even resembling age discrimination at my workplace. Maybe I got lucky by getting hired into a supportive environment. Maybe it's a regional difference (my workplace is in Chicago, not Silicon Valley).

Anyhow, most of the essays I’ve read are disappointing, to say the least. They seem to be divided into a few categories, based on which set of geezer virtues they’re extolling: (1) Geezers are good at legacy systems, such as mainframes; (2) Geezers might not be as quick and creative as younger people, but they have greater maturity and a higher emotional IQ, and this benefits the work environment; (3) Although geezers might not be up-to-date on all the most current technologies, they do have a broader range of experience to draw upon.

Honestly, I think these are pretty awful reasons to hire a geezer into your technology company. The first reason — that geezers know mainframes and other legacy systems — is especially ridiculous. Apparently, when we were all panicked about the Y2K bug, some businesses had to dust off their geezers, install some new tennis balls onto the bottom of their walkers, and send them into moldy basements to patch up some COBOL, or screw in some vacuum tubes, or something like that. If you’re a geezer, and this is your job, you’d better start catching up. It’s the 21st century, and if you’re in charge of COBOL or punchcard machines, you’re in trouble — your job is going away. And if you’re a non-geezer, there’s also something called the “internet”, which contains all sorts of information you can use to acquire new skills, like COBOL programming, if that turns out to be absolutely necessary (which it won't).

To be fair, at my office there have been a few times when a question has come up about some obscure feature of UNIX, and I knew the answers because those features weren’t obscure when I started programming. But those questions all had two things in common: they weren’t important, and they could have been answered with a little research online. If I’d been hired to be the go-to guy for questions like that, they’d be paying me too much.

Let’s think about the second reason — that although geezers aren’t as quick or creative as younger people, they are more mature. First of all, if you tell me that I’m not as quick and creative as someone in their twenties, I’ve got one thing to say to you: “Speak for yourself, pal!”. Actually, what I’d say would include a lot of speculations about your parentage, too. As someone who spent more than a decade teaching college students, I can tell you one thing — people aren’t quick and creative merely because of their age. There are lots and lots of young people whose brains have prematurely calcified, and there are lots and lots of geezers like me who are every bit as capable of thinking “outside the box” as you could want. Honestly, I’m so far outside the box, I can’t even see the box anymore. But I know that if you found the box and opened it up, you’d discover a lot of college students sitting inside it.

Being creative isn’t about age — it’s about cultivating the right mental habits. It’s about questioning yourself, and becoming sensitive to the presence of an invisible status quo. It’s about understanding why you do things the way you do, and being flexible enough to change when necessary. It’s also about not believing everything that you hear, for example, that geezers are less creative than younger people. Being creative is certainly not about having some ineffable quality called “creativity” that slowly recedes into the mists of time as you age.

I also think my new colleagues would tell you quite definitively that one thing I don't bring to the table is greater maturity. Let’s just say that if I were ever to hear someone say, “It’s really good having Zac here because he’s so mature”, I’d be surprised.

And finally, is it good to keep a geezer around the office because of their breadth of experience, and does that offset the fact that they’re not going to be quite up-to-date on current technology? Let’s just bring a little common sense to the question. Whether you’re up-to-date on anything doesn’t depend on how old you are, it depends on how you spend your time. It takes work to keep up with the fast-changing world of technology, and you can fall behind in no time flat. In my case, it turns out that in a lot of respects, I happened not to be as current as some of my new colleagues. But that wasn’t because of my geezer status. It was because I was changing careers, and had been busy keeping up with changes that impacted my previous career. If someone were twenty-eight years old and had spent the last five years as a circus clown, they’d have faced exactly the same challenge.

Of course, you might think, “but a geezer will have more to catch up on. If they’ve been out of the technology game for twenty years, that’s a lot of stuff to learn!”. Actually, this is not true. For example, in my case, I missed a lot of stuff that happened around the mid 1990s. But who cares? That stuff is outdated now. There’s a moving window of a few years of knowledge that you need to have. But it’s a moving window. I don’t need to learn best practices from the 1990s any more than one of my younger colleagues needs to learn how to transfer files over a VAX terminal with the Kermit protocol.

Finally, we might ask, “why is age such a big deal in the tech world, anyway?”. This is a very curious phenomenon. Personally, I think it has to do with the strange way we judge what’s common and what’s unusual. The typical example is airplane crashes. Fortunately, airplane crashes are very rare. So when they do happen, it’s big news, and everyone hears about them. As a result of the crashes getting so much attention, people get the impression that they’re much more common than they are. This is ironic — the fact that they’re so rare indirectly causes us to believe that they’re very common. The same is true of technology entrepreneurs. Most successful entrepreneurs are not in their twenties. So when Mark Zuckerberg comes around, he gets a lot of attention. And then this unusual person gets established as representing what’s normal. There are a small number of people like Zuckerberg out there, but what’s interesting about them is precisely that they’re rare. Yet we end up associating all the qualities of a smart, creative person like Zuckerberg with young people in general. That line of reasoning is crazy. All of us — and that includes my fellow geezers — need to get over it.

Wednesday, May 28, 2014

The Prisoner: A Study In Everyday Mind Games

I have a weakness for old science fiction stories and television shows. I agree with Tom Paris (from Star Trek: Voyager), who enjoys old Flash Gordon-style entertainment because it shows us what the future used to be like. Now that we live in the future, it's interesting to see what they got right when they speculated about the world to come.

Recently, I've been enjoying old episodes of the classic British television series "The Prisoner", starring Patrick McGoohan, who was also largely responsible for developing the premise of the show. It aired from 1967 to 1968. Although it wasn't primarily science fiction, it did have a lot of science fiction elements. And what's interesting about this particular form of science fiction is that it was in the service of a social commentary about the evils of society and the downward trajectory that McGoohan thought society was on.

For those who aren't familiar with The Prisoner, here's a brief explanation of the plot. The story centers on an unnamed man who resigns his clandestine position with the British Government. In the opening credits, we see him angrily slamming his fist on a table in front of some unnamed British official and handing in his resignation letter. He drives back to his home and starts packing a suitcase. But as he's finishing packing, he is drugged and loses consciousness. When he wakes up, he's in an idyllic little town in an apartment that's an exact duplicate of his own home.

The town he finds himself in is simply called "The Village". It's a beautiful, colorful resort-style town with pleasing architecture in a warm and sunny climate. But of course, life in The Village is anything but idyllic for our unnamed protagonist. He soon learns that like everyone else, he will be referred to only by an assigned number -- in his case, he is known only as "Number Six". The Village is run by someone known as "Number Two", who soon informs him that they want to know why he resigned. Number Six refuses to tell him, and he demands to see the person who is truly in charge -- Number One. This sets up the plot for the entire series. Number Two wants Number Six to tell him why he resigned; Number Six wants to escape The Village, or at least get Number Two to introduce him to Number One. A series of labyrinthine mind games and manipulations ensues, always ending with Number Six stubbornly clinging to his individuality and refusing to give up any information. Number Two is always unsuccessful, and is frequently replaced by a different person in the role of Number Two.
[Image caption: In the future, we will pay for things with "credit cards".]

From the perspective of someone in the year 2014, it's easy to not even notice that the show is science fiction. There are such marvels as telephones without cords and automatic surveillance cameras. People have things called "credit cards", which they use to pay for goods and services. And there are lots of computers. Even physicians have computers in their offices. In fact, public telephones in The Village have computers attached to them.
[Image caption: In the future, we will all have gigantic cordless telephones.]

The Prisoner is a dystopian parable about how society seeks to control the individual. But The Prisoner is far more novel than other dystopian works. In most other anti-utopian stories, individuals are forced to conform by being brainwashed and tortured (1984), threatened with violence and kept at the edge of starvation (Animal Farm), genetically engineered (Brave New World), drugged (This Perfect Day, THX-1138), lobotomized (We), or threatened with nuclear war (Colossus). Futuristic technology is usually (though not always) used in support of clever ways of torturing, brainwashing, and monitoring people.

If The Prisoner had gone the traditional route, it would be totally unmemorable. To be sure, there are plenty of episodes involving attempts to brainwash Number Six, and plenty of threats that they'll use violence if he doesn't conform. But the overwhelming majority of the time, Number Two uses much "softer" ways of manipulating Number Six. Those "soft" methods of mind control are what's really novel about The Prisoner, and why the show has aged relatively well.

The Village is nothing like the Ministry of Love in which Winston Smith finds himself in 1984. The Ministry of Love is a dark, concrete structure with a labyrinthine underground complex in which prisoners are brutally tortured. The Village, on the other hand, is brightly-colored, idyllic, and it contains a charming cafe, well-appointed private residences, a very nice beach, and plenty of luxuries. It is a very happy and comfortable prison, which is probably why the residents of The Village tend to wear clothes that suggest striped prisoners' uniforms, but are brightly colored and very cheerful.
[Image caption: Prisoners' uniforms are bright and cheerful in The Village.]

The strategy to "break" Number Six is simple. Make his life as pleasant and easy as possible, manipulate the environment so that it's just easier to conform, and most importantly, surround him with happy, satisfied people who've long since given in to Number Two's demands, and have forgotten that they ever compromised their values.

Number Two believes that if Number Six is surrounded by people who have internalized a particular set of values, then it will make him feel like he's insane if he doesn't agree. This is true regardless of how self-contradictory, inconsistent, immoral, or just plain stupid those values are.

In several episodes, Number Two takes steps to make Number Six question his own sanity. The best one is "The Schizoid Man", in which Number Six is drugged, has a mole removed from his wrist, is behaviorally conditioned to become left-handed, and is given a mustache. Then he's treated as if he were someone else who has been brought in to impersonate Number Six. Meanwhile, someone else actually is impersonating Number Six, and that person is now more like Number Six than Number Six himself.

The whole point is to make Number Six believe that he's going insane. What's never explained in the show is why this sort of mind game would cause him to reveal his secrets to Number Two. But the strategy is actually pretty clear if you keep the larger context in mind. When you are actively resisting adopting the values of the people around you, it's very important to remain confident in your own beliefs and values. In short, you have to believe that you are more sane than the people around you. If you question your own sanity, then you have no psychological defense against other peoples' values infiltrating your mind. So if Number Six does believe that he's lost his mind, there will be no effective way for him to resist adopting the values and beliefs of everyone around him. Once that's happened, he'll happily comply with Number Two's demands because everyone else in The Village is so compliant. Being kept in isolation will break almost anyone. But being kept in constant contact with crazy people will also be effective.

Another quite clever trick in The Village is to make the residents believe that they're free to do as they wish. Number Two, in several episodes, likes to emphasize that the residents have a "democratically elected" body to govern them. And that's perfectly true. But of course, all of the people who are elected reflect the people who elect them, and they're insane. In fact, for the residents of The Village, their entire world consists of the small island they inhabit. In the first episode, Number Six asks for a map and is given a "local map" that only shows The Village. When he asks for a map showing a larger area, the shopkeeper's response is informative. He doesn't say that they're not allowed to have such maps. Instead, his response is that there's "no demand" for them. The people don't want a map that shows anything outside The Village, so there's no need to make those maps illegal.

Within the confines of The Village, there are very few rules. In fact, rules that we're used to obeying don't apply. Sometimes, people are actively encouraged to break traditional rules, as when we are shown a sign that says "Walk On The Grass". This clearly strikes Number Six as odd, because he's only seen signs that say "Don't Walk On The Grass". So there's a big show of how free the people in The Village are, so long as they don't try to leave.

Number Six never breaks. He never gives one inch to the demands of Number Two, even when there's every reason to believe that it's totally futile for him to keep fighting. Why? What is it that makes Number Six so stubbornly persistent and seemingly immune to the subtle psychological tricks that are constantly being played on him?

The reason for Number Six's persistence, in a way, is easy to see -- and it has to do with the premise of the series. Number Six is in The Village because he's resigned an important position (presumably as some kind of spy) with the British government. Number Two sometimes expresses puzzlement over this resignation by citing the fact that Number Six had a stellar career and was very successful and well-respected. We never learn anything about his reason for resigning except that it was "a matter of principle" (Number Two says this in the first episode). So we learn a lot about Number Six's psychology by keeping these simple facts in mind. Presumably, he was indeed very successful. He was surrounded by people who shared a common set of values, and he was amply rewarded for his work. He seems well-off financially, he's healthy and comfortable, and so whoever he was working for had taken good care of him. So what kind of person resigns from such a lofty position of privilege? Considering all the things we don't learn about Number Six, we learn the most important fact about him in the opening credits of the show: he is a very stubbornly independent person who is not about to be lulled into accepting someone else's values merely because he's surrounded by people with those values and is well-rewarded and comfortable. In short, the fact that he resigned his position demonstrates that he's exactly the kind of person who can't be influenced by the psychological mind games of The Village.

And that's the clue to interpreting this odd show. Number Six's life before being taken to The Village is just like his life after being taken to The Village, and his response to both environments is the same. He rejects any attempt to have someone else push their values onto him. And of course, he's not unique in being subjected to these psychological influences in his "real world" job. We all are. Think, for example, about the methods that very manipulative people use -- they could be narcissists, sociopaths, abusive partners, tyrannical bosses, and so on. They isolate their target victims, prevent communication with others, and try to keep them from interacting with people who don't share the "right" set of values. And then, within the confines of this psychological prison, the target is given a lot of freedom and comfort. Manipulative people like to surround themselves with people who can be easily influenced, and this serves two totally distinct purposes. First, it makes the manipulator feel powerful. But second, it helps to ensure that the intended target of their manipulation will doubt their sanity if they start thinking too independently.

For example, I happen to know a very wealthy narcissist who is married to a highly intelligent and well-educated woman. She doesn't need to work, and has an extremely comfortable -- indeed, luxurious -- lifestyle. She can do whatever she likes, whenever she likes, on two conditions. First, she must be home when he gets home from work. Second, he has veto power over who her friends are. At the slightest hint that any of her friends might be trouble, he tells her that she's not to speak to them again; and he's exercised this power several times. The quickest way to be excommunicated from this pair is to question the husband's values or intentions. Interestingly, you are permitted to disagree about a wide range of opinions -- you can hold different political or religious views with no conflict at all. And yet, the wife is simply not allowed any contact with anyone who would deviate from the husband's professed values, or anyone who would question his good will.

Of course, this isn't uncommon at all. Employers, schools, social clubs, cliques, families, or virtually any other group can be its own Village. And there are Villages all around us. The value of The Prisoner is that it tries to reveal these "hidden" Villages by drawing attention to an obvious one.

Friday, May 2, 2014

Why I Believe in Artificial Intelligence

There have been a lot of articles lately about the resurgence of artificial intelligence. Some of it was stimulated by IBM's "Watson" computer, which had a spectacular win on Jeopardy. Another large chunk of this recent attention is probably the result of the large number of tech startups that have bet their existence on being able to design systems that might count as "artificial intelligence". For full disclosure, I happen to work for such a startup as a software engineer; so I've definitely got a dog in this fight.

Before joining this startup, I was a tenured philosophy professor. Before that, I worked for Argonne National Laboratory's Mathematics and Computer Science Division, where I worked on automated theorem-proving. And before that, I just liked to program as a hobby, starting with a Commodore PET computer when I was in the fourth grade. For those who are curious, I've put a picture of that kind of computer here.
The Commodore PET had 2K of memory -- that's one-eighth of one-millionth the memory of my cell phone. But learning how to program in BASIC was a powerful experience for me. I vividly remember one moment when I successfully ran my very first non-trivial program. It searched for prime numbers, and it was capable of finding all the prime numbers up to 100 in less than half an hour.
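For the curious, here's roughly what that first program did, sketched in modern Python rather than PET BASIC. (The trial-division approach is my reconstruction of what a first program would look like; a maximally naive method like this is exactly the kind of thing that took a 1 MHz machine half an hour.)

```python
# Naive trial-division prime finder, in the spirit of a first BASIC program.
# Checking every smaller number as a potential divisor is wildly inefficient,
# which is why the original took so long to reach 100 on a Commodore PET.

def primes_up_to(limit):
    primes = []
    for n in range(2, limit + 1):
        # Test divisibility by every number from 2 to n-1 -- no shortcuts.
        if all(n % d != 0 for d in range(2, n)):
            primes.append(n)
    return primes

print(primes_up_to(100))
```

On a modern machine this runs in a blink; the algorithm hasn't changed, only the hardware.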

It was an epiphany, and I can say without any exaggeration whatsoever that it changed my life forever. I would watch the output slowly build up on the screen, and try to guess which number came next. It struck me that despite the fact that I had written that program, I couldn't predict what it would do -- even though it worked exactly the way I had intended it, and there was nothing random about it. The program could do something that I couldn't do, even though I had designed it. Amazing!

I was hooked. Eventually, I graduated to a TRS-80 Color Computer, which had a whopping 16K of memory and was less than a quarter the price of a Commodore PET. I gradually worked my way up to an Apple IIc, at which time I came across another piece of technology -- a game called "Zork" by a little-known software company called "Infocom". Zork was a text adventure game, a genre that basically doesn't exist anymore. The game would present you with a short description of a room, some objects in the room, and perhaps other pieces of information. The description was entirely written, with no graphics of any kind. The opening of Zork gives me chills to this day. Here it is, in all its glory:
West of House
You are standing in an open field west of a white house, with a boarded front door. There is a small mailbox here.
The amazing thing about this game was that you could type in a wide variety of full English sentences to tell it what you wanted to do next. You could type: "Open the mailbox." or "Open mailbox", or "Open the small mailbox", or "Open up the small mailbox". Then the game would respond appropriately, perhaps by telling you whether there was anything in the mailbox. The wonders never ceased. For instance, if you were to type "Open up the mailbox" and "climb up the ladder", you were using the same word "up" both times, but with totally different meanings. Astoundingly, the game understood the player's intentions and would never make a mistake in those sorts of cases.
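To give a flavor of what that normalization involves, here's a toy sketch. This is my own illustration, not Infocom's actual parser (their ZIL-based parser was vastly more sophisticated): it just strips noise words and reduces each command to a verb and an object, so that all four phrasings of the mailbox command come out identical.

```python
# Toy command parser: drop noise words so that "Open the small mailbox"
# and "open up mailbox" reduce to the same (verb, object) pair.
# A hypothetical illustration only -- far simpler than Infocom's real parser.

NOISE = {"the", "a", "an", "up", "small"}  # words safe to ignore in this toy

def parse(command):
    words = [w for w in command.lower().strip(".").split() if w not in NOISE]
    verb, obj = words[0], " ".join(words[1:])
    return (verb, obj)

for cmd in ["Open the mailbox.", "Open mailbox",
            "Open the small mailbox", "Open up the small mailbox"]:
    print(parse(cmd))  # all four reduce to ('open', 'mailbox')
```

Note that even this crude trick handles the "up" ambiguity: "open up the mailbox" and "climb up the ladder" both survive having "up" removed, because "up" carries no independent meaning in either command. The hard cases, of course, are the ones where it does.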

To a modern gamer, the natural question is "what was the big deal?". Well, I'll tell you what the big deal was. Instead of having to learn some obscure language for entering commands in a way the computer could understand, the computer had been programmed to understand English speakers. Despite the fact that this was just a game, it was a major accomplishment. You could use the computer by interacting with it naturally, and without having to understand anything about how the computer worked. Amazing!

It seemed like the computer "understood" me. It had always been a painstaking effort to program the computer to do anything; and now (at least within the confines of the game) it felt like you could just "talk" to the computer and be understood. The leap between programming and playing Zork was greater than any other leap I have seen in computers since then. And of course, if this could be part of a game, nothing stood in the way of allowing the computer to "understand" the user in other applications. (Interestingly, Infocom saw this opportunity and tried to create a database system that used their technology. The product failed, and the losses ultimately sank the company.)

I think I was in sixth grade by this point. My prime number program could now give me all the primes from two to a few thousand in about the time the original Commodore PET program could go from two to one-hundred. Naturally, I wondered what this new level of computational power could enable. So I did as much research as I could on Infocom, and how they managed to create such an incredible game. I soon found out that the program that interprets the player's sentences was called a "parser". By the way, my research consisted of physically traveling to a place called a "library", and reading through huge stacks of things called "magazines". In particular, there was a magazine called "Byte", which was very helpful and always fascinating.

It was probably while reading Byte magazine that I first came across the term "artificial intelligence". Like a lot of terms of art, "artificial intelligence" sounds like it clearly describes a specific thing, but doesn't. A casual reader would say, "humans have intelligence. So artificial intelligence is just getting something artificial like a computer to have whatever-it-is that constitutes human intelligence." But of course, it's not really clear at all. This is clearly reflected in the writings of people who were skeptical about the prospects of artificial intelligence.

It's an interesting fact that when research or technology achieves something, we tend to immediately forget about all the people who had said that it was impossible. There were very smart, well-educated people who had devised clever arguments showing that it would be impossible for a computer to ever beat a competent amateur at chess. Computers would never be able to respond appropriately to human speech. We would never be able to use computers to help us understand anything about the human genome. Computers would never be able to prove mathematical theorems, or discern the grammar of English sentences.

All of these are now commonplace, and we don't think of them as "artificial intelligence" anymore. But we used to. In fact, the argument against artificial intelligence usually followed this form:

  1. Human intelligence enables us to do some really difficult, creative thing X.
  2. But computers will never be able to do even a much simpler thing Y.
  3. Therefore, computers will certainly never be able to do X.
  4. And so it follows that computers will never be intelligent.

For example, a skeptic would say, "Human intelligence enables us to write poetry; but computers will never be able to even play a decent game of chess. So how on Earth are computers supposed to ever develop intelligence?".

It was a powerful argument, for a while at least. Each new task that was supposed to be impossible for a computer would eventually be programmed by some group of clever programmers. Then, the goalposts would move. Obviously, the task turned out not to require intelligence after all, and so it's hardly a success for artificial intelligence if a computer could be programmed to do it.

For example, the standard view now is that chess doesn't require intelligence. It's really just a very complicated search problem, where the computer compares the outcomes of many millions of possible moves and selects the one with the highest score. That's not intelligence; it's just a very large search. And of course, computers excel at performing large searches. Ditto for parsing English sentences, deciphering speech, recognizing faces, analyzing the genome, and proving mathematical theorems. Success in programming computers to solve all these problems doesn't provide any evidence that artificial intelligence is possible, or so the skeptics say.
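To make the "very large search" point concrete, here's a bare-bones minimax sketch applied to a trivial invented game rather than chess (the game and its rules are my illustration; real chess engines add pruning, evaluation heuristics, and opening books on top of this same core idea):

```python
# Bare-bones minimax: explore every line of play, score the leaf positions,
# and pick the move with the best guaranteed outcome. Chess engines do this
# same thing over millions of positions, plus pruning and heuristics.

def minimax(state, depth, maximizing, moves_fn, score_fn):
    moves = moves_fn(state)
    if depth == 0 or not moves:
        return score_fn(state)
    results = [minimax(m, depth - 1, not maximizing, moves_fn, score_fn)
               for m in moves]
    return max(results) if maximizing else min(results)

# Toy game: the state is a number; a move adds 1 or 2; play stops at 6 or more.
# The maximizing player wants a high final number, the minimizer a low one.
moves = lambda s: [s + 1, s + 2] if s < 6 else []
score = lambda s: s
print(minimax(0, 4, True, moves, score))  # -> 6
```

There's nothing mysterious in there -- no insight, no intuition, just exhaustive comparison of outcomes. Which is exactly the skeptic's point, and exactly why the goalposts moved once chess fell.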

But that's just a particularly blatant example of moving the goalposts. Ironically, skepticism about artificial intelligence really relies upon a failure of the human mind, namely our inability to introspect on our own thought processes.

When I play a game of chess, work on a difficult math problem, listen to someone speaking, or recognize a friend's face, I can't really explain how I do it. Obviously, there's something going on when I perform these tasks, but I certainly don't know what it is. Maybe I can give a post-hoc explanation, but we all know that those sorts of explanations are frequently wrong.

And so skeptics of artificial intelligence have an easy, but highly flawed argument. They take our inability to explain what we're doing as evidence that there's something mysterious going on in our minds that couldn't possibly be reduced to a computational algorithm. According to this line of reasoning, because we're not aware of any computational search occurring when we (e.g.) recognize a friend's face, then there is no computational search going on at all. But two seconds' worth of reflection tells us that this is a very, very weak argument. We are just awful at introspection. Often, I can't even explain why I'm angry. How am I supposed to explain how I recognize faces or understand spoken sentences?

When someone says, "what's going on in the human mind when we do X is not computational", I hear that as, "I have no idea what's going on in the human mind when we do X." It's an argument from ignorance, pure and simple.

A more clever skeptical argument, but one that's just as flawed, says that although we can program a computer to perform lots of individual tasks that are "intelligent" in some sense, we'll never be able to program a computer to solve an arbitrary set of problems, or generalize the ability to reason intelligently to any domain. In other words, we'll never have "generalized artificial intelligence".

This is sort of crazy for a lot of reasons. The first is that it's pure hubris to think that humans have the ability to generalize their problem-solving skills to arbitrary domains. It only seems like we can do this because we've very cleverly constructed an environment that suits our capabilities. It would be miraculous to discover that the human mind can solve literally any possible problem, no matter the domain, and no matter how complex. There could easily be important classes of problems that the human mind is simply incapable of understanding. So if we demand more than this of our artificial intelligences, that's clearly unfair.

The flip side of this argument is that it's arbitrary to chop up "problem domains" in any particular way in the first place. For example, "playing chess" is supposed to be a single task -- the skills required to play chess don't generalize to other applications. But why is "playing chess" considered one thing? The skills required to play the opening, the middle game, and the end-game are quite different (and indeed, chess programs have different heuristics for those stages of the game). Why isn't chess considered a huge set of different problems, like "knowing when to move your pawn", "selecting an opening sequence", "checkmating with a knight and bishop", and so on? The answer is that calling chess "one problem" is arbitrary, and so there's no sense in claiming that we can't generalize a set of reasoning skills from one domain to another.

And the reason why research into artificial intelligence is concerned mainly with particular problems is because we're not good at it yet. We don't have a unified theory of intelligence, and so the work we're doing is the best we can do. We don't criticize physicists by saying, "You're wasting your time looking for the Higgs boson because working on characterizing one particle isn't the same thing as constructing a unified theory of every force." But yet, we do criticize artificial intelligence research in exactly this way. And this research has only been around for a fraction of the time that physics has existed as a scientific research program.

To think that we won't be able to get to artificial intelligence because we've had so much success solving specific problems is perverse. Within my lifetime, I fully expect that just as the best chess players are computers, so too, the very best poets, writers, and musicians will also be computers. In fact, I fully expect computers to outperform humans at every intellectual and creative task within my lifetime. And I look forward to it.

Wednesday, April 23, 2014

Clash of Clans From A Way Too Analytic Perspective

As a very obsessive person, I have to be quite careful about what games I try. When I was in graduate school, I had to quit playing speed chess cold turkey because it started to interfere with my work. I made a terrible mistake recently by trying "Clash of Clans", just to see what it was all about. Big mistake.

In my academic life, I was a philosopher specializing in game theory. I also had an interest in behavioral game theory and experimental economics. What these fields have in common is that they study strategy. Game theory (typically) does so from an idealized, mathematical perspective in which we assume that all the players are completely rational and self-interested at all times. Behavioral game theory, on the other hand, tries to study people as they are -- with all their stupidity and cognitive biases.

I think that one of the reasons Clash of Clans is so successful is that it combines both elements -- you're playing against other people online, but also trying to manage your resources in the most rational way possible. As a tower defense game, you have to simultaneously think about the other players' strategies as well as the computer's AI.

Because of this, as well as my own academic background, I can't help but be fascinated by players' behavior, and how they seem to think about strategy. So, for what it's worth, I offer some tips to playing Clash of Clans from someone who always looks at this sort of thing through the lens of game theory and behavioral game theory.

Tip 1: Don't anthropomorphize the AI

In Clash of Clans, whenever you attack, you drop your troops in whatever location you choose, and then you have no more control over them. They will tend to run from one target to the next closest target, with the exception that some kinds of troops are biased toward certain targets. Wall-breakers always attack walls; giants always attack defenses; goblins always target gold and elixir.

If you think your Roomba is sad or frustrated by its inability to get a particular clump of dirt out of your carpet, you're anthropomorphizing. Clearly, people tend to do this with their troops. If you were defending against real humans, you'd put out your bombs and spring traps along a perimeter around your walls, perhaps. Or you'd put them around a particularly valuable target, like your town hall.

But you shouldn't think about it this way -- don't think that your troops are going to storm your walls the way a human would. Instead, put your bombs and traps along the paths that the computer AI will travel. In other words, put them between the targets that the enemy troops are going to attack. Spread out your gold mines a tiny bit, for example, and put bombs between them, even though this looks like it doesn't make any sense; enemy goblins will definitely blow themselves to bits. Put bombs between your cannons if you want to kill enemy giants -- especially between the outermost cannon and the one closest to it (because the giants will likely travel in that path if they successfully destroy the first cannon).

When I get attacked, every single bomb and trap is set off, without exception. There's no reasonable way for an attacker to avoid it because the computer AI doesn't behave the way a human would.

Tip 2: You're probably about average, so act that way

People always think they're exceptional in some way. The vast majority of people think they're better than average in intelligence, discernment, or understanding others. Obviously, most of them are wrong.

You'll probably get attacked an average number of times, and you'll lose an average amount of gold and elixir each time. When you attack, you'll win about an average percentage of the time. Most of us won't break out of this pattern because we don't have any special advantage or strategic insight.

The amount of gold and elixir that's won in battles will tend to balance out the amount that's lost. So an average person will neither gain nor lose gold and elixir, relative to the average player. But that doesn't mean you can't come out ahead, because attacking and defending are different: when you attack, you get stuff, and when you defend, you lose stuff.

That's obvious, of course. But the one thing you can control in this equation is how often you attack relative to how often you get attacked. If someone has about an average success rate for attacking, but they attack more frequently than average, then they'll come out ahead. You must attack more frequently than the average player; and if you're about average, you should attack as frequently as possible.
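The arithmetic behind this can be sketched in a few lines. The loot figures here are made-up placeholders, not actual game values -- the point is only that, at average per-battle outcomes, attack frequency is the lever that determines whether you come out ahead:

```python
# Hypothetical per-battle averages (not actual Clash of Clans numbers).
AVG_GAIN_PER_ATTACK = 80_000    # loot gained when you attack
AVG_LOSS_PER_DEFENSE = 80_000   # loot lost when you're attacked

def net_loot(attacks, times_attacked):
    """Net resources over some period, given how often you attack
    and how often you get attacked."""
    return attacks * AVG_GAIN_PER_ATTACK - times_attacked * AVG_LOSS_PER_DEFENSE

# An average player attacks as often as they're attacked: break-even.
print(net_loot(10, 10))   # 0
# Attack twice as often at the same average outcomes: you come out ahead.
print(net_loot(20, 10))   # 800000
```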

This means that you shouldn't be lulled to sleep by receiving a shield. If you want to maximize your rate of resource acquisition, you should attack as much as possible, regardless of whether this will blow away your shield. Never keep a shield active. Attack, attack, attack.

Tip 3: When farming, it's all about the ratios

I've spent too long looking at strategy guides for Clash of Clans. One theme that keeps appearing is that people won't attack unless they can potentially get (for example) 100,000 in gold or elixir. This is just plain dumb.

People prefer spectacular events over more mundane events, and that applies to Clash of Clans. When you attack, you're spending elixir to gain gold and elixir. When deciding whether to attack someone, your calculation is the ratio of how much gold and elixir you could get to the amount of elixir you're spending in the attack. That's it.
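To see why the ratio, not the raw loot, is what matters, compare two hypothetical raids (the loot and troop-cost figures below are invented for illustration, not real game values):

```python
def attack_ratio(loot_available, troop_cost):
    """Return-per-elixir ratio: the loot you stand to take
    divided by the elixir you spend on the attacking army."""
    return loot_available / troop_cost

# A single cheap goblin grabbing a small unguarded pile...
small_raid = attack_ratio(3_000, 25)        # 120x return
# ...versus a full expensive army taking a big base.
big_raid = attack_ratio(300_000, 150_000)   # 2x return

print(small_raid > big_raid)  # True
```

The small raid looks unimpressive in absolute terms, but per unit of elixir spent it's vastly more efficient -- and nothing stops you from launching another attack right after it.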

Think about it this way. If someone offered you a deal where they'd take a hundred dollars from you and immediately give you two hundred dollars, you should take the deal (assuming you know they're honest, etc). It would be silly to say, "Nah, I'm waiting for an opportunity to turn one thousand dollars into two thousand dollars". Of course, if you could take only one deal, and you thought that someone was going to offer you a better deal later on, then you should turn down the first offer. But Clash of Clans isn't like that. You can take both deals -- you can launch a very small attack and then turn around and launch a big attack right away.

My favorite wins are when I find someone who's put their gold mines very far away from their defenses. I drop one goblin next to them and wait patiently for the little guy to rake in a hundred times what I spent on him. Sure, it's only a few thousand in gold, but because I only spent one goblin in the attack, I can turn around and attack someone else right away. The rate of return on attacks like this is really, really high. Of course, most people are looking only for big wins, so if you keep your gold and elixir reserves low, you're usually safe from attack.

Of course, if you're just trying to have fun, then there's no reason to pay attention to this analytic stuff. But games aren't about fun. They're about winning.