Tuesday, November 10, 2015

Open Letter to the University of Missouri Board of Regents

Board of Regents, University of Missouri-Columbia

I am a former Associate Professor in the Department of Philosophy at the University of Missouri-Columbia, and I am writing this letter because you now have the important responsibility of selecting the next President of the statewide university system. You have been given the opportunity to prevent a repeat of recent events. But this will require an honest appraisal of how these crises were allowed to fester over the past several years, beginning with the decision to hire Timothy Wolfe as president of the university system.

Timothy Wolfe was selected to lead the state's university system in large part because of his experience in business. He was praised as an outsider who could bring sound business and managerial skills to a university system that was caught up in the nation's economic problems. As an outsider with a business background, he was expected to bring about efficiencies that would benefit everyone in the university system. Furthermore, the system was believed to have a marketing problem, which Wolfe's experience could be useful in addressing. Indeed, his business background was thought to be so valuable that the fact that he lacked any university experience was not considered to be a significant drawback. By all accounts, he possessed exactly the skills he was touted as having. He is an intelligent, hard-working businessman who set out to bring about those efficiencies.

Many people seem to have been surprised by Wolfe's inaction and tone-deafness to recent events on campus. The student protesters were rightly appalled by how oblivious Wolfe seemed to be when he was peacefully yet forcefully confronted over systemic racial problems on campus. Yet, I don't believe that we have any right to be surprised by his and his administration's ineptitude because his behavior is exactly what we should expect from a man whose entire experience is in the corporate world.

At the heart of Wolfe's failures is his lack of understanding of university culture and the values held by university faculty and students. The former president simply did not understand concepts of shared governance, the broader role of education in improving the lives of Missouri's citizens, or the ethical and intellectual standards that are necessary to create a campus in which the university's broader mission can be achieved. His leadership style was dictatorial, and his actions seemed to be directed exclusively toward achieving the economic efficiencies he was hired to bring about. His recent failures to understand the broader consequences of his decisions were foreshadowed in his earlier decision to close the University of Missouri Press, not to mention his administration's cutting of health benefits to graduate students – both of which had to be reversed following the predictable reaction of those affected. Clearly, Wolfe did not understand the broader impact of his decisions. And what is more important, he did not understand that those decisions were at odds with the core values of a university. The history of Wolfe's administration should have allowed us to see in advance that he was ill-equipped to handle recent events.

There is no effective leadership without a solid ethical foundation. No person can have that foundation without also having a deep understanding of the culture and values of the organization he or she is expected to lead. Universities are unique in their culture and values, which is why an effective system-wide president must have a broad range of experience within academic culture.

Thus, I urge you to replace Timothy Wolfe with a person who has a history in higher education, who can demonstrate a sincere and well-informed appreciation for the unique culture and values of the university.


Zachary Ernst, PhD

Saturday, August 8, 2015

How I helped train college students to fail at job interviews

For a little over a decade, I was a university professor. I quit my job to go work for a tech startup in Chicago, which I enjoy a great deal. This puts me in an unusual position because I've now taken part in both ends of the college pipeline. First, I spent a lot of time teaching students at all levels, ranging from freshmen in their very first college course all the way to PhD candidates defending their dissertations. More recently, I've taken part in a lot of interviews of young men and women who are straight out of college. So I've been able to witness firsthand how college prepares students for job interviews in the real world.

So how well does college prepare students for the job market? In a word: "terribly". So I'm offering a few brief, actionable lessons for recent college graduates. Of course, my experience is in tech, so your mileage may vary.

Lesson 1: Interviews are not midterm exams
In an interview for a technical position, it's standard to ask some technical questions that have right and wrong answers. For example, tech companies will typically give some sort of problem to the people they're interviewing, and ask them to reason about it. Job candidates usually have a pretty good idea whether their answers are correct. What's distressing is that many job candidates (especially recent college graduates) approach these questions as if they're taking a midterm exam, and their "answer" is the only thing that counts.

No competent interviewer is merely interested in whether a job candidate can solve a specific problem. We're primarily interested in other skills, including at least these:
  1. Ability to communicate. Can you explain your thought processes as you go about solving the problem? Can you justify your approach? Can you help us understand why you're thinking the way you are?
  2. Willingness to ask questions. Do you take a step back and consider whether you really understand the question? Do you follow up with questions of clarification when necessary? Do you take a moment to check with us that you're understanding the question correctly?
  3. Openness to suggestions. Are you someone who can work effectively on a team? When someone suggests a change to your approach, are you willing to seriously consider it?
  4. Patience. Do you keep a cool head if the right approach to a problem doesn't immediately suggest itself?
  5. Creativity. Can you come up with an approach that's not what you'd find in a textbook? In the best case, can you actually show the interviewers something they haven't seen before?
Recent college graduates are often caught totally off-guard if they think they've answered our technical questions correctly, but aren't offered the job. Their response is exactly as if they'd gotten every question right on a midterm exam, but been given a failing grade. In the latter case, their frustration would be justified. But job interviews are not midterm exams.

To put the point simply, if you show that you're technically competent for the job, that's good. That competence gets your foot in the door. But in the end, it's only perhaps 10% of what you need to demonstrate. We're not hiring a machine to solve problems; we're hiring a colleague to work with in a collaborative environment.

Lesson 2: Know thyself
If you just graduated from college, you are almost certainly not an expert in your field. You are almost certainly not as good at your craft as someone who's been working professionally for years. When I was interviewing for jobs, I was frequently asked to rate my own competence at something. This is a fairly common interview question. If you claim to be an expert in something, you'd better be able to show that you're actually an expert. There's a big difference between someone who's competent and has a realistic view of their own skill level, and someone with the same level of competence who has an inflated self-image.

Similarly, if you're a recent college graduate, there are plenty of things you haven't learned, and which you don't even know you haven't learned. So don't claim to have a wide breadth of experience. If someone claims to have greater expertise than they actually do, there are three options: (1) They don't understand their own skill level; (2) they don't understand that their field is actually quite complex; or (3) they're lying. Interestingly, all three options indicate someone who's going to have a hard time learning anything, and that's deadly to a job candidate.

It's okay to not be an expert. If you're not an expert, and you know you're not an expert, then you're someone an employer can work with. If you're willing to learn, they'll be willing to put in the time it takes to train you.

Lesson 3: Do some homework
Obviously, it's become far easier to research almost anything, including the company you're interviewing with. There is now no excuse for failing to do so.

If a job candidate shows up and has no idea what our business does, it's a big red flag. It shows a lack of interest, and even a lack of general curiosity. Because it takes literally ten minutes to get a good idea of our basic business model, products, and history, only someone who's either lazy or oblivious would fail to type our company's name into Google and read through the top few results. If you really don't know anything about the business, you're going to have a very difficult time convincing anyone that you're a good fit for a job there.

Conversely, if you put in a little bit more effort, you can really stand out. It's legitimate to ask who will be interviewing you; and you can surely find many of the relevant employees on LinkedIn. The founders of the company and their executive leadership almost certainly have some kind of online presence. And if the company has patented anything, you can find those patents online in a few seconds. Sadly, very few job candidates (especially for junior-level positions) will make this small effort. So if you do, you'll have an advantage.

Lesson 4: Don't be an asshole
This one's pretty simple. Don't be an asshole. I don't care how good you are in your field; you won't get the job if you're an asshole.

There are many ways to be an asshole in a job interview. One is to be condescending. Lecturing interviewers about their own field does not make a good impression; nor does explaining to them why they shouldn't have asked a particular question or that they structure their interviews poorly (by the way, interviewers don't always have a choice about how the interviews are structured). Becoming angry and lashing out if you don't know the answer to a question is another way to be an asshole, as is interrupting people for no good reason. Accusing interviewers of having an ulterior motive will also make people think that you're an asshole.

You cannot compensate for being an asshole by also being really smart. Assholes drain productivity from their work environment to such a degree that there's no way to offset it. Any intelligent employer will know this. So, if you happen to be an asshole, but you get the job anyway, you'll probably be working with other assholes, idiots, or both.

As a college student, you could succeed even as an asshole. You got a good grade if you got the answers right. You didn't need to do any work that wasn't specifically required. And you didn't need to have an accurate self-assessment. In short, you've been rewarded for behavior that's irrelevant or downright detrimental in a job interview. In fact, you've been rewarded for behavior that's harmful in many important real-world situations. On behalf of my fellow professors, I apologize for this.

Thursday, May 28, 2015

Python: Implicit is better than explicit

Why the slogan?

I'm a big fan of Python. The syntax is great; the open-source community around it is extremely helpful; and the language lets me focus on the problems I'm trying to solve rather than the implementation details of the code.

A very peculiar feature of Python is that it comes with a set of ethical or aesthetic guidelines. My favorite -- because it's the most perplexing -- is the motto: "Explicit is better than implicit". It's perplexing because although it seems really simple (and maybe even obvious) when you read it, the more you think about it, the less clear it becomes. In fact, I think it's at least misleading, and probably wrong.

Like any programming language, Python makes a number of compromises. It aims for simplicity, and therefore hides quite a lot of details concerning how the interpreter works. Of course, you give up a lot when you don't understand the inner workings of the language. But Python allows you to surface a lot of its inner machinery if you want it. So you have the choice between cruising along, happily skimming the surface of the language, and taking a deep dive into it.

One of the most conspicuous items that's missing from Python is type-checking. Functions don't care what they're passed, and no checking occurs before run-time. Instead, functions heroically try to perform whatever operations they've been told to perform, regardless of the types of their arguments. If the operation can be carried out, it's done. If it can't, we throw an exception.  This is known, of course, as "duck-typing". Python doesn't care whether the thing actually is a duck. So long as it behaves like a duck, we'll call it a duck.

Here's an easy example. Suppose you've written a function that adds two things together.

def my_function(x, y):
    return x + y

We haven't had to declare the types of our function's arguments. It'll accept anything, and blithely try to add the arguments together. Here's where the duck-typing comes in. If we pass two integers to the function, it'll behave as expected by returning the sum. But if we pass in two strings, the function will also work: namely, by concatenating the strings together.

Python doesn't bother checking the types of x and y. Instead, it looks up their types and tries to find a method corresponding to the "plus" sign. If it finds one, the method is applied. So the function will work for strings, lists, ints, and floats, but not for sets. The upshot is that if you see a snippet of code that uses the plus sign, you have no idea whatsoever what that code does. The meaning of the plus sign is not explicit -- it's implicit.
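The point can be seen directly at the interpreter (re-declaring my_function so the snippet stands alone):

```python
def my_function(x, y):
    return x + y

# The same "+" dispatches to a different method depending on the types involved.
print(my_function(2, 3))           # 5
print(my_function("foo", "bar"))   # 'foobar' -- string concatenation
print(my_function([1], [2]))       # [1, 2] -- list concatenation

# Sets don't define "+", so the duck fails to quack.
try:
    my_function({1}, {2})
except TypeError as e:
    print("TypeError:", e)
```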

It's easy to come up with other examples in which implicit functionality has been baked into Python. For instance, a common "gotcha" for people who are new to Python is how arguments behave inside functions. Strictly speaking, every argument is passed the same way, as a reference to an object. But mutating a mutable argument is visible to the caller, while rebinding an immutable one is not, so the language behaves as if mutable objects were passed by reference and immutable objects by value. Nothing in the code indicates this. It's implicit in the object's type. To take another example, method resolution order for multiple inheritance has a lot of implicit functionality under the hood. Which methods are used by a class with multiple inheritance is not something that's explicitly stated anywhere in the code.
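The argument-passing gotcha fits in a few lines:

```python
def append_item(a_list):
    a_list.append(4)   # mutation: visible to the caller

def increment(n):
    n = n + 1          # rebinding: purely local to the function

nums = [1, 2, 3]
append_item(nums)
print(nums)            # [1, 2, 3, 4] -- the caller's list changed

count = 10
increment(count)
print(count)           # 10 -- the caller's int did not
```

Nothing at either call site distinguishes the two cases; the difference is implicit in the types of the arguments.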

Other than purely declarative programming languages, Python may have more implicit functionality than any other language I've come across. Which is why the "Explicit is better than implicit" slogan is so puzzling, and why it's the source of so many rants and arguments on StackOverflow.

I believe in *magic

The "explicit is better than implicit" slogan probably explains why there's such widespread resistance to so-called "magic" in Python. If you ask a question on StackOverflow about how to implement some Python magic -- for example, how to dynamically create classes or functions, or how to overload Python's import mechanisms -- you'll definitely get at least one answer saying, "Don't do it!". Python magic increases the amount of implicit functionality, and so there's often an assumption that if there's any other way to get something accomplished, magic should be avoided. So the fact that Python makes magic so easy is also another source of tension when people argue about the "best" way to implement something.

Personally, I think there's nothing at all wrong with having lots of magic and lots of implicit functionality in your Python code, even if it's not absolutely necessary. In fact, I think it's great! A couple of examples will help explain why I'm so strongly in favor of Python magic.

For my own work as a software engineer, I often write modules that are intended to be useful to other engineers. But of course, the mere fact that a piece of code would be useful doesn't by itself get other people to use it. There are high barriers to entry when you're learning how to use a new module. A common feature of modules that have a steep learning curve is that they require a pattern used in one piece of code to line up in some complicated way with another pattern used in another piece of code. Consider a really simple example. Suppose you've got two functions that look like this:

def sender(some_dictionary):
    new_dictionary = {
        k: v for k, v in some_dictionary.iteritems()
        if some_test(v)}
    return receiver(**new_dictionary)

def receiver(foo, bar):
    do_something(foo, bar)

The two functions have to "line up" correctly in order to avoid throwing an exception. The "sender" function gets a dictionary and builds a new dictionary by extracting some subset of its key-value pairs. The new dictionary is sent to "receiver" as a set of **kwargs.

If I were writing these functions for myself, I wouldn't bother doing anything fancy. But if I intended for someone else to be able to quickly implement functions like this, I'd worry a little bit. I'm forcing the other person to keep track of exactly which components of the dictionary are relevant to the "receiver" function. And this creates a set of "gotchas" that can be the source of much frustration. To take one example, this can happen if the module requires the user to subclass and provide a couple of methods that are required by the base class. (Scrapy works like this, for instance.)

Instead, why not let the code inspect the "receiver" function, get a list of its arguments, and automagically pare down the dictionary to include only the keys that the function needs? Python makes this easy to accomplish. The new dictionary can be programmatically defined like this:

{k: v for k, v in some_dictionary.iteritems()
 if k in inspect.getargspec(receiver).args}

This functionality could be baked into some base classes, or put into a decorator. It's magic, but so what? It would be nutty to write something like this if there was no intention for others to use the relevant code. But at the cost of a small bit of complexity and a bit of magic, we can free other people from having to worry about how the "sender" and "receiver" functions align with each other.
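The decorator version might look something like this. (This is a sketch, not the code from my modules; I've used the Python 3 spellings -- items() and inspect.getfullargspec -- and the decorator name accepts_extras is made up for illustration.)

```python
import inspect

def accepts_extras(func):
    """Decorator: silently drop any keyword arguments func doesn't declare."""
    arg_names = inspect.getfullargspec(func).args

    def wrapper(**kwargs):
        # Automagically pare the kwargs down to just the keys func needs.
        wanted = {k: v for k, v in kwargs.items() if k in arg_names}
        return func(**wanted)

    return wrapper

@accepts_extras
def receiver(foo, bar):
    return (foo, bar)

# The caller can now pass a dictionary with extra keys without blowing up.
print(receiver(**{"foo": 1, "bar": 2, "baz": 3}))  # (1, 2)
```

Now "sender" doesn't need to know which keys "receiver" cares about; the introspection magic keeps the two functions lined up automatically.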

"Simple", "Complex", and "Complicated" don't mean anything

The "Explicit is better than implicit" slogan seems really plausible because it seems to be implied by the principles that "Simple is better than complex" and "complex is better than complicated". When a Python script does something implicitly, the code frequently becomes more complex. For a really clear example of this, check out the "sh" module, which hijacks Python's import mechanism and uses it to create functions that execute external commands. The effect of this is that you can "from sh import foo" and get back a "foo" function, even though no such function is actually defined in the sh module.
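The core of that trick can be sketched with a fake module whose attributes are conjured up on demand. (The module name "shmagic" is invented for this example, and the real sh module uses a more elaborate import-hook mechanism than this.)

```python
import subprocess
import sys
import types

class CommandModule(types.ModuleType):
    """A module whose attributes are generated on demand, sh-style."""
    def __getattr__(self, name):
        # No function named `name` exists; manufacture one that runs
        # the external command of the same name and returns its output.
        def run(*args):
            return subprocess.check_output([name] + list(args)).decode()
        return run

# Register the fake module so that "import" finds it.
sys.modules["shmagic"] = CommandModule("shmagic")

from shmagic import echo   # no "echo" function is defined anywhere!
print(echo("hello"))
```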

The module does this by doing a number of clever tricks. But is the module "complex"? Or worse yet, "complicated"? There's no answer. Or more precisely, the answer depends on who you are. If you're a user of the module, there's nothing simpler. If you're a casual programmer poking around in the module's code, or someone who's looking to modify it, then it's probably "complex" (at least).

Simplicity isn't a characteristic of anything. It's a relationship. In programming, it's a relationship between three things: the code, the user of the code, and the purpose to which the code is being put. When those three align in the right way, you've got simple (and elegant) code. If they're out of alignment, then you've got something else -- in the worst case, it's complicated. The same code can be both simple and complex.

And this is why we too often assume that explicit is better than implicit. In typical use-cases, implicit creates complexity because of who the users are and what they're using the module for. But in many contexts, the opposite is true.

Monday, May 25, 2015

Be an ethical engineer

I made a career transition from academia to software engineering about a year and a half ago. It's been a very good move for me, for many reasons. If you're an intellectually curious person and you're comfortable making a big change, there's nothing better than switching careers. You'll be on an almost vertical learning curve; you'll learn stuff you didn't know existed; and you'll meet a lot of people you wouldn't have met before.

One of the many things I've learned is just how hard we're pushing the boundaries of existing technology. I'm sure the vast majority of people don't have any idea how much effort, time, money, and brainpower is required to build the computer systems we use every single day. I won't get into details here, but if you use any popular web-based service like Google, Netflix, Facebook, or Twitter, you're relying on a vast infrastructure comprising thousands upon thousands of computers spread out all over the world. Data centers are so large that they have to be built in specific locations where there's enough electricity available; and they are often built near large sources of water so that the water can be pumped through the facility and used for cooling. Even small tech start-ups with just a few employees can easily require hundreds of machines networked together in extremely complex ways.

This is why engineers are so valuable to industry and government. It is not easy to get these systems up and running, and it requires a skill set that's not easy to find. At the company where I work, we have been hiring engineers constantly for years. There hasn't been a time when we weren't interviewing people, spending money on recruiters, going to job fairs, or doing something else as part of the never-ending search for skilled engineers. The entire tech industry is in an escalating competition for talented people. Every technical talk I attend either begins or ends with a plea for people to apply for jobs at their respective companies.

Getting your hands on huge amounts of computing power is quick, simple, and cheap. But raw computing power is worthless by itself. Transforming those resources into something useful requires engineering know-how. And the demand for that knowledge is outstripping supply by a very wide margin. This gives engineers the rare luxury of choice among potential employers. The engineers I've met value tackling intellectually challenging problems, learning about new technologies, the autonomy to make their own creative decisions, the company of like-minded people, a collaborative environment, and the opportunity to see their work make a significant impact on the world. And they typically don't need to make very many compromises when they're looking for a job.

For all these reasons, we've got a major opportunity to make the world better by taking our ethical values into account when we're looking for work. Take, for example, the National Security Agency's (NSA) new data center. Here's how The Economist describes it:
Deep in the desert in Utah, the National Security Agency... has built a $1.5 billion centre to scoop up and analyse data from the internet. The building includes its own water-treatment facility, electric substation, and 60 back-up diesel generators. It will use over a million gallons of water a day. Its data-storage capacity would be enough, according to one estimate, to store a year of footage of round-the-clock video-recording of over a million people. At this centre, communications from across the globe are tapped directly from the fibre-optic backbone of the internet.
I happen to think that the surveillance directives handed down to the NSA are unethical. You may agree or disagree. But it's certainly true that a facility like the one in Utah requires a large group of engineers with highly specialized and rare skills. It takes a lot of really smart people to design, build, and maintain a behemoth like this one. The number of people capable of handling an engineering challenge like this one is very, very small. And that tiny group of engineers makes it possible for such facilities to exist.

Without those people, the NSA facility would go up in flames. Literally. It's easy to come up with other examples of complex facilities and technologies that are morally suspect. In my opinion, high-speed trading on Wall Street is enormously unethical because it creates volatility, concentrates wealth artificially, and distracts from the real purpose of a market -- namely, to enable capital to flow from unproductive to productive uses. Here again, there's a rare set of skills that makes this activity possible.  An engineer with the ability to shave a couple of nanoseconds off the time required for a signal to propagate through a network is potentially worth vast sums of money. But there are very few people capable of doing this. High-speed trading would collapse if it lacked the people with the know-how to build and maintain these systems.

Furthermore, there's a huge difference in productivity between a top-notch "A-Player" engineer and a merely competent "B-Player" engineer. Employers take this very seriously. Smart employers will spend huge amounts of time and money seeking out "A-Players" to hire, and they have no qualms about leaving a vacancy open for a very long time in order to avoid hiring someone who is merely competent. For this reason, a highly skilled engineer deals a major blow to any organization he or she refuses to work for, even if someone eventually fills the position. For all these reasons, engineers have the means, motive, and opportunity to improve the world. You simply have to take into account your own values when deciding who to work for.

Sunday, May 17, 2015

The "Follow Your Passion" Trap

I just read an interesting post in the "what's the matter with the youth today?" genre. It's about how the youth today don't struggle and therefore never develop a passion for anything, because passion requires struggle and lots of hard work. Like a lot of observations in this genre, there's probably some truth to it. But I think it really misses the important reasons why people lead lives filled with mind-numbing work they have no passion for.

First, an important disclaimer. Obviously, a huge number of people spend their lives performing thankless, unfulfilling, rote work simply because they have no other choice. The reason why these people might not be able to name something they're passionate about is because they're exhausted. So when we're talking about people who aren't passionate about anything, we're really discussing a problem that only lucky people have. We're talking about people who have the luxury of time and resources -- people who ought to be able to follow their passions, but don't, for the simple reason that they're not passionate about anything.

Second, it would be nice to get clear on what a "passion" is. Unfortunately, we can't. But we can tell when someone is passionate about something, just as Supreme Court Justice Potter Stewart admitted that he couldn't define pornography, but said that he knows it when he sees it. There are certainly symptoms that a person displays when they are passionate about something. They spend a lot of time pursuing it; they care deeply about becoming skilled at it; they find it intrinsically worthwhile, and not just a means to an end. Often, they have a difficult time explaining to others why they care so deeply about their passion. They often seem obsessive, and they get a lot of satisfaction by becoming more skilled at it, or by achieving goals associated with it. Conversely, when they can't pursue their passion, they become frustrated, angry, or depressed.

I give a lot of (solicited) advice to people about switching careers, because this is something I've done recently. Several of these people have expressed frustration, and have told me something like this:
You're lucky. You have something you're passionate about, so it was easy for you to decide what career you wanted to move into. But I don't. There isn't anything I care deeply about, so I don't know what to do!
This is a really pernicious attitude, because it requires an assumption that's utterly self-defeating, but is difficult to expose. We'll have to approach this assumption indirectly, and sneak up on it.

Let's not think about being passionate about work, but about being in love with a person. It's not a bad analogy; after all, many of the symptoms of being passionate resemble the symptoms of being in love. Consider a person who we'll call "Bob". Bob isn't in love with anyone, but wants to be. Bob complains one day:
Look at those happy couples. They're so lucky. It would be good to be in love, but I'm not in love with anyone. I'm just not an in-love kind of person. If I were that kind of person, then I'd be in love with someone.
Bob is kind of nutty. There isn't an "in-love" kind of person who's in love and therefore does things with the object of their love. Bob is not only nutty, but also fatalistic. If he really thinks that he's just not that kind of person, then Bob is very unlikely to ever fall in love with anyone.

If you were Bob's friend, you'd probably want to slap him and say:
"Bob, it doesn't work like that! You don't start out being in love. You meet people and spend time with them, and then maybe you fall in love with one of them. You don't spend time with someone because you're in love -- the feeling of being in love comes later!"
Now let's get back to the problem of having a passion. When we say to someone, "follow your passion", we're setting up a subtle trap. We're suggesting that people have passions first, and then they pursue those passions second. It's like telling Bob to spend time with the person he loves. Not only would this be unhelpful advice, but it would also be highly demoralizing.

So what I tell people is this: Of course you don't have a passion -- you haven't tried anything! For most of us, we don't switch careers or make any other kind of big change because we already know what our passion is. We have to just take a stab at something, and try to keep an open mind about discovering something we can learn to care deeply about. This is a very scary proposition. It will probably turn out that you try something that doesn't work out -- in fact, this is overwhelmingly likely. This may feel like a failure, and so you'll probably feel that you've failed several times at least. You will doubt yourself, and your self-esteem will take some very big hits. Many of the lessons you learn will be negative -- mistakes that you promise to yourself you won't make again. But unfortunately, we are not born with our own personal set of passions that guide us through our lives. Passions are discovered by trial-and-error, and then developed over time. They do not guide you. Instead, you have to wander through the dark while struggling to keep an open mind. And if you're lucky, you'll find the right opportunity. Then the real work begins.

Saturday, May 2, 2015

Shut Down Your PhD Program

When I was a graduate student in an excellent philosophy department, there was a conventional wisdom about the academic job market. Generally, people believed that although the market was bad, it would soon improve dramatically as the large number of faculty hired during the late 1960s and 1970s retired. Their retirement would create a lot of job openings and put the academic job market back in balance.

That was in the late 90s and early 2000s. Since then, the academic job market has gotten steadily worse. There are plenty of reasons for this, some of which we couldn't predict at the time. As the economy got worse, senior faculty delayed their retirements. State appropriations for colleges and universities declined, and administrators were eager to recover whatever money they could by either not filling faculty lines, or (more often) filling them with temporary or adjunct positions, which often have criminally low salaries.

When I see humanities PhDs on the job market, I'm horrified. I thought I had it tough when I went on the market in 2001. But that was a walk in the park compared to the current situation. People who could have landed excellent jobs fifteen years ago are out of luck today. I'm pretty sure that in my own case, although I happened to land a terrific job out of graduate school, I'd have a tough time today finding anything if I were doing it over again.

There is no indication whatsoever that the job market for highly qualified, hard-working PhDs is likely to improve anytime in the foreseeable future. Those retirements had no measurable effect, the hiring of adjuncts and other temporary, non-tenure track faculty is only accelerating, and budgets remain tight. Although the United States is out of the recession, higher education has not bounced back.

In this climate, academic departments with PhD programs are under an obligation to justify their existence -- this is especially true of departments whose graduate programs are of average quality or below. When I was a tenured faculty member in such a department, we justified the existence of our PhD program with something like this (although we wouldn't be so explicit about it): "We're fairly average now, but we are having good luck placing our PhDs into academic positions (relative to other, similar programs), and we're doing a good job training them. More importantly, we have a lot of administrative support for improving our program. We're able to hire additional tenure track faculty, and we've got a solid plan for becoming an excellent program, especially in a few strategically selected areas of specialization."

I have since left academia entirely, in part because I started to appreciate how delusional that justification was. Year after year, we collectively shot ourselves in the foot and squandered one opportunity after another. Our graduate students had an increasingly difficult time finding jobs, funding declined, and we became overwhelmed by the institutional inertia that characterizes so much of academia. Academic departments are very rarely capable of making the qualitative changes that are necessary in order to improve dramatically. It's crazy to think that the leadership that brought about the status quo and flourished in it will suddenly become motivated to institute major changes, or for that matter, that they'll be competent to do it. I don't think my department was unusual in this regard; this trajectory seems to me quite typical.

For these reasons, our justification for having a PhD program was flimsy at best. But I'm not too disturbed by that, mainly because our purported reasons were not our actual reasons. The fact is, faculty tend to enjoy working in a department with a PhD program. It's usually a bit more prestigious, the pay is somewhat better on average, and there's less teaching and more freedom to do research. And of course, graduate students do a lot of the work that faculty don't want to do. There are downsides, of course. There's extra administrative work, and the time spent supervising graduate students is significant (although I found this very enjoyable). But on the whole, for most faculty it's quite desirable to have such a position. Faculty and administrators want to have a PhD program out of self-interest.

This would be alright if having a PhD program didn't do any harm. In other words, someone might try to justify their PhD program by saying something like this: "It's not as if we are damaging our graduate students by training them in our discipline; and after all, they are adults, and nobody forced them to go to graduate school or to choose our program. If it weren't a benefit to them, they wouldn't have applied in the first place; and nobody is forcing them to stay, either."

If this is the justification for a PhD program, then it's important to understand what this argument really amounts to. It's an economic argument, pure and simple. It asserts that the market for one's PhD program demonstrates its utility, exactly like a market for widgets would be used by an economist to demonstrate the value of widgets. And because it is an economic argument, it's subject to the same criticisms we commonly level against that kind of reasoning.

Market-based arguments for the value of something assume that the buyers are informed. But are potential graduate students informed of what academia is really like, and more importantly, of what the academic job market is like? Personally, I'm skeptical. It's one thing to be told how bad it is, but it's very difficult to fully appreciate that the job market is a post-apocalyptic nightmare. I know I didn't fully appreciate it (and I had it way better than today's graduate students). From my years working with graduate students -- even very smart ones -- I don't think most of them understood, either. To be sure, the truth would gradually sink in, but only after a significant amount of time and money had been invested.

Furthermore, arguments that appeal to a market also assume that buyers can switch if they don't like what they buy. And certainly, graduate students can switch by simply quitting. Proponents of PhD programs will no doubt say that there's nothing easier than waking up in the morning and not going to graduate school. In short, if you don't like it, then go do something else.

This is a line of reasoning I don't accept at all. When someone goes to graduate school, they dedicate a huge proportion of their time and effort to their studies. This means that they immerse themselves in the culture of their graduate program. However inadvertently, faculty inculcate in their graduate students the attitude that quitting graduate school is a defeat -- that those who quit are not as smart, dedicated, or hard-working as their fellow students who choose to stay. I think that as faculty, we underestimate how powerfully we send this message, and how psychologically difficult it is to overcome. Leaving a graduate program is very difficult. And faculty bear some responsibility for this.

No doubt, some people will be offended by my paternalistic tone. If you're one of those people, then so be it. More interestingly, however, I think there's a good self-interested reason for faculty to be opposed to the existence of so many PhD programs. The argument is also market-based, but it doesn't appeal to dubious assumptions about market efficiency.

Every faculty member laments the fact that tenure-track jobs are being replaced by non-tenure track jobs. Adjuncts are typically exploited in all the familiar ways that I don't need to discuss here. Furthermore, the lack of new tenure-track positions weakens the bargaining power of faculty. The more reliant higher education becomes on temporary workers, the fewer tenured faculty there are, and the less influential they are as a whole. Although faculty often don't like to think of themselves as workers, it is nonetheless true that tenured faculty form a de facto union because they have similar interests and the extraordinary job security of tenure, which gives them an unusual amount of bargaining power (if they choose to use it).

Everyone knows that one way you break a union is by hiring lots of non-union workers. In this case, the non-union workers are the adjuncts and the graduate students. Tenured and tenure-track faculty who have PhD programs are effectively recruiting and training their own scabs! We don't think of graduate students and adjuncts as playing the same role in this dynamic, but there's no difference between them insofar as they weaken the bargaining power of faculty, reduce the number of tenure track jobs, and empower administrators to continue to make asinine strategic decisions on behalf of the university. Of course, with graduate students gone, the university could redouble its efforts to hire adjuncts. But even this would be better because it would highlight the short-term and destructive decisions that are being made by bloated and uninformed administrations.

For all of these reasons, the vast majority of PhD programs harm everyone in the university, and an important step toward slowing the decline of higher education in the United States is to shut them down.

Saturday, April 25, 2015

Living With Chronic Pain

Like millions of other people, I have chronic pain. In my case, it happens to be a chronic migraine condition called -- accurately enough -- "daily chronic migraine". Migraines count as chronic if they occur on at least fifteen days per month, although that's a somewhat arbitrary cutoff, of course. In my case, the migraine is constant: twenty-four hours per day, seven days a week. It does not take off for weekends or holidays. This is more common than you might think. Millions of Americans have chronic migraine, and many of them, like me, have it constantly.

The two most common questions I get from people about my chronic pain are: (1) Really? You have a migraine right now?, and (2) Have you tried...?. The answer to both questions is "yes". When I say that I have a migraine 24/7, this is not hyperbole; I mean it quite literally. This logically entails that I have a migraine right now. I also receive many very well-intentioned suggestions for treatment options. As much as I appreciate the thought behind these suggestions, I'm frankly no longer interested in hearing about them. I've tried them all. I have been on every over-the-counter medication and supplement (especially vitamin D and coenzyme Q10), and I was prescribed no fewer than twenty-seven different medications, many of which I took in various combinations and in varying doses. I've been hospitalized several times for treatments that have to be conducted under constant observation because they're dangerous. I had Botox and another nerve block (whose name escapes me) injected into various nerves. At one point (at a doctor's suggestion), I started inhaling Lidocaine into my nose to abort the attacks. And yes, I have tried acupuncture. I've had every kind of test, including a huge number of blood tests, half a dozen MRIs, a CAT scan, and a cerebral angiogram, for which a catheter is threaded from your groin up into your skull so that an image can be made from inside the skull. I've even had a brain biopsy, in which a small piece of brain tissue is removed and sliced into a zillion tiny pieces to be tested for, among other things, degradation of blood vessels caused by autoimmune disease. I was wrongly diagnosed with a rare autoimmune condition that's always deadly and has to be treated with chemotherapy so severe that it kills about forty percent of the people who undergo it. Luckily, the mistake was caught in time, and I did not have that treatment.

The end result of these experiments was that no known treatment works on me. Unfortunately, this is not as unusual as you might think. The fact is that medical science doesn't understand headaches very well. It doesn't understand chronic pain very well, either. Progress is being made on both of these fronts, but we aren't there yet. In fact, it's entirely possible that "migraine" isn't a single kind of headache at all.

People often ask me what medications I'm taking. The answer is "none". I don't take anything for my migraines at all, and I also don't take any painkillers. Even if I have a bad head cold, I don't take medication that contains a painkiller. I've only taken painkillers once in the past five years -- when I got kicked by a horse, I took a little Advil for a day.

I'm not avoiding pain medication because I like being in pain. Quite the opposite. Chronic pain is very different from acute pain, and has to be treated differently. When you have acute pain -- for instance if you hit your head on something or have an occasional headache -- pain medications can be totally appropriate. But this isn't true for chronic pain. People like me who have chronic pain have a neurology that's overly sensitive to stimuli, and our central nervous system is too "ramped-up", so to speak. If we take anything that depresses our nervous system, we compensate for it over time. In particular, if we take pain medications for any significant length of time, we just become more sensitive to pain and we end up worse off than before.

So, counterintuitively, what we have to do is deliberately expose ourselves to stimulation that causes pain. The way we do this is by simply living our lives like anyone else. For example, I'm now sitting in a noisy coffee shop that's crowded, has rather harsh lighting, and also has a powerful smell of coffee. All of these are migraine triggers, and they exacerbate my chronic pain. But over the long run, environments like this will tend to cause my central nervous system to become less sensitive. This means less pain overall.

Besides living one's life, there are other coping strategies. One is to avoid thinking about the pain. This may sound trite, but simply getting in the habit of distracting oneself from the pain will actually make it better in the long run. General healthy living also helps a lot. This means exercising regularly, eating well, and keeping a regular sleep schedule. I'm better at some of these strategies than others, so there's always room for improvement.

As a result, I am in far less pain than I used to be, and I can live my life. People who meet me do not know that I have chronic migraines unless I tell them (which I usually don't, unless a particularly difficult day starts to affect me noticeably; then I might explain what's happening).

It's important for someone in my situation to understand what contributes to their chronic pain, even though the exact causes are not known. We know there's a genetic factor, and we know some things that can exacerbate the condition. But in the end, we don't really understand why one person's pain becomes chronic while another person's doesn't. In my case, though, I'm very confident that my personality and lifestyle were major contributing factors. People with chronic pain tend to be "type-A" personalities; we're often very driven and obsessive. We tend to work too hard, keep irregular hours, become angry or frustrated too easily, and demand too much of ourselves. This describes me perfectly. For as long as I can remember, I've been totally obsessive about work (professional work as well as personal side-projects). I didn't sleep enough, I'd get stressed out to an irrational degree if my work wasn't going as well as I wanted, and I dealt with every difficulty by placing more demands on myself. I try really hard not to be like this, with varying degrees of success. I try to always ask myself, "how many people will die if this doesn't get done perfectly?". If the answer is "zero", I try to give myself permission to step back from the work. Because I am not planning on becoming a brain surgeon anytime soon, I expect the answer will always be "zero". But there will always be an obsessive freak living inside my brain, so this is always a challenge. In fact, one of the reasons I write this blog is to give that freak something to do that doesn't entail any obligations at all.

An unfortunate fact is that this type of personality tends to alienate people. But people with chronic pain need to avoid isolation. So people who develop chronic pain are often the very people who are most ill-equipped to deal with it. And if it's difficult to not be an obsessive person, it's (for me, at least) even harder not to become isolated. In my case, I'm sure that my wife quite literally saved my life. But the burden I placed on her was unbelievably heavy, not to mention unfair. People with chronic pain often do this. Our combination of personality and chronic pain will drive away the very people who want to help.

It would be wildly false to claim that I've dealt perfectly with having chronic pain. But in a way, that's good news, because I've been able to get on with my life despite my many flaws.