Saturday, December 31, 2016

Who is responsible when an article gets misread?


How much of the responsibility for understanding lies with the writer of an article, and how much with the reader? This is not an easy question to answer. Obviously both sides bear some responsibility. There are articles so baroque and circuitous that extracting the point would require an unreasonable amount of time and effort, even for the smartest reader. And there are readers who skim articles so lazily that even the simplest and most clearly written points are lost. Most cases fall somewhere in between. And the fact that writers don't usually get to write their headlines complicates the issue.

See what you think about this one. The other day, Susan Dynarski wrote an op-ed in the New York Times criticizing school vouchers (a subject I've written about myself). Dynarski opens with the observation that economists are generally less supportive of vouchers than they are of most free-market policies:
You might think that most economists agree with this overall approach, because economists generally like free markets. For example, over 90 percent of the members of the University of Chicago’s panel of leading economists thought that ride-hailing services like Uber and Lyft made consumers better off by providing competition for the highly regulated taxi industry. 
But economists are far less optimistic about what an unfettered market can achieve in education. Only a third of economists on the Chicago panel agreed that students would be better off if they all had access to vouchers to use at any private (or public) school of their choice.
Here's the actual poll: 


As you can see, the modal economist opinion is "uncertain" about whether vouchers would improve educational quality, while the median is between "uncertain" and "agree". This clearly supports Dynarski's statement that economists are "far less optimistic" about vouchers than about Uber and Lyft.

The headline of the article (which Dynarski of course did not write) might overstate the case a little bit: "Free Market for Education? Economists Generally Don’t Buy It". Whether the IGM survey shows that economists "generally don't buy" vouchers depends on what you think "don't buy" and "generally" mean. It's a little click-bait-y, like most headlines, but in my opinion not too bad. 

Scott Alexander, however, was pretty up in arms about this article. He writes:
By leaving it at “only a third of economists support vouchers”, the article implies that there is an economic consensus against the policy. Heck, it more than implies it – its title is “Free Market For Education: Economists Generally Don’t Buy It”. But its own source suggests that, of economists who have an opinion, a large majority are pro-voucher... 
I think this is really poor journalistic practice and implies the opinion of the nation’s economists to be the opposite of what it really is. I hope the Times prints a correction.
A correction!! Of course no correction will be printed, because no incorrect statements were made. Dynarski said that economists are "far less optimistic" about vouchers than about Uber/Lyft, and this is true. She also reported close to the correct percentage of economists who said they supported the policy in the IGM poll ("a third" for 36%). 

Scott is upset because Dynarski left out other information he considered pertinent - i.e., the breakdown between economists who were "uncertain" and those who "disagree". Scott thinks that information is pertinent because he thinks the article is trying to argue that most economists think vouchers are bad. 

If Dynarski were in fact trying to make that case, then yes, it would have been misleading to omit the breakdown between "uncertain" and "disagree". But she wasn't. In fact, her article was arguing that economists tend to have reservations about vouchers. And she supports her case well with data.

This is a special kind of straw man fallacy. Straw manning is where you present a caricature of your opponent's argument. But there's a particularly insidious kind of straw man where you characterize someone's arguments correctly, but get their thesis wrong. You misread someone's argument, and then criticize them for failing to support your misreading. Other examples of this fallacy might be:

1. You write an article citing Autor et al. to show that the costs of trade can be very high. Someone else says "This doesn't prove autarky is better than free trade!" But of course, you weren't trying to prove that.

2. You write an article arguing that solar is cost-competitive with fossil fuels by pointing out that solar power is expanding rapidly. Someone else says "Solar is still a TINY fraction of global generating capacity!" But of course, you weren't trying to refute that.

3. You write an article saying we shouldn't listen to libertarian calls to dismantle our institutions. Someone else says "Libertarians aren't powerful enough to dismantle our institutions!" But of course, you weren't trying to say they are.

I think Scott is doing this with respect to Dynarski's article. To be fair, his misreading was somewhat assisted by the headline the NYT put on the piece. But once he was reminded of the fact that the headline wasn't Dynarski's, and once he re-read the article itself and realized what its actual thesis was, I think he should have muted his criticism. 

Instead, he doubled down. He argued that most reasonable people, reading the article, would think it was arguing that economists are mostly against vouchers. But his justification for this continues to rely very heavily on the wording of the headline:
First, I feel like you could write exactly the opposite headline. “Public School: Economists Generally Don’t Buy It”... 
Second, the article uses economists “not buying it” as a segue into a description of why economic theory says school choice could be a bad idea... 
In the face of all of this, the New York Times reports the field’s opinion as “Free Market In Education: Economists Generally Don’t Buy It”.
On Twitter, he said: "the actual article is more misleading than the headline." But he appears to say this because he takes the headline - or, more accurately, his reading of it - as defining the thesis that Dynarski is then obligated to defend (when in fact she wrote the piece long before a headline was assigned to it). When he finds that Dynarski doesn't support his reading of a headline she didn't write, it is her article, not the headline, that he calls "misleading".

Of course, the fault here is partly that of the NYT, which used a headline that focused on only one part of Dynarski's article and overstated that part. It would be a little harsh for me to say "Come on, man, you should know an article isn't about what its headline says it's about!" Misleading headlines are a real problem. But after learning that Dynarski didn't write the headline, I think Scott should have been able to read the article on its own, and go back and evaluate the arguments Dynarski actually makes. It's the refusal to do this that seems to me to constitute a straw-man fallacy.

Anyway, one last point: I think Dynarski is actually wrong that economists are more wary of vouchers than other free-market policies. Yes, economists in general are probably wary of voucher schemes. But they're also a lot more favorable to government intervention in a variety of cases than Dynarski claims. Klein and Stern (2006) have some very broad survey data (much broader than IGM). They find that 67.1% of economists support "government production of schooling" at the k-12 level, with 14.4% uncertain and 17.4% opposed. But they also record strong support for a variety of other interventionist policies, such as income redistribution, various types of regulation, and stabilization policy. On many of these issues, economists are more interventionist than the general public! So I think if Dynarski makes a mistake, it's to characterize economists as being generally pro-free-market. Their ambivalence about vouchers doesn't look very exceptional.

Saturday, December 24, 2016

The Fundamental Fallacy of Pop Economics


The Fundamental Fallacy of Pop Economics (which I get to name, because this is my blog and I can do whatever I want, mwahahaha) is the idea that the President controls economic outcomes.

The Fundamental Fallacy is in operation every time you hear a phrase like "the Bush boom" or "the Obama recovery". It's in effect every time someone asks "how many jobs Obama has created". It's present every time you see charts of economic activity divided up by presidential administration. For example, here's a chart from Salon writer Sean McElwee, using data from a paper by Alan Blinder and Mark Watson:


Blinder and Watson attribute the difference to "shocks to oil prices, total factor productivity, European growth, and consumer expectations of future economic conditions", but McElwee attributes it to progressive economic policy.

Larry Bartels has made headlines with similar analyses about inequality:


But the worst perpetrators of this fallacy tend to be conservative econ/finance commentators. And of these, the worst I've seen is Larry Kudlow. Kudlow is being mooted for chairman of the Council of Economic Advisers -- basically, the president's chief economist. Here's an excerpt from a Kudlow post in December 2007 (!) denying that the economy was in danger: 
The recession debate is over. It’s not gonna happen. Time to move on. At a bare minimum, we are looking at Goldilocks 2.0. (And that’s a minimum). The Bush boom is alive and well. It’s finishing up its sixth splendid year with many more years to come.
Notice how this turned out to be spectacularly wrong (the Great Recession began that very month), and how Kudlow explicitly associates economic good fortune with the President's term in office. Here's another, from around the same time:
The GOP...has a positive supply-side message of limited government, lower spending, and lower tax rates....I believe the economic pendulum will soon swing in favor of the GOP. There’s no recession coming. The pessimistas were wrong. It’s not going to happen. At a bare minimum, we are looking at Goldilocks 2.0. (And that’s a minimum)...The Bush boom is alive and well.
Or here's Kudlow on Obama:
"You've had so much war on business in the last eight or 10 years…I think that has really damaged the economy and has held businesses back from investing and creating jobs. It will take a while to turn that ship around," Kudlow said of Obama's economic policies.
You see the same kind of President-based magical thinking here. In fact, go back and read Kudlow's commentary over the years, and his whole body of work is shot through with this simple thesis - Republican presidents are great for the economy, Democratic presidents are terrible, etc. Kudlow has ridden the Fundamental Fallacy about as far as it's possible to ride it. 

In a recent post, James Kwak declares that Kudlow is a victim of what he calls "economism" (and which I call "101ism"). He thinks Kudlow is wedded to a vision of an economy where free markets always work best. But I respectfully disagree with James. Kudlow doesn't seem to think about supply and demand, or deadweight loss, or any of that - nothing that would be taught in an econ class. Kudlow's thinking is more instinctive and tribal - it's "Republican President = good economy". It's the idea that if the man in charge comes from Our Team, things must go well, and if it's someone from the Other Team, things are bound to be a disaster. The Fundamental Fallacy doesn't come from Econ 101 - it's far more primal than that, an upwelling of our deepest pack instincts.

So, you may ask, why is the Fundamental Fallacy a fallacy? Three basic reasons:


1. Policy isn't all-powerful. 

Macroeconomic models are not reliable, so it's very hard to get believable numbers for the effects of policies like the Bush tax cuts or Obama's stimulus bill. But most estimates show that the effect of both was very modest - the Bush tax cuts might have increased overall GDP by 0.5-1.5% in the short term, and probably had close to no effect in the long term. Meanwhile, the ARRA's effect on unemployment and growth was probably quite modest. Optimistic estimates have Obama's policy package reducing unemployment by about 0.5-1.5% from 2009 through 2013 - not nothing, but not nearly enough to make the Great Recession go away. And those are the most optimistic, favorable estimates.

Only in (some) econ models does policy have complete control over things like GDP and unemployment. But those models are almost certainly highly misspecified. In reality, policy has institutional constraints - nominal interest rates can't go much below zero, there's a federal debt ceiling, etc. And even more importantly, if policy becomes extreme enough, the models themselves start to lose validity - if you have the government go deeply enough into debt, the fiscal stimulus effect will no longer be the only way in which more government borrowing affects the economy. 

In reality, things like growth and unemployment are often determined by natural forces rather than government decisions. For example, I suspect that the pattern of higher growth during Democratic administrations cited by Blinder & Watson is at least partly endogenous - recessions cause white working-class voters to ignore social/identity issues and vote for Democrats like Clinton in 1992 and Obama in 2008, allowing those Democrats to take credit for the natural as well as the policy-induced parts of the recovery.


2. The President doesn't control policy.

Charts like those of Bartels and Blinder & Watson, as well as buzzwords like Kudlow's "Bush boom," look only at the party of the President. But Congress is often controlled by a different party. Obama and Clinton faced Republican Congresses for much of their terms in office, and Reagan faced a Democratic Congress. Even when the President has a Congress of the same party, it's often difficult for him to push through his desired policies - witness Bush's failure to privatize Social Security, or Clinton's failure to enact fiscal stimulus.

Additionally, a lot of power is held by the states. Much of Obama's stimulus bill actually just went to shore up decreases in state spending. Meanwhile, the Fed controls interest rates, and though the President appoints the Fed chair, he has very little control over what that Fed chair subsequently decides to do. 


3. Policy often acts with a lag.

Cutting taxes does relatively little if spending isn't also cut. If tax cuts aren't eventually matched by spending cuts, the government eventually has to either raise taxes back up or default on its debt. Therefore, if tax cuts don't "starve the beast", their only effect will be through short-run fiscal stimulus. And tax cuts aren't a very efficient form of stimulus.
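Here's a minimal two-period sketch of that budget logic, with made-up numbers, spending held fixed, and default ruled out by assumption:

```python
# Stylized two-period government budget: a tax cut that isn't matched by
# spending cuts must be repaid (with interest) through higher taxes later,
# so it is a temporary stimulus rather than a permanent cut.
r = 0.03          # interest rate on government debt (assumed)
spending = 100.0  # spending in each period, held fixed (the beast is not starved)
tax_cut = 10.0    # period-1 tax cut, financed by borrowing

taxes_1 = spending - tax_cut          # period-1 taxes fall
debt = spending - taxes_1             # the shortfall is borrowed
taxes_2 = spending + debt * (1 + r)   # period-2 taxes cover spending plus repayment

print(f"Period-1 taxes: {taxes_1:.1f}")   # 90.0  - the "cut"
print(f"Period-2 taxes: {taxes_2:.1f}")   # 110.3 - the cut reversed, plus interest
```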

So, guess what? Tax cuts don't ever seem to lead to spending cuts. "Starve the beast" doesn't work. 

This is just one example of how policy often acts with "long and variable lags". Deregulation is another. Many people believe that Reagan's deregulations led to the boom of the late 1980s, but Carter actually slashed a lot more regulation than Reagan did. It could have taken years for those deregulations to lead to higher growth. 

Any structural policy you want to name - welfare reform, tax cuts, infrastructure spending, research spending, trade treaties - should only have its full effect after a number of years. It takes years for businesses to invest and grow, for trade patterns to shift, and (probably) for worker and consumer behavior to permanently change. Presidents serve eight years at most. So even if presidents controlled policy, and even if policy was very effective, we'd still see many presidents getting credit for their predecessors' deeds.


Obviously there are some big exceptions to this. The President can start a war, and wars can make the economy boom (as in WW2 for America) or wreck it utterly (as in WW2 for everyone else). Given enough power, a President could in theory wreak havoc on the economy, as Hugo Chavez did in Venezuela. In poor countries, a strong President like Deng Xiaoping can push through reforms that change a country's entire economic destiny. 

But in a country that is already rich, where the President is restrained by checks and balances, and where policy changes are not sweeping or huge - i.e., the United States over the past half century - we would be well-advised not to exaggerate the economic impact of the chief executive.

Wednesday, December 14, 2016

Academic signaling and the post-truth world


Lots of people are freaking out about the "post-truth world" and the "war on science". People are blaming Trump, but I think Trump is just a symptom. 

For one thing, rising distrust of science long predates the current political climate; conservative rejection of climate science is a decades-old phenomenon. It's natural for people to want to disbelieve scientific results that would lead to them making less money. And there's always a tribal element to the arguments over how to use scientific results; conservatives accurately perceive that people who hate capitalism tend to over-emphasize scientific results that imply capitalism is fundamentally destructive.

But I think things are worse now than before. The right's distrust of science has reached knee-jerk levels. And on the left, more people seem willing to embrace things like anti-vax, and to be overly skeptical of scientific results saying GMOs are safe. 

Why is this happening? Well, tribalism has gotten more severe in America, for whatever reason, and tribal reality and cultural cognition are powerful forces. But I also wonder whether a few of science's wounds might be self-inflicted. The incentives for academic researchers seem like they encourage a large volume of well-publicized spurious results. 

The U.S. university system rewards professors who have done prestigious research in the past. That is what gets you tenure. That is what gets you a high salary. That is what gets you the ability to choose what city you want to work in. Exactly why the system rewards this is not quite clear, but it seems likely that some kind of signaling process is involved - profs with prestigious research records bring more prestige to the universities where they work, which helps increase undergrad demand for education there, etc. 

But for whatever reason, this is the incentive: Do prestigious research. That's the incentive not just at the top of the distribution, but for every top-200 school throughout the nation. And volume is rewarded too. So what we have is tens of thousands of academics throughout the nation all trying to publish, publish, publish. 

As the U.S. population expands, the number of undergraduates expands. Given roughly constant productivity in teaching, this means that the number of professors must expand. Which means there is an ever-increasing army of people out there trying to find and report interesting results. 

But there's no guarantee that the supply of interesting results is infinite. In some fields (currently, materials science and neuroscience), there might be plenty to find, but elsewhere (particle physics, monetary theory) the low-hanging fruit might already have been picked. If there are diminishing returns to overall research labor input at any point in time - and history suggests there are - then this means the standards for publishable results must fall, or America will be unable to provide research professors to teach all of its undergrads.
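Here's a toy illustration of that arithmetic, with invented numbers and an invented concave "discovery" function - nothing here is an estimate, it just shows how diminishing returns force standards down:

```python
import math

# Toy model: genuinely interesting findings grow like the square root of
# research labor (a stand-in for diminishing returns), while each professor
# is still expected to publish one paper per year.
for researchers in [1_000, 10_000, 100_000]:
    interesting_findings = 40 * math.sqrt(researchers)  # assumed discovery function
    papers_required = researchers                       # one publication per researcher
    share_interesting = interesting_findings / papers_required
    print(f"{researchers:>7} researchers: "
          f"{share_interesting:.0%} of the required papers can report genuinely new results")
```

With 1,000 researchers there are more than enough interesting results to go around; with 100,000, most of the required papers have to be filled with something else.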

This might be why we have a replication crisis in psychology (and a quieter replication crisis in medicine, and a replication crisis in empirical economics that no one has even noticed yet). It might be why nutrition science changes its recommendations every few months. It might be a big reason for p-hacking, data mining, and specification search. It might be a reason for the proliferation of untestable theories in high-energy physics, finance, macroeconomics, and elsewhere. And it might be a reason for the flood of banal, jargon-drenched unoriginal work in the humanities.

Almost every graduate student and assistant professor I talk to complains about the amount of bullshit that gets published and popularized in their field. Part of this is the healthy skepticism of science, and part is youthful idealism coming into conflict with messy reality. But part might just be low standards for publication and popularization. 

Now, that's in addition to the incentive to get research funding. Corporate sponsorship of research can obviously bias results. And competition for increasingly scarce grant money gives scientists every incentive to oversell their results to granting agencies. Popularization of research in the media, including overstatement of results, probably helps a lot with that.  

I recall John Cochrane once shrugging at bad macro models, saying something like "Well, assistant profs need to publish." OK, but what's the impact of that on public trust in science? The public knows that a lot of psych research is B.S. They know not to trust the latest nutrition advice. They know macroeconomics basically doesn't work at all. They know the effectiveness of many pharmaceuticals has been oversold. These things have little to do with the tribal warfare between liberals and conservatives, but I bet they contribute a bit to the erosion of trust in science. 

Of course, the media (including yours truly) plays a part in this. I try to impose some quality filters by checking the methodologies of the papers I report on. I'd say I toss out about 25% of my articles because I think a paper's methodology is B.S. And even for the ones I report on, I try to mention important caveats and potential methodological weaknesses. But this is an uphill battle. If a thousand unreliable results come my way, I'm going to end up treating a few hundred of them as real.

So if America's professors are really being incentivized to crank out crap, what's the solution? The obvious move is to decouple research from teaching and limit the number of tenured research professorships nationwide. This is already being done to some extent, as universities rely more on lecturers to teach their classes, but maybe it could be accelerated. Another option is to use MOOCs and other online options to allow one professor to teach many more undergrads. 

Many people have bemoaned both of these developments, but by limiting the number of profs, they might help raise standards for what qualifies as a research finding. That won't fully restore public trust in science - political tribalism is way too powerful a force - but it might help slow its erosion. 

Or maybe I'm completely imagining this, and academic papers are no more full of B.S. than they ever were, and it's all just tribalism and excessive media hype. I'm not sure. But it's a thought, anyway.

Friday, December 09, 2016

Is Twitter a dystopian technology?

"What will the apocalypse look like? The answer, to use a term generally understood but the specifics of which you cannot imagine, and which this document will attempt to describe, is 'warfare'."
- William Bell, Fringe


I've been wondering whether Twitter is a true dystopian technology. Meaning, a technology that makes each user better off, but makes the world worse off as a whole. 

Note that you don't need a dystopian technology to create a dystopia. Joseph Stalin, Mao Zedong, and plenty of others managed to create dystopias using much the same technologies used in much happier, freer places. The question is whether some technologies, merely by existing, push the world toward a bad equilibrium.

How could a technology that's good for each individual be bad for the world? Externalities. For example, suppose that the only fuel we had was so horribly polluting that it would destroy the planet if we used it for even just a few decades. But each individual's choice of whether to use the fuel wouldn't be enough to tip the balance away from or toward planetary destruction. So without some kind of outside authority banning the use of the fuel, or some massive unprecedented outpouring of altruism, the world would be doomed. In fact, if global warming does destroy human civilization, fossil fuels will turn out to have been a dystopian technology.
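A minimal way to see the logic is a toy payoff calculation with invented numbers: using the fuel is individually better no matter what anyone else does, but if everyone uses it, everyone ends up worse off than if nobody did.

```python
# Toy n-player externality: using the fuel gives a private benefit of 1, but
# imposes a small climate cost on each of 1,000 people per user.
n = 1_000
private_benefit = 1.0
damage_per_user = 0.003   # cost each additional user imposes on every person

def my_payoff(i_use: bool, others_using: int) -> float:
    total_users = others_using + (1 if i_use else 0)
    return (private_benefit if i_use else 0.0) - damage_per_user * total_users

# Using the fuel is individually better whether nobody or everybody else uses it...
print(my_payoff(True, 0) > my_payoff(False, 0))              # True
print(my_payoff(True, n - 1) > my_payoff(False, n - 1))      # True
# ...but universal use leaves everyone worse off than universal restraint.
print(my_payoff(True, n - 1), "vs", my_payoff(False, 0))     # -2.0 vs 0.0
```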


What are some other examples of dystopian technologies? I'd say the jury is still out on the nuclear bomb. If nuclear deterrence dramatically reduces war and no one ends up using nukes, then it was a good technology. But if nuclear war eventually blows up civilization, it was a dystopian technology. Obviously, if weapons of mass destruction got cheap enough, they'd put society in untenable danger. I'm pretty sure that the ending of The Stars My Destination, in which (SPOILER) a guy runs around tossing out weapons of mass destruction to everyone on the street, won't turn out well.

Another possible example might be the stirrup. Stirrups enabled cavalry to become peerless warriors. For at least a thousand years, cavalry ruled the battlefield, and nomadic tribes from the Huns to the Mongols raided and/or conquered every settled civilization. Not coincidentally, during this millennium (usually called the Middle Ages), human wealth and development didn't advance very much. Then again, when global trade eventually linked the world, horses were an important piece of the equation. So without knowing whether globalization and science would have been possible without stirrups, it's hard to tell whether stirrups made the world worse off. 


These examples illustrate how hard it is to do a cost-benefit accounting for a technology, even with the full benefit of millennia of hindsight. So I don't expect the question I ask in this post to ever have a satisfactory answer. But in any case, it's interesting to ponder.

Is Twitter a dystopian technology? Obviously, Twitter is worth it for hundreds of millions of individuals to use. The costs - harassment, bad feelings, the risk of accidentally tweeting something that harms your career - are considerable, and no doubt have contributed to the company's user growth stagnation. But for many users, those costs are still worth paying. 

But what about the externalities of Twitter? Well, one of the biggest negative externalities is war. Twitter's format is very conducive to firing off quick thoughts without carefully considering the consequences. The brevity of tweets also makes them easily subject to misinterpretation. Imagine if President Trump fired off an intemperate tweet that led to a nuclear war with China or Russia. It's quite possible that if he were forced to use a longer-format medium like Facebook, the risk of that happening would be much lower. Already Trump has used Twitter to denounce China's military buildup in the South China Sea.


So there's that.

A more insidious way that Twitter could generate negative externalities is to contribute to political polarization. In an earlier post, I described why Twitter is conducive to bad feelings, aggression, perceived aggression, and the creation of aggressive online mobs. If you don't believe me when I say Twitter is a vicious jungle, look at what happened to the Microsoft chatbot that taught itself to tweet by watching other people:


In brief, the features of Twitter that make it conducive to fighting are:

1. Limited length of replies. 

This is the big one. Like the faster-than-light internet in A Fire Upon the Deep, Twitter allows only short, pithy replies. Short replies have no room for self-deprecation, qualifiers, jokes, praise, caveats, or any of the other social lubricants that make debates friendly and collegial. Typically, the only way to get the point across in 140 characters is to be blunt. 

In addition, short replies make it harder to establish a voice when writing; that forces readers to project an imagined voice onto each tweet. Some people will assume the best, but others will assume the worst - that the writer of the tweet is being rude, sarcastic, or aggressive. And since an (assumed) unfriendly tone does more social damage than an (assumed) friendly tone heals, the net result of random errors in tone-interpretation will be to create bad feelings, including resentment, offense, threat perception, and a feeling of persecution.

2. Retweets that make replies impossible.

When I retweet someone, all of my followers can see both her tweet and my added commentary. But the author of the tweet cannot issue a reply that all of my followers automatically see. She can reply, but readers will only see her reply if they scroll down through the thread. 

So imagine if someone tweets "I think Democrats should try to appeal to some Trump voters," and I quote this tweet, adding the comment "This person thinks we should coddle racists." All 46,700 of my followers can see my uncharitable interpretation of her statement. But if she wants to reply "No that's not what I meant at all," then my 46,700 followers will only see her reply if they click on my tweet and scroll through the thread. Most will probably not do this, since it takes time and effort. Instead, they will probably see my uncharitable interpretation, assume it's true, and feel contempt for the person who wrote the original tweet. Also, my followers will now know her handle, so they will be able to tweet mean things directly to her ("Asshole, please delete your account", etc.).

I see this happen all the time.

3. Open mentions.

On Facebook, only people I approve can comment on my posts. I can make posts open, or I can limit replies to only my personal friends. On Twitter, on the other hand, anyone can automatically reply directly to anyone else, unless they have been blocked or muted. Since mentions are the main way that people talk to each other on Twitter, this means that if you want to talk to people on Twitter, you have no choice but to scroll through the replies of anyone who decides to talk to you. There's just no way not to. In addition, there's no way to untag yourself from other people's tweets, as you can do on Facebook.



4. Anonymity

Twitter, like discussion forums - but unlike Facebook - thrives on anonymity. This of course means the platform has lots of bots. But more importantly, as we all know, online anonymity allows people to blow off steam, and reduces people's incentive to be friendly and diplomatic.


These four basic features of Twitter's technology encourage aggressive discourse, bad feelings, harassment, and constant rhetorical combat. That's certainly a cost to the user, but it might also encourage partisanship. Each person, in order to feel emotionally safe from the constant attacks that he feels like he's getting on Twitter, might be pushed to join an ideological group - like a prison gang, for protection (this excellent analogy was originally made by Charles Johnson, the nutty right-wing reporter).

Ideological polarization creates few costs for the user. It really doesn't make my online experience much worse to join the BernieBros, or the Alt-Right, or the Social Justice Warriors, or GamerGate, or the Libertarians, or whoever. I sacrifice a little bit of opportunity to say maverick, unorthodox things, and in return I get a whole bunch of people who have my back and are willing to beat off waves of attackers on a daily basis. 


But ideological polarization might be very costly for society. Eventually, if no one listens to the other side, people can come to believe that everyone in the opposing tribe hates them and wants to destroy them. The result can be large-scale social strife, civil war, or simply long-term political dysfunction. 

Many people have remarked that Twitter wielded a huge amount of influence in the 2016 presidential election. That election generated an unprecedented level of bad feelings. You can chalk this up to the candidates themselves, but it seems very likely to me that the constant, unending, bitter Twitter wars were a big part of what made the campaign so unbearable. I myself would occasionally "detox" from campaign Twitter for days at a time, and find myself feeling much better about U.S. politics, about both candidates, and about life and the world in general.

But note that political tribalism can also dramatically raise the cost of quitting Twitter. Even if the online conflict is grueling and unpleasant - and even if it would make everyone happier if the conflict would just stop, or at least quiet down - one can't abandon the battlefield to the enemy. Once you join and commit to a Twitter tribe, you can't just check out, any more than you can desert your platoon in the middle of a war. 


So it's possible that Twitter's existence is adding significantly to America's already worrying polarization problem - causing American society to turn into a new sort of war zone. And if this is indeed the case, it's the 4 aforementioned basic features of the technology that are doing it. 

Maybe the people who run Twitter are smarter than me, but personally I just don't see a way to get rid of any of those 4 features without fundamentally changing the product. And this is a product we know makes money, has a strong network effect, and fills an important niche in the media. Twitter's problems seem inherent to any Twitter-like medium, and society seems like it will always have a need for a Twitter-like medium as long as it's technologically feasible.

So to sum up: Twitter is a great tool, and creates lots of social value. Personally, I have no intention of quitting, and most of the existing users don't seem likely to quit either. But it's possible that by exacerbating polarization and fostering large-scale social conflict, Twitter will end up destroying more social value than it creates.

If this is the case - if I haven't exaggerated Twitter's social costs or underrated its benefits - what's the solution? The government could just ban Twitter, or heavily police it like in China. That seems like it would have very high costs for society, since a government that does that has basically chucked free speech out the window (just look at China). So I think that society's only real way out of Twitter Hell is to innovate its way out. Solve a dystopian tech problem by inventing utopian tech - a new product that fulfills the useful social role of Twitter but avoids the negative externalities, by somehow fostering peaceful, friendly, constructive dialogue. Or modify Twitter in ways that fix its problems, which of course I haven't thought of yet. If stirrup-using horse nomads keep conquering civilization, invent the gun. 

Which is basically a cop-out. "Invent magical new technology that solves the problem." Check. But since the march of technology only goes forward, if you stumble into a dystopian cul-de-sac, there's generally no way out but forward into the unknown.


Tuesday, December 06, 2016

More about the Econ 101 theory of labor markets


I've received many good responses to my post about the "Econ 101" (undifferentiated competitive partial equilibrium short run) theory of labor markets. But there are a couple responses that I got many times, which I think are worth discussing in detail.

To recap: I pointed out that the Econ 101 theory can't simultaneously explain the small labor market responses to both immigration shocks and minimum wage increases. I said this implies that the Econ 101 theory isn't a good way to think about labor markets, and that instead we should pick another go-to mental model - general equilibrium, or search and matching theory, or monopsony, etc.

Two common responses I got were:

1. "Immigration doesn't just shift labor supply; it shifts labor demand too."

2. "Labor is probably very heterogeneous, so one S-D graph shouldn't have to fit both these facts at once."

The first response is a very good one. I like it a lot. What it means is that we should always think about labor markets in terms of general equilibrium, not partial equilibrium. 

An essential part of partial equilibrium thinking (or as David Andolfatto likes to call it, "Marshall's Scissors") is that the supply and demand curves can shift independently of each other. This isn't apparent just from drawing the graphs, but if you think about it, it's obvious: if a shock - a storm, a policy change, a change in the price of a substitute or complement, etc. - shifts both curves at once, the effects on price and quantity can't be pinned down without knowing the size of each shift. Econ 101 doesn't allow you to reason from a price change, but it should allow you to reason from a price change and a quantity change.

Immigration shifts labor demand because of general equilibrium effects. The new immigrants demand goods and services, and companies will move to or start up in the city to take advantage of that demand and of the newly available labor. That pushes the price of labor back up toward where it was before the immigrants came. If that shift happens quickly, studies won't show any sizable dip in wages from the influx of immigrants. General equilibrium is harder to think about than partial equilibrium (quick mental exercise: what are the general equilibrium effects of a minimum wage hike?), but in the case of labor, the evidence shows that the simpler theory just won't cut it.
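Here's a minimal numerical sketch of that story, using linear curves and made-up parameters: when immigration shifts labor supply and labor demand out together, employment rises a lot but the wage barely moves, so a naive partial-equilibrium reading would conclude the supply shock was small.

```python
# Linear labor market: demand Ld = a - b*w, supply Ls = c + d*w (w = wage).
# Solve for the market-clearing wage before and after an immigration shock
# that shifts BOTH curves out (more workers, and more local demand for labor).
def equilibrium(a, b, c, d):
    w = (a - c) / (b + d)       # set Ld = Ls and solve for the wage
    return w, a - b * w         # (wage, employment)

a, b, c, d = 200.0, 5.0, 20.0, 4.0
w0, L0 = equilibrium(a, b, c, d)

supply_shift = 30.0     # immigrants add to labor supply...
demand_shift = 27.0     # ...and firms expand or enter, shifting labor demand (assumed nearly as large)
w1, L1 = equilibrium(a + demand_shift, b, c + supply_shift, d)

print(f"wage: {w0:.2f} -> {w1:.2f}")         # 20.00 -> 19.67: tiny wage change
print(f"employment: {L0:.1f} -> {L1:.1f}")   # 100.0 -> 128.7: big quantity change
```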

OK, on to response #2. It might be the case that immigrants don't compete directly with native-born workers, but fill economic niches that previously were mostly left unfilled. It might be that if we sliced the data in just the right way, we could find a slim subgroup of people whose wages are clobbered by immigration. In fact, George Borjas has tried to do exactly this. But any properly trained empirical researcher will recognize this as data mining. 

Meanwhile, some people claim that minimum wage and immigration affect two very different segments of the labor market. That's certainly possible, but both types of studies are usually about low-wage, low-education workers. Many limit themselves to teenagers, and find the same thing. How much more granularity do you want?

The problem with invoking extreme heterogeneity to explain seemingly incommensurate labor market facts is that it turns predictive theory into useless just-so stories. Assuming unobservable characteristics - or mucking around in the data till you find some that seem to work - introduces free parameters into the model, meaning it can never really be compared with data. Heterogeneity assumptions are just the proverbial epicycles. Without some assumptions about which segment of the labor market the theory applies to, the Econ 101 theory becomes totally useless.
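The free-parameter problem is the same one that bedevils overfitting anywhere. Here's a generic toy demonstration (nothing specific to labor markets): a model with as many adjustable knobs as data points "explains" pure noise perfectly in sample, and therefore tells you nothing.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 6)
y = rng.normal(size=6)             # pure noise: there is no real pattern to find

# A 5th-degree polynomial has as many free parameters as data points,
# so it fits the noise exactly in sample...
coefs = np.polyfit(x, y, deg=5)
print(np.allclose(np.polyval(coefs, x), y))          # True: perfect in-sample fit

# ...but it says nothing about fresh draws from the same (noise) process.
y_new = rng.normal(size=6)
print(np.mean((np.polyval(coefs, x) - y_new) ** 2))  # large out-of-sample error
```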

So while some degree of labor market heterogeneity is real, the more it gets invoked, the more it weakens the overall theory. Theories should always be penalized for sticking in more parameters.

Anyway, those were two thoughtful and interesting categories of responses. I also got a few that were...well...a little less deep.  ;-)

Saturday, December 03, 2016

An econ theory, falsified


What does it mean to "falsify" a theory? Well, you can't falsify a theory globally - even if it's shown to be false under some conditions, it might hold true far away or in the future. And almost every theory is falsifiable to some degree of precision, since almost every theory is just an approximation of reality.

What falsification really means - or should mean, anyway - is that a theory is shown to not work as well as we'd like it to under a well-known set of conditions. So since people have different expectations for a theory - some demand that theories work with high degrees of quantitative precision, while others only want them to be loose qualitative guides - whether a theory has been falsified will often be a matter of opinion.

But there are some pretty clear-cut cases. One of them is the "Econ 101" theory of the labor market. This is a model we all know very well - it has one labor supply curve and one labor demand curve, one undifferentiated type of labor and one single wage.

OK, so what are some empirical things we know about labor markets? Here are two stylized facts that, while not completely uncontroversial, are pretty one-sided in the literature:

1. A surge of immigration does not have a big immediate negative impact on wages.

2. Modest minimum wage hikes do not have a big immediate negative impact on employment.

George Borjas disputes the first of these, but he's just wrong. A few economists (and MANY pundits) dispute the second, but the consensus among academic economists is pretty solid.

The first fact alone does not falsify the Econ 101 theory of labor markets. It could be the case that short-run labor demand is simply very elastic. Here's a picture of how that would work:


Since labor demand is elastic, the supply shift from a bunch of immigrants showing up in the labor market doesn't have a big effect on wages in this picture.

BUT, this is impossible to reconcile with the second stylized fact. If labor demand is very elastic, minimum wage should have big noticeable negative effects on employment (represented by the amount of green between the blue and red lines on this graph):


By the same token, if you try to explain the second stylized fact by making both labor supply and demand very inelastic, then you contradict the first stylized fact. You just can't explain both of these facts at the same time with this theory. It cannot be done.
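To see the bind quantitatively, here's a back-of-the-envelope sketch using constant elasticities and made-up numbers (a crude log-linear approximation, not an estimate): whichever elasticities you pick, one of the two stylized facts comes out badly wrong.

```python
# Back-of-the-envelope comparative statics with constant (positive) elasticities.
def labor_market(e_demand, e_supply):
    supply_shock = 0.10     # immigration raises labor supply by 10%
    min_wage_hike = 0.10    # minimum wage set 10% above the market-clearing wage

    # Shifting supply out along the demand curve lowers the wage by roughly
    # the shift divided by the sum of the elasticities.
    wage_drop = supply_shock / (e_demand + e_supply)
    # With a binding wage floor, employment is read off the demand curve.
    employment_drop = e_demand * min_wage_hike

    print(f"demand elasticity {e_demand:.1f}, supply elasticity {e_supply:.1f}: "
          f"immigration cuts wages {wage_drop:.1%}, "
          f"minimum wage cuts employment {employment_drop:.1%}")

labor_market(e_demand=5.0, e_supply=0.5)   # elastic demand: fact 1 fits, fact 2 fails badly
labor_market(e_demand=0.3, e_supply=0.3)   # inelastic curves: fact 2 fits, fact 1 fails badly
```

With elastic demand, immigration cuts wages by under 2% but a 10% minimum wage hike cuts employment by 50%; with inelastic curves, the minimum wage costs 3% of jobs but immigration cuts wages by 17%. There is no single pair of curves that keeps both effects small.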

So the Econ 101 theory of labor supply and labor demand has been falsified. It's just not a useful theory for explaining labor markets in the short term (the long term might be a different story). It's not a good approximation. It doesn't give good qualitative intuition. And it's especially bad for explaining the market for low-wage labor, which is the market that most of the aforementioned studies concentrate on.

What is a better theory of the labor market? Maybe general equilibrium (which might say that immigration creates its own demand). Maybe a model with imperfect competition (which might say that minimum wage reduces monopsony power). Maybe search and matching theory (which might say that frictions make all short-term effects pretty small). Maybe a theory with very heterogeneous types of labor. Maybe something else.

But this theory, this simple Econ 101 short-run partial-equilibrium price theory of undifferentiated labor, has been falsified. If econ pundits, policy advisors, and other public-facing econ folks were scientifically minded, we'd stop using this model in our discussions of labor markets. We'd stop casually throwing out terms like "labor demand" without first thinking very carefully about how that concept should be applied. We'd stop using this framework to think about other policies, like overtime rules, that might affect the labor market.

Sadly, though, I bet that we will not. We will continue using this falsified theory to "organize our thoughts" - i.e., we'll keep treating it as if it were true. So we will continue to make highly questionable policy recommendations. The fact that this theory is such a simple, clear, well-understood tool - so good for "organizing our thinking", even if it doesn't match reality - will keep it in use long after its sell-by date. That's what James Kwak calls "economism", and I call "101ism". Whatever it's called, it's not very scientific.


Updates

In a follow-up post, I think about a couple of common responses to this post.

Tuesday, November 29, 2016

You and whose army?


"Political power grows out of the barrel of a gun"
- Mao Zedong


I recently read a very good history of the Spanish Civil War, entitled -- appropriately enough -- "The Spanish Civil War," by Paul Preston.


You should get it and read it, especially if you live in the U.S. Everyone talks about 1930s Germany as a parallel for what the U.S. is going through right now, but I'm pretty sure 1930s Spain is by far the better analogy.

Although you should read the book, let me try to give a brief summary. 

By the 1930s, Spain had been in decline for about three centuries. Most of its empire had been lost, the last big pieces in the Spanish-American War. It was economically backward and highly unequal. Big landowners controlled the economy, and the Catholic Church controlled the culture. Lots of people wanted this to change, and joined various leftist movements - communists, socialists, and anarchists. In response, lots of other people joined right-wing movements - fascists, religious fundamentalists, and monarchists. A shaky democracy was established in 1931. At first the leftists won, and implemented some reforms, but two years later, the rightists won and reversed all the reforms. Then the leftists won again, and the rightist-dominated military decided that it was time to stop messing around with all this democracy crap, and launched a coup. 

The coup succeeded in about half the country, and the two halves then proceeded to go to war with each other. The rightists got help from Hitler and Mussolini, the leftists got more halfhearted help from Stalin. Atrocities were committed on both sides, but the rightists were somewhat worse. The leftists fought among themselves, while the rightists were generally unified. The rightists, with greater population, more military veterans, more unification, and more effective outside help, steadily defeated the leftists. They shot hundreds of thousands of people, raped untold numbers of women, and in general terrorized the parts of the country that had supported the leftists. They then maintained a fascist regime for a few decades until people got tired of it, and democracy returned.

OK, so now you know about the Spanish Civil War. How is this a parallel for America?

Like Spain in the 1930s, we have a country geographically divided into "red" and "blue" regions. Like Spain, we are suffering from a relative decline in international power and prestige, as well as deep-rooted economic and institutional dysfunction on many levels. Like Spain, we have an increasingly bitter, intransigent conflict between the right and the left. 

And like Spain, the military leans to the right (though perhaps not quite as much). In America, the right also controls most of the roughly 300 million privately owned guns.

This means that if the U.S. had a civil war along currently existing left-right lines - i.e., Republican voters vs. Democratic voters - the right would win. It would probably win more quickly and decisively than the Spanish right won. This is not just because of military sympathies and gun ownership, of course. The American right has a population advantage among men, who are more likely to fight in war than women. It also has greater organization, being mostly unified by religion (Christianity), race (white), and a shared vision of history. As for foreign intervention, Russia would probably be on the side of the American right, while there is no foreign great power that would obviously intervene to help the American left.

What would be the consequences of a rightist victory in that kind of civil war? Lots and lots of people would die, many more of them on the left than on the right. Nonwhites, religious minorities, and suspected leftist sympathizers would be the victims of many massacres. Right-wing paramilitaries would rape many leftist and minority women, as in Spain. The U.S. economy would crater, hurting red and blue America alike. A dysfunctional, repressive regime would set in, with atrocities probably continuing for decades. The country might break up, or might eventually have a Spain-like democratic restoration, but the U.S. would be a much poorer country and certainly a second-rate global power. (As for me, if I'm still alive, I'll be in Canada or Australia or Japan writing angry, drunken blog posts denouncing the right and lamenting the fall of America - like Pablo Picasso, but without the artistic talent.)

I think many on the intellectual, elite left in America fail to realize this danger, or the probability of this scenario. From most left-leaning intellectuals I see only increasing stridency and demands for ideological purity. I see increasing demands that anyone affiliated with the left denounce the founders of the U.S. as white supremacists, paint American history as one of genocide and atrocity, and see politics mostly through the lens of identity. Even on the center-left, there is an increasing tendency to paint all Republican voters as irredeemable racists, who can only be overcome by the weight of demographic numbers. 

I worry about this stridency. I worry that this attitude, and these tactics, depend crucially on the assumption that we live in a constitutional, democratic regime that is so unshakably stable that raised fists, angry op-eds, and the ballot box will always be able to prevail. I worry that they have forgotten Mao's adage that "power grows out of the barrel of a gun," and that the other side - as the sides are currently drawn - has all the guns.

Does this mean I think the left should buy guns, join the U.S. military en masse, and prepare to win a civil war? Well, I think greater military participation by those on the left wouldn't be a bad thing, for any number of reasons, but overall, no. I think that realistically, there's no way for the American left to reach military parity with the American right in the next few decades. And since a civil war would be so devastatingly bad for everyone in America (as it was for Spain), it should be avoided for the sake of all Americans, not just the prospective losers.

Instead, I think the left should focus on reaching out and broadening its tent. Instead of relentlessly enforcing purity, I think the left should try to win over many of the folks who switched their votes from Obama in 2012 to Trump in 2016. 

If Paul Preston's book has one big weakness, it's that it ignores the normal people who fought for the right in the Spanish Civil War. There is endless discussion of the social conditions that led leftists to take up arms, but when it comes to the right, all of the focus is on Franco and the other military leaders. Yet Spain's right had more of the nation's populace on its side than did the left, and ordinary people joined Franco's army in large numbers. What drove these people to fight for Franco? Was it religion and tradition? Economic fear of the power of organized labor? I'll have to read more books to find out. But the point is, something drove all those people to support Franco. 

Surely there are levers of persuasion, coalition, and rhetoric that could have been employed to bring some of those Spaniards over to the side of the left. And surely there are levers of persuasion, coalition, and rhetoric that could prevent large chunks of conservative America from supporting a rightist putsch, should it come to that. If the part of the American right willing to fight a civil war could be limited to the racist "alt-right", then those who stood for democracy, constitutionalism, and the continued existence of a free and lawful republic would surely prevail.

Friday, November 25, 2016

Are current trends in econ methodology just fads?


The Economist has an article about the booms in machine learning and randomized controlled trials (RCTs). The article is written in a snarky tone, and mostly talks around the question of whether these methodologies are overhyped, but overall it seems to be making a case that they are:
[J]udging by the tendency of those writing economic papers to follow the latest fashion, a “herd” would be [the] best [collective noun to describe economists]. This year the hot technique is machine learning, using big data... 
Economists are prone to methodological crazes...[N]ew methods also bring new dangers; rather than pushing economics forward, crazes can lead it astray, especially in their infancy... 
A paper by Angus Deaton, a Nobel laureate and expert data digger, and Nancy Cartwright, an economist at Durham University, argues that randomised control trials, a current darling of the discipline, enjoy misplaced enthusiasm... 
Machine learning is still new enough for the backlash to be largely restricted to academic eye-rolling.
As its main piece of evidence for the faddishness of economics, the article presents the following graph:


To me, this graph (which is just for NBER working papers) shows the opposite of what the article claims. Looking at the chart, I see a bunch of more-or-less monotonically increasing lines. Remember that the y-axis here is percent of total papers, so if these techniques are fads, we'd expect these lines to mean-revert. Instead, almost all the lines just go up and up for 15 to 30 years. To me, that says most of these things are not overhyped fads - at least, not yet.

There are two possible exceptions. Lab experiments had a brief downturn for a few years starting in around 2002, though they shortly resumed their upward climb, and are now way above their 2001 peak. DSGE models have been decreasing slowly since 2010, though they're still strongly up over the last decade.

Given the seeming non-faddishness of the lines on this chart, a better hypothesis would seem to be that these new techniques are driven by new technology. The internet and computerization have made it much easier to collect, transfer, and analyze data. Processing power and software packages like Dynare have made it much easier to numerically solve DSGE models. These are factors that the Economist article does not consider.

If new technology, not academic herd behavior, is responsible for most of the methodological trends of the last 30 years, it implies that the changes are here to stay. New technology doesn't go away (unless you live in an RBC model, which we don't). It's possible that the boom in empirical methods in general is working through a backlog of old theories that were not testable until recently, and that the empirical wave will subside once that task is complete. But that's very different from empirical techniques being fads.

(I do think there is a possibility that DSGE is somewhat of a fad, and that the decline in the last 5 years is a new trend instead of a blip. This is partly because of definitions. OLG models are dynamic general equilibrium models, and many are stochastic, but they aren't called "DSGE". But I also think DSGE might decline because theory in general is declining.)

In any case, the Economist article does not marshal any strong arguments that machine learning has been overdone. Its only actual evidence comes from the book Weapons of Math Destruction. That book is about how algorithmic decision-making can have unintended, morally dubious consequences for society. It has little to do with the question of whether machine learning techniques are useful for econometrics. The book itself is important and well-written, but the Economist article's reference to it seems random and out of place.

As for RCTs, the Economist's argument against them comes entirely from the famous paper by Angus Deaton and Nancy Cartwright. It will be interesting to see whether this argument eventually stems the tide of RCT usage. But I highly doubt that RCTs will go away any time soon, since for many questions there is simply no other technique in existence that can provide credible answers. RCTs have, importantly, never gone away in medicine.

So I don't think the Economist article gives us much reason to believe that machine learning and RCTs are faddish. Yes, it's true that economists (like everyone) don't generally use new tools optimally when they first come out, and learn better ways to use them as time goes on. Yes, it's true that methodologies can influence which questions get asked (the "streetlight" problem), and that open-minded economists should try to break out of the mental boxes their methodologies create. But it's not yet appropriate to conclude that new empirical techniques represent fleeting fads, as opposed to real progress.

Monday, November 21, 2016

Steve Bannon and the Last Crusade


I heavily doubt Steve Bannon is the anti-Semite many on the left now claim he is. The claim is mostly based on one thing that his ex-wife said he said, about not wanting to send his kids to school with whiny Jewish girls. It's hearsay, about one thing he supposedly said in private years ago, which isn't even that anti-Semitic. Bannon has also publicly stated that he has "zero tolerance" for the anti-Semitic elements of the alt-right. (This Breitbart article, by David Horowitz, is sometimes cited as evidence of anti-Semitism, but it's actually just criticizing Bill Kristol for not being sufficiently pro-Israel!)

I also hear a lot of claims that Bannon is a white nationalist. Some are based on material he allowed to be published at Breitbart (e.g., this), but many seem to rely on one thing he said while interviewing Donald Trump, when he worried that too many immigrant CEOs would erode "civic society." That's not something I agree with, since I'm strongly in favor of skilled immigration. But it certainly doesn't peg him as a white nationalist, especially when he vigorously, publicly, and explicitly denies being one. So if you think he's B.S.-ing about that, your case will have to rely entirely on Breitbart articles. (UPDATE: Since this post was written, we've learned a lot more about Bannon, but that's material for another post. The remainder of this post stands on its own...)

So what does Bannon believe in? The only lengthy articulation of his worldview that I know of comes from this 2014 speech. As laid out there, his worldview seems to rest on three main pillars:

1. The fruits of capitalism should be more broadly distributed.

2. The West is in a war with radical Islam and must prevail.

3. Secularism contributes to the weakness of the West.

Here's where he talks about Pillar #1, his economic philosophy:
[C]apitalism really generated tremendous wealth. And that wealth was really distributed among a middle class, a rising middle class, people who come from really working-class environments... 
But there’s a strand of capitalism today — two strands of it, that are very disturbing...One is state-sponsored capitalism...The second form of capitalism that I feel is almost as disturbing, is what I call the Ayn Rand or the Objectivist School of libertarian capitalism...It is a capitalism that really looks to make people commodities, and to objectify people...So I think the discussion of, should we put a cap on wealth creation and distribution?... 
The central thing that binds [my movement] together is a center-right populist movement of really the middle class, the working men and women in the world who are just tired of being dictated to by what we call the party of Davos...[T]here are people in New York that feel closer to people in London and in Berlin than they do to people in Kansas and in Colorado, and they have more of this elite mentality that they’re going to dictate to everybody how the world’s going to be run.
This "center-right populism" is basically a cross between FDR, Bernie Sanders, and Ross Douthat. Bannon also lambastes "crony capitalism", and says that he thinks a Judeo-Christian ethic facilitates a more equitable form of capitalism.

Bannon criticizes secularism, which is pretty standard for religious conservatives, and which also reminds me of Ross Douthat. In fact, Bannon's ideas sound a lot like the "reform conservatism" that had been making the intellectual rounds before Trump showed up on the scene.

But the one place where Bannon comes out very strongly against an external enemy is when he talks about radical Islam:
[W]e’re at the very beginning stages of a very brutal and bloody conflict...the people in this room, the people in the church, [need to] bind together and really form what I feel is an aspect of the church militant...to fight for our beliefs against this new barbarity that’s starting... 
[I]t’s a very unpleasant topic, but we are in an outright war against jihadist Islamic fascism. And this war is, I think, metastasizing far quicker than governments can handle it... 
[L]ook at what’s happening in ISIS...That war is expanding and it’s metastasizing to sub-Saharan Africa. We have Boko Haram and other groups that will eventually partner with ISIS in this global war, and it is, unfortunately, something that we’re going to have to face, and we’re going to have to face very quickly...[W]e’re now, I believe, at the beginning stages of a global war against Islamic fascism... 
I believe you should take a very, very, very aggressive stance against radical Islam...If you look back at the long history of the Judeo-Christian West struggle against Islam, I believe that our forefathers kept their stance, and I think they did the right thing. I think they kept it out of the world, whether it was at Vienna, or Tours, or other places… It bequeathed to us the great institution that is the church of the West.
Bannon's view is that radical Islam is attacking the West, and must be defeated by a united Judeo-Christian West.

This is part of a very, very long strain of thought. Europeans and Middle Easterners have been fighting each other for basically all of recorded history. Two heavily populated regions, mostly but not completely separated by natural barriers, naturally tend to come into conflict at their borders. The millennium of wars between Christendom and the Islamic Umma was actually a sequel to the wars between the Greco-Romans and the Persians, and maybe even to the Trojan War and the Late Bronze Age Collapse. So this is a clash of civilizations that has been going on essentially forever.

Bannon's call for a "church militant" and a "church of the West" is broadly similar to the Holy Leagues that fought the Ottomans in the 1500s. It's not a call to invasion, like the original Crusades, but rather a defensive move. Bannon is calling on the Catholic Church in particular, but also Christianity generally, Western capitalism, and the other unifying institutions of the West, to act as motivating forces in this struggle.

This is perfectly understandable. Al-Qaeda killed thousands of innocent American civilians on 9/11, and carried out a bunch of other smaller attacks on the West. ISIS has attacked the West a few times, and has horrified the world with its gruesome videos. Barbaric indeed.

But I believe that Bannon fundamentally misunderstands what's going on with radical Islam. Some of the malign energy of al-Qaeda, ISIS, and other radical Islamic groups has been directed against the West and against Christians, yes. But most of it has been directed at other Muslims in Muslim countries. Only a very small part of what we're witnessing is a continuation of the eternal clash between Europe and the Middle East. Most of it is an internal civil war within the Islamic Umma.

Let's look at the main wars currently being fought by radical Islamic forces. These are:

  • Syrian Civil War (~470,000 dead)
  • 2nd Iraqi Civil War (~56,000 dead)
  • Boko Haram Insurgency (~28,000 dead)
  • War in Afghanistan (~126,000 dead)
  • Somali Civil War (~500,000 dead)
  • War in Northwest Pakistan (~60,000 dead)
  • Libyan Civil War (~14,000 dead)
  • Yemeni Civil War (~11,000 dead)
  • Sinai Insurgency (~4,500 dead)

This is a lot of dead people - maybe about 2 million in all, counting all the smaller conflicts I didn't list. But almost all of these dead people are Muslims - either radical Islamists, or their moderate Muslim opponents. Compare these death tolls to the radical Islamist terror attacks in the West: 9/11 killed about 3,000 people, and the ISIS attacks in Paris killed 130. The death toll in the West from radical Islam has been roughly three orders of magnitude smaller than the death toll in the Muslim world.
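As a back-of-the-envelope check on that comparison - using the rough figures listed above, the "~2 million" total, and an assumed allowance of about a thousand additional deaths for the many smaller attacks in the West not mentioned here - the ratio works out to a factor of several hundred:

```python
import math

# Rough arithmetic behind the orders-of-magnitude comparison. The war figures
# are the estimates listed above; the Western figure is 9/11 plus the Paris
# attacks plus an assumed ~1,000 deaths for other, smaller attacks in the West.
listed_war_deaths = sum([
    470_000,   # Syrian Civil War
    56_000,    # 2nd Iraqi Civil War
    28_000,    # Boko Haram Insurgency
    126_000,   # War in Afghanistan
    500_000,   # Somali Civil War
    60_000,    # War in Northwest Pakistan
    14_000,    # Libyan Civil War
    11_000,    # Yemeni Civil War
    4_500,     # Sinai Insurgency
])
war_deaths_total = 2_000_000                 # "~2 million" incl. smaller conflicts
west_deaths = 3_000 + 130 + 1_000            # 9/11 + Paris + rough allowance (assumed)

ratio = war_deaths_total / west_deaths
print(f"listed wars alone: ~{listed_war_deaths:,} deaths")
print(f"ratio ~ {ratio:.0f}x, about {math.log10(ratio):.1f} orders of magnitude")
# prints roughly: listed wars alone: ~1,269,500 deaths
#                 ratio ~ 484x, about 2.7 orders of magnitude
```

Whether you call that "roughly three orders of magnitude" or "a factor of several hundred," the qualitative point is the same: the overwhelming majority of the victims are in the Muslim world.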

Three orders of magnitude is an almost inconceivable difference in size. What it means is that only a tiny, tiny part of the wars of radical Islam is bleeding over into the West. What we're seeing is not a clash of civilizations, it's a global Islamic civil war. The enemy isn't at the gates of Vienna - it's at the gates of Mosul, Raqqa, and Kabul.

And radical Islam is losing the global Islamic civil war. In Syria and Iraq, ISIS is losing. In Nigeria, Boko Haram is losing. In all of these wars except possibly Afghanistan, radical Islamic forces are being beaten by more moderate Muslim forces.

Sometimes that's because of Western aid to the moderates. But much of it is simply because a medievalist regime holds very, very little appeal for the average Muslim in any country. Practically no one wants to live under the sadistic, totalitarian control of groups like ISIS. These groups are fierce, but their manpower is small and their popular support is thin everywhere.

So I think Bannon should relax. Radical Islam will punch itself out. It's a brief, violent outpouring of reaction against internet-borne modernity and against stagnant, repressive local regimes. It has weak popular appeal, little organization, few adherents, few weapons, and almost no safe territory anywhere on the planet. The Western efforts to help local Muslims defeat radical Islam - which have been largely successful - have not required a church militant or a Crusading spirit; in fact, they have been pretty cheap and low-risk.

Many conservatives also fear that Muslim immigrants will become a fifth column in the U.S., a group with strong anti-American sentiments, committed to destroying the country from within. In fact, nothing like this is happening. Muslim immigrants in the U.S. are marrying out of the faith at increasing rates. The same pressures of modernity that have increased secularism among Jews and Christians are secularizing Muslims in the West. A lot of American Muslims now celebrate Christmas. (A few Muslims in the West, spurred by the incredibly bad example of ISIS, are even converting to Christianity, which just goes to show how radical Islam is backfiring.)

In other words, secularism isn't a dagger in the heart of Western resistance to radical Islam. It's one of the key forces that will eventually cause Muslims in the West to assimilate into broader Western society - just as it has done for non-Orthodox Jews, and many others.

So I think Steve Bannon should rethink his view on the war against radical Islam. If you think secularism is bad for society, fine. But we definitely don't need to transform our society in order to resist a radical Islamic menace. In fact, the menace was always mostly a danger to other people, far away. And they're whupping its ass. Meanwhile, Islam in general does not look like a threat to the Western way of life.