Here is the scene. The American administration, having wound down a military entanglement in a distant part of the world, hopes to head off a humanitarian crisis by admitting refugees. The governor of a large state, who happens to represent the opposing political party, has expressed opposition to the plan. The domestic economy is on shaky ground; the governor's sentiments are echoed by Senators and activists affiliated with the same party. Among other things, opponents of the refugees worry that they pose a threat to Americans and will not assimilate into society.
This scene played out in 1975. The refugees were from Vietnam; Gerald Ford was president. California's Jerry Brown played the role of the contrarian governor.
Vietnamese refugees did arrive in the United States, and today more than one million Vietnamese immigrants live here. And far from becoming an isolated group in society, these migrants have moved very rapidly toward the American mainstream. This figure, taken from my most recent report on immigrant assimilation for the Manhattan Institute, shows that since the 1970s, newly arrived Vietnamese immigrants have been very distinct from native-born Americans -- as distinct as immigrants from Mexico. Relative to Mexican immigrants, however, those born in Vietnam lose their distinctiveness much more rapidly.
Why did Vietnamese immigrants assimilate so rapidly? As refugees, their prospects of returning home were bleak. They came to think of the United States as home, as the place where their children would construct lives for themselves. Immigrants from Cuba are the most assimilated Latin Americans for much the same reason.
Should we expect anything different from Syrian refugees? Are they different somehow? Do the experiences of European nations, which have in some cases struggled to incorporate immigrants into their societies, suggest caution?
About five years ago, I conducted an analysis comparing the assimilation of Muslim immigrants -- or more precisely, immigrants born in predominantly Islamic nations -- across eight nations: Canada, the United States, and six European countries. The data, summarized in the figure below, reveal a stark contrast in the experiences of immigrants in North America and Europe. Across a range of measures, the gaps between Muslim immigrants and the native born are much lower in the United States than elsewhere.
Almost half of Muslim immigrants to the United States are naturalized citizens. In Switzerland, the comparable figure is 10%; in Italy the rate is even lower. Here, 48% of Muslim immigrants own their own home. In Austria, only 12% do. Only Canada -- a nation whose leaders have wholeheartedly embraced Syrian refugees -- outshines the U.S. when it comes to immigrant assimilation.
Does the United States face a risk of importing terrorists if it permits Syrian refugees to enter the country? Sure it does! But this risk needs to be kept in perspective. In the United States today there are nearly two million residents born in predominantly Muslim countries. We must weigh the risk of admitting enemies against the risk of creating them by means of xenophobic rhetoric.
Those inclined to oppose the admission of Syrian refugees may not be inclined to consider historical evidence. But this evidence is clear. When the United States has followed its humanitarian impulses -- in admitting refugees from Vietnam, Cuba, and other parts of the world -- the result has been to introduce an immigrant group that shows a strong civic commitment to the United States. When it has given in to fear, most notably during World War II, the results have been nothing to be proud of.
It's been nearly a year since my last post here. In the interim, I've moved across the country, from Durham, North Carolina to Seattle, Washington. By moving here, I've done a small part to exacerbate a troubling trend in the Emerald City: as the population grows, housing is becoming less affordable.
What should be done about it? The first instinct of many, including members of Seattle's city council, is rent control. Others note that the root cause of affordability problems is a shortage of housing, and propose that the city just build its way out of the affordability problem. I'm going to first explain why neither of these options will solve the problem. Then I'm going to explain why a wacky and widely discredited option -- public housing -- might actually work here.
Just as a matter of background information, Seattle is expensive. Median gross rent (which builds in the cost of utilities whether paid by the landlord or tenant) stood at about $1,172 in 2013 Census data and has surely gone up since then. It's in the ballpark of New York City ($1,228) and Boston ($1,263). Rent in more affordable cities ranges from under $700 in places you probably wouldn't want to live (Cleveland, Peoria) to around $900 in reasonably priced yet vibrant cities (Denver, Raleigh).
Politically speaking, rent control is an extreme long shot in Seattle. The city would need the Washington legislature to repeal a law forbidding rent control statewide. One house of said legislature is controlled by Republicans.
Supposing something crazy happened and the state gave Seattle permission to enact rent control, the most likely form of the ordinance would be a San Francisco-style restriction on rent increases from year to year. This would prevent the stories you often hear around here regarding landlords raising the rent by hundreds of dollars when the lease runs out.
The problem here is that a new rent control ordinance would be closing the barn door after the horse has bolted. Limiting future increases does nothing to address the fact that rent is already expensive today. And the example of San Francisco is hardly heartening: in spite of having rent control on the books since 1979, the City by the Bay is one of the few in the United States that makes Seattle look cheap -- median gross rent in the most recent Census data stands at $1,491.
Fundamentally, rent control does nothing to alleviate a housing shortage and in fact will tend to exacerbate it (if you believe in standard market economics, which not all rent control activists do). Some tenants get the benefit of rents that rise more slowly than inflation, but other tenants are left by the wayside.
Seattle is a surprisingly low-density city. A city of 650,000 residents occupying just under 84 square miles of land, our population density is less than half that of San Francisco and less than a third that of New York City. If Seattle had the density of Boston -- not really much of a high rise city, but one with a much higher proportion of rowhouses and multi-family dwellings -- it would have a population of about 1.1 million. To house that many people, Seattle would need to add over 150,000 homes to its current stock of 315,000. To match NYC density, Seattle would need over 600,000 new housing units.
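For those who want to check the arithmetic, here's a quick back-of-the-envelope sketch in Python. The Seattle figures come from the paragraph above; the Boston and New York density and units-per-resident numbers are rough 2010-census-era approximations I'm supplying, so treat the outputs as ballpark estimates rather than official projections.

```python
# Back-of-the-envelope housing arithmetic. Seattle figures are from the
# post; Boston and New York figures are rough 2010-era approximations.

seattle_area_sqmi = 84       # Seattle land area, square miles
seattle_units = 315_000      # current housing stock

comparisons = {
    # city: (residents per square mile, housing units per resident)
    "Boston":   (13_300, 0.42),
    "New York": (27_000, 0.41),
}

for city, (density, units_per_person) in comparisons.items():
    target_pop = density * seattle_area_sqmi
    target_units = target_pop * units_per_person
    extra_units = target_units - seattle_units
    print(f"At {city} density: {target_pop:,.0f} residents, "
          f"roughly {extra_units:,.0f} additional units needed")
```

Run it and you get roughly 1.1 million residents and 150,000-plus new units at Boston density, and over 600,000 new units at New York density -- consistent with the figures quoted above.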
Nobody in Seattle is proposing anything that drastic. The mayor hopes to build just 50,000 new units. But suppose that we got audacious and decided to triple that number -- setting the stage for Seattle to become a city of over a million residents, matching Boston's population density. Would that close the affordability gap?
Well, look at it this way. Boston is already at Boston-level density, with prices that actually exceed Seattle's. Around the world, there are likely hundreds of thousands of people who would like to live in a place like Seattle if only there were room for them. If we build it, they will come -- and keep prices from falling much at all.
Even if it were possible to build our way to affordability, there is a tricky political issue: the 46.8% of Seattle families who own their homes have absolutely no interest in seeing housing values decline, because that would mean a reduction in their net wealth. Homeowners may form a minority of the population, but they almost certainly form a majority of engaged voters in the local electorate.
There are two basic strategies for being a place with affordable housing. One is to be a place where nobody particularly wants to live. This is the secret to Detroit's unheralded affordability success, and Cleveland's, and Flint's, and Scranton's. These places built housing decades ago for huge numbers of people, the majority of whom have since left, leaving their homes behind. Unless the Duwamish waterway starts to catch fire on a regular basis and tech startups start to go the way of the auto industry, that recipe won't work in Seattle.
The second recipe involves thinking of a "place" as a metro area rather than a city, and being content to have the affordable housing out on the periphery of the metro area. De facto, that's what is happening across the country -- as the suburbs of New York extend into Pennsylvania, the DC suburbs into West Virginia, and so forth. The same pattern holds in many European cities. Even Seattle has lower-cost housing for those willing to live an hour outside the city. The median rent in Tacoma is only $906 -- comparable to places like Charlotte and Denver.
So would the people and leaders of Seattle be content with consigning lower income families to locations outside the City? Perhaps not. So what, then, can we do?
Public housing projects haven't really been in the zeitgeist for at least 20 years. We've been tearing them down nationwide, not building them. They have been derided as socially dysfunctional, breeding grounds for dependency and crime. But they could, in fact, be part of the solution in Seattle.
Public housing would permit the City to effectively target assistance. Rent control is a blunt instrument, benefitting rich and poor renters alike. In fact, if tenants need to pay bribes to jump the queue for rent controlled units, the benefits may skew towards the rich. Public housing can be reserved for those who meet certain criteria. It needn't be the poorest of the poor -- as originally designed, public housing projects were intended to serve middle class families as well.
Seattle's public housing stock is small. Boston, with about 20% more residents than Seattle, serves more than twice as many residents with its subsidized housing programs. To come up to Boston's service level, Seattle would need about 10,000 additional public housing units, roughly tripling the number it has today.
Where would the City put these units? Here's a suggestion that would make class warriors happy. Seattle has four city-owned golf courses occupying perhaps 375 acres of land in four different parts of the city. That's enough space to develop 10,000 units at a density of under 30 units per acre. We're not talking high-rises, folks -- that's equivalent to the density of Boston's Back Bay, including the parkland along the Charles River.
How would the City pay for these units? Back in the old days, the Federal government could be counted on to help out. Even if not, the cost would be manageable. If you could build these units at a cost of $200,000 each, the total expenditure would be $2 billion. Financed at 3.5% -- about the going rate for a place with Seattle's credit rating -- the interest costs would be about $70 million a year. Maybe half that amount could be paid with rental income from tenants, allowing some of that rent to cover operating and maintenance costs. The other half could be covered with the equivalent of a property tax rate increase of 30 cents per $1,000 of valuation -- thus the owner of a million dollar home would pay about $300 a year so that 10,000 families could afford to live in the city.
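Here's that arithmetic in runnable form, using only the figures quoted above; the citywide tax base is a derived quantity implied by those figures, not an official number.

```python
# Financing arithmetic for 10,000 public housing units, using the
# figures from the post. The implied tax base is a derived quantity.

units = 10_000
acres = 375                     # four city-owned golf courses
cost_per_unit = 200_000         # construction cost, dollars
interest_rate = 0.035           # municipal borrowing rate

total_cost = units * cost_per_unit            # $2.0 billion
annual_interest = total_cost * interest_rate  # $70 million per year
taxpayer_share = annual_interest / 2          # other half paid by tenant rents
levy = 0.30 / 1000                            # 30 cents per $1,000 of valuation

print(f"Density: {units / acres:.0f} units per acre")
print(f"Total cost: ${total_cost / 1e9:.1f} billion")
print(f"Annual interest: ${annual_interest / 1e6:.0f} million")
print(f"Implied citywide tax base: ${taxpayer_share / levy / 1e9:.0f} billion")
print(f"Annual cost to a $1M homeowner: ${1_000_000 * levy:.0f}")
```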
Would we be setting ourselves up to create pockets of crime and dependency where putters once putted? Here's my argument. The problems we associate with housing projects occurred in cities that were undergoing industrial decline. The projects housed people who witnessed the utter evaporation of economic opportunity. Do you think that is going to happen in Seattle?
You can call me crazy. But if you can't imagine that this solution would work -- at least for 10,000 families -- can you imagine anything that will?
Last week, in a public address commemorating the 60th anniversary of the Supreme Court's Brown v. Board decision, Michelle Obama warned that "today, by some measures, our schools are as segregated as they were back when Dr. [Martin Luther] King gave his final speech."
This is a stark statement -- made even more stark by the fact that virtually no school integration occurred between the Brown decision in 1954 and Dr. King's assassination in spring 1968. Southern communities operating separate school districts for whites and blacks were in no hurry to merge them. And even once merged, residential segregation -- which reached its highest point at mid-century -- implied that a reliance on neighborhood schools would perpetuate segregation. It would take a series of further court cases, beginning in 1968, to make integration happen.
Can it really be true, then, that America's public schools are as segregated now as they were at a point where the word "busing" had not even entered our vocabulary? It all boils down to the definition of the word "segregation."
Consult your favorite dictionary and the term segregation will most likely incorporate the word "separation." By this definition, schools are segregated when students of different races attend separate schools.
By this definition, schools are in fact much less segregated today than they were in 1968. The series of court decisions that began in 1968 dramatically reduced the degree to which white and black students attended separate schools. In more recent years, courts have reversed those decisions in some cases. At the same time, though, America's neighborhoods have become significantly more integrated than they once were. These two countervailing trends have more or less canceled one another out. This is the conclusion of a comprehensive review released by Sean Reardon and Ann Owens last fall. Students of different races are much less likely to attend separate schools than they were in 1968.
This explains Michelle Obama's use of the term "by some measures." She's right that there are some measures employed by school segregation researchers that establish different trends. But they do not define segregation as the tendency for students to attend separate schools. Instead, they define segregation as the tendency for black students to attend schools with a high concentration of nonwhite students.
Back in 1968, attending a majority-nonwhite school was exceptional. Nationwide, white students accounted for nearly 4 out of every 5 public school attendees. In most parts of the country, "nonwhite" was a euphemism for "black." The fact that over three-quarters of black students attended a school where the majority of their classmates were black served as a stark indicator of efforts to keep students of different races in separate schools.
Things are very different today. For one thing, the public school population has changed. Only 52% of public school students are white. For another, "nonwhite" no longer means "black." Attending a majority-nonwhite school is no longer all that exceptional. The National Center for Education Statistics projects that within 10 years the public school population will be majority nonwhite. So schools could be 100% integrated -- every school a microcosm of the nation -- and every school would be majority nonwhite. Segregation measures based on the two different concepts -- separation versus exposure to nonwhite majorities -- would yield exactly opposite conclusions.
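To see how the two definitions can point in opposite directions, consider a toy example with made-up enrollment numbers. The "separation" concept is conventionally measured with a dissimilarity index; the other concept, with the share of black students attending majority-nonwhite schools. Here's a sketch:

```python
# Toy illustration (hypothetical enrollments, not real data) of how a
# separation measure and an exposure measure can move in opposite directions.

def dissimilarity(schools):
    """White-black index of dissimilarity: 0 = perfectly even, 1 = total separation."""
    W = sum(w for w, b, o in schools)
    B = sum(b for w, b, o in schools)
    return 0.5 * sum(abs(w / W - b / B) for w, b, o in schools)

def share_majority_nonwhite(schools):
    """Share of black students attending majority-nonwhite schools."""
    B = sum(b for w, b, o in schools)
    return sum(b for w, b, o in schools if b + o > w) / B

# Each school: (white, black, other nonwhite) enrollment.
district_1968 = [(800, 50, 10), (100, 400, 20)]    # black students concentrated
district_now = [(250, 120, 150), (245, 115, 140)]  # evenly mixed, whites a bare minority

for label, schools in [("1968-style", district_1968), ("today-style", district_now)]:
    print(f"{label}: dissimilarity = {dissimilarity(schools):.2f}, "
          f"share in majority-nonwhite schools = {share_majority_nonwhite(schools):.0%}")
```

In the hypothetical "today" district, every school mirrors the district's demographics, so the dissimilarity index is near zero -- yet every black student attends a majority-nonwhite school.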
In summary, "some measures" of segregation are highly misleading because they mistake demographic trends for actual decisions to send students of different races to different schools. Stephan and Abigail Thernstrom pointed this out a week ago.
Now, to be fair, there is definitely resegregation going on in some parts of the country. In districts that stop busing, there is an uptick in the tendency for students of different races to attend different schools. High-poverty schools often struggle to attract and retain high-quality teachers. But we no longer live in a world where it is possible to judge a school by the color of students' skin. Majority-nonwhite schools are the emerging norm. Many of these schools -- not enough, but many of them -- do an excellent job of educating children. My own children attend a public school where over 70% of their classmates are nonwhite. It's a great school. We need to focus our efforts on the schools that aren't succeeding, which is not the same thing as the schools serving nonwhite students.
Do immigrants take jobs away from Americans?
Imagine a world where there is a finite amount of work to be done, regardless of the size of the population. In this world, it's pretty easy to say that any increase in population reduces the amount of work per capita. If society hoped to keep its residents fully employed, it would do whatever it could to limit population growth. Restricting immigration is the easy way to do it, but nations that have sought to limit their population have resorted to much more nefarious policies than that.
We don't live in a world, or more specifically a nation, where there is a finite amount of work to be done.
First off, the amount of work to be done in most industries depends directly on the size of the population. The education and health sectors now account for nearly one-quarter of the jobs in the American economy, and demand for education and health rises directly with population. Construction jobs (6% of the workforce) depend in large part on how many new houses we need to build. Retail trade jobs (11% of the workforce) depend on the number of customers available. Transportation jobs (5% of the workforce) depend on how many people and things need transporting. Even much of the manufacturing industry (10% of the workforce, down from 16% two decades ago) ties directly to population size. There are more American jobs in food and beverage manufacturing than there are in the motor vehicle, aircraft, ship building, furniture, apparel, steel, farm machinery, computer and peripheral, electrical equipment, and household appliance manufacturing industries combined.
Your standard economics-101 view of the labor market, where the demand curve slopes down and thus any infusion of new workers lowers wages, doesn't tell the whole story. Immigrants increase both the supply of labor and the demand for labor. Think about it this way. Would your job be more or less secure if the population of your community suddenly plunged by 13%? That's the difference between having and lacking immigrants in the typical American community.
The second key point to consider is that when an immigrant doesn't take a job, there is no guarantee that the job will be made available to a native instead. In the manufacturing industry and elsewhere, employers have moved millions of jobs abroad in the past few decades. Had it not been for immigration, they would have moved even more. Adding an extra 1,000 immigrants to the economy keeps 46 manufacturing jobs here in the United States.
The third reason immigrants don't take jobs from Americans is that they are disproportionately likely to create their own. And when immigrants start their own business, they often end up creating more jobs for others.
The key statistic for understanding how immigration affects native employment is the impact of immigration on native population. If immigrants took American jobs, we'd see a pattern whereby an inflow of immigrants to a community led natives to leave in response. In fact, we see the opposite pattern. For every 1,000 immigrants who enter a county, the native population of that county increases by about 270. If you're looking to see where opportunities are in the United States today, find a place where immigrants are.
Tomorrow, the school board governing North Carolina's largest school district will consider a proposal to prohibit teachers from awarding the grade of zero to students who fail to turn in work. Some advocates hope to convince the board that the minimum grade on any assignment -- even for a student who fails to turn in the assignment -- should be fifty.
The argument against zero goes something like this. Students who receive a zero either become discouraged, or rationally compute that it is mathematically impossible to recover from the grade and pass the course; either way, they give up, increasing their risk of dropping out and other adverse outcomes.
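The "mathematically impossible" claim is easy to verify with a little arithmetic. Here's a hypothetical grading scheme -- ten equal-weight assignments, an average of 60 needed to pass -- showing how a handful of zeros can put passing out of reach while a floor of 50 keeps it attainable:

```python
# Hypothetical scheme: ten equal-weight assignments, average of 60 to pass.
# Compare a student who misses five assignments under a 0 floor vs. a 50 floor.

def best_possible_average(scores_so_far, total_assignments, max_score=100):
    """Best final average if every remaining assignment earns a perfect score."""
    remaining = total_assignments - len(scores_so_far)
    return (sum(scores_so_far) + remaining * max_score) / total_assignments

for floor in (0, 50):
    missed = [floor] * 5   # five assignments never turned in
    best = best_possible_average(missed, total_assignments=10)
    verdict = "passing still possible" if best >= 60 else "passing impossible"
    print(f"Grade floor {floor}: best achievable average = {best:.0f} ({verdict})")
```

With a floor of zero, the student's best achievable average is 50 and failure is locked in; with a floor of 50, a perfect finish yields 75.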
The pro-zero side points out the perverse nature of handing out half-credit for no work. Students might rationally conclude that there isn't much point in doing the work. So they might pass more courses, but won't necessarily learn any more, and some -- those motivated by the fear of a zero in the first place -- might learn less.
What's the "right" thing to do? That depends, as it turns out, on what you think schools are for in the first place.
The root of the problem, as pointed out by independent consultant Ken O'Connor, is the separation of grading standards from learning. There are good intentions behind the separation. If you were educated in American schools, you most likely have heard some version of the phrase "if you hand in your homework, it is basically impossible to fail this course." That's a reassuring statement for students who might otherwise be intimidated by the subject matter. But it leads directly to the debate over zeros: when it is impossible to fail except by neglecting to hand in your work, the only students failing will be those who neglect to hand in work.
Effort-based grading leads to another problem: students can pass courses without mastering the subject matter. The consequences of this practice in the K-12 system can be seen in the higher education system. The majority of North Carolina high school graduates entering the state's two-year community colleges need to retake high-school level coursework in at least one subject before they can proceed to courses that earn college credit.
The fundamental question underlying the entire debate is as follows: under what circumstances should students fail? One can think of this as a multiple choice question:
a) Students should fail when they do not demonstrate mastery of course subject matter. This is the old-school definition of failure.
b) Students should fail when they do not turn in assignments. This is a kinder, gentler policy, forgiving those who try hard but do not master the subject matter. In a world where persistence matters more than knowledge, this policy makes more sense.
c) Students should never (or perhaps just rarely) fail. This policy makes the most sense if you believe that staying in school matters the most, regardless of what you may or may not learn while there.
So, the question of when students should fail in turn boils down to a basic question about the true function of education. Are we endowing students with knowledge that will make them happier and more productive? Are we reinforcing "non-cognitive" skills -- showing up, putting in a good faith effort? Or are we primarily interested in handing out diplomas, based on the notion that having the sheepskin makes the greatest difference?
As originally conceived, race-based preferences in college admissions were seen as a means of redressing past wrongs. Conceived in this manner, affirmative action was bound to lead to a debate regarding whether wrongs had been addressed "enough." This debate was bound to be rancorous for two basic reasons. First, there is no way to objectively determine whether past wrongs have been addressed "enough." Second, while the parties to the argument might appeal to universal principles, the plain fact is that they also have a self-interest in the outcome. You might bring nothing to the debate but principle, but you can't stop your sparring partner from thinking, deep in their heart, that you're just out to win more than your fair share.
From Regents of the University of California v. Bakke through Grutter v. Bollinger, a second justification for race-conscious admissions emerged, appealing to diversity as an essential ingredient in higher education. By suggesting that diversity could be something good for everyone, this appeal promised to elevate the affirmative action debate above a zero-sum tussle over scarce resources. If it is really true that students of all races benefit from exposure to diversity, then we needn't worry about whether past wrongs have been redressed enough. We should just adopt diversity-enhancing policies and procedures in perpetuity.
That promised elevation of the debate hasn't worked out in practice. Partly this is because the claim that diversity is good for you, even if it means a lower likelihood of admission for your kids, has not won over the median voter, even in reliably left-of-center states. We're telling the median voter that diversity is good for them, but they're reacting to the information the same way they might to the message that broccoli is good for them. "That's great, but could I please have fries with that?"
Another problem with the "diversity is good for you" argument, which might help explain why the median voter hasn't bought it, is that it's actually kind of hard to produce solid empirical evidence to back up the assertion. It's easy to imagine how diversity might enhance your learning in a small discussion course on social stratification, American history, or even marketing. It's harder to see how it helps you in a huge lecture course where students seldom speak, or in a math, science, or engineering course where the answers are cut and dried. Peter Arcidiacono and I spent some time trying to uncover evidence that college graduates fared better when exposed to diversity in the classroom, but ultimately couldn't find anything conclusive.
The Sotomayor v. Roberts debate in the wake of Schuette v. Coalition to Defend Affirmative Action might spell the end of the "eat your peas and vote for affirmative action" era. The move from courts to the ballot box hasn't worked well for the defenders of race-consciousness; the future of this argument will be in the courts.
One might ask why it is that voters can't see fit to leave us universities alone. Why tell us how to do our thing? Put another way, how can it be that the people who actually determine university admissions policies can be so much more pro-diversity than the median voter?
It's simple, really. A significant component of the joy of working at a university is bearing witness to the transformation of lives. Sometimes those transformations come about because of things we do; sometimes we just happen to be there at a stage in a student's life where they turn a corner for reasons that have little to do with us. In both cases, it is a great thing to witness.
There is some joy in seeing the child of a lawyer be accepted to a top law school with the help of a recommendation letter you wrote. There is some joy in seeing the child of an investment banker land a job with a top consulting firm. There is much greater joy, however, in watching the son of a bodega owner, or the daughter of a single working mom, gain entry to a world of knowledge and opportunity that their parents could only dream of.
While one could argue that it doesn't make sense to base college admissions solely on what makes the faculty happy, there is also evidence that selective universities accomplish more for society at large when they enroll students from disadvantaged backgrounds. Recent work by Stacy Dale and Alan Krueger finds that students from privileged backgrounds do about equally well after graduation whether they go to a highly selective institution or a somewhat less-selective one. Black, Hispanic, and disadvantaged students, by contrast, actually gain something from going to the more selective institution.
The selective university that wants to accomplish the most good for society, in other words, would skew its admissions process toward disadvantaged applicants. Consider not whether the applicant is on third base, to use a tortured baseball/political analogy, but how the applicant got to where they are. The applicant who managed to hit a double will, on average, benefit more from attending your college than the applicant born on third.
And thus, in a world where you can't convince a majority of the Supreme Court that race matters, and you can't convince a majority of voters that increasing diversity is good for their college-going kids, there is a third reason to favor skewing the college admissions process toward the disadvantaged. Quite simply, my dear voters and judges, it is what you should do if you want to reap the greatest return from your public investment in a selective university.
The folks on the editorial board at the New York Times have made another argument for raising the minimum wage, namely that it's good for business.
This is not entirely a crazy argument. The "efficiency wage" theory of labor economics suggests that paying workers a bit more than the going rate makes them happy and pays off in terms of increased loyalty, lower turnover, etc. Probably 95% of labor economics textbooks will accompany discussion of this subject with a sidebar about Henry Ford and the $5 day. Ford famously paid more than any other carmaker in the industry and enjoyed the same type of low turnover -- even in back-breaking, physically demanding assembly line jobs -- touted in the NYT editorial.
Think about this another way, though, and the argument is that basically business owners are too dumb to figure out that paying higher wages will make them more profitable, so the government needs to save them from themselves. I don't mean to dismiss that argument out of hand; certainly H.L. Mencken wouldn't. But any time somebody asks you to believe that, it's worth considering alternative hypotheses.
The NYT talking point that Wal-Mart wants a higher minimum wage is a bit disingenuous: since the company's average wages already exceed the minimum, it is basically just asking the government to make its competitors pay more.
The Gap has voluntarily agreed to increase worker pay for the majority of its workforce (albeit to a level that's still below the proposed $10.10 minimum wage). Bear in mind that the Gap sells self-branded clothing, and consumer willingness-to-pay for Gap jeans depends largely on how much they value the brand. So chalk that one up as a shrewd marketing maneuver. Expect Abercrombie & Fitch to follow suit quickly. The model doesn't work so well for businesses that sell things like grapefruit and cat food -- identically packaged and marketed items sold by competing retailers.
And Costco, the subject of a Harvard Business Review article back in 2006, has a dirty little secret about its famously high wages that is patently obvious if you just read the article with calculator in hand. Compared to Sam's Club, Costco logs 16% more in sales while employing 38% fewer workers. Put these two ratios together and you discover that Sam's employs 87% more workers per dollar of sales than Costco does. Costco has clearly traded higher wages for fewer jobs.
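If you want to check the arithmetic behind that 87% figure, it takes nothing more than the two ratios quoted above:

```python
# Deriving the 87% figure from the two ratios quoted above.

costco_sales = 1.16    # Costco sales relative to Sam's Club
costco_workers = 0.62  # Costco headcount relative to Sam's Club (38% fewer)

workers_per_dollar = costco_workers / costco_sales   # Costco, relative to Sam's
sams_premium = 1 / workers_per_dollar - 1
print(f"Sam's Club employs {sams_premium:.0%} more workers per dollar of sales")
```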
So is that what we want? Higher paying jobs but many fewer of them? That might end up working for some businesses, but what is it going to do for the people we intend to help?
Suppose I suggested that the government adopt an anti-poverty program with the following characteristics:
This is a basic description of how the minimum wage operates. This isn't the right-wing spin on it, either; points 1-3 were derived directly from a New York Times editorial this morning. There's plenty of controversy regarding the CBO's highly publicized but very rough estimates of how many jobs would disappear following a minimum wage increase, but one could glibly assume no negative effect on employment and still come to the conclusion that the minimum wage is a highly symbolic but dreadfully ineffective way to lift families out of poverty.
The minimum wage is too blunt an instrument to effectively fight poverty
The positive way to think about the minimum wage is that it attempts to ensure that any person trying to raise a family by being a full-time worker will earn enough to get by. By the CBO estimates, some 16 million workers would be affected by an increase of the minimum wage to $10.10 per hour, and some 900,000 would be lifted from poverty. Thus if the goal of the minimum wage increase were to lift families from poverty, estimates suggest it will have a 6% success rate. Flip that around, and you've got a 94% failure rate. And this is assuming no adverse impact on employment whatsoever.
In the 94% of cases where workers receive wage increases but do not experience a lift from poverty, there are two basic things going on. In most cases, the families benefitting from the wage increase aren't lifted from poverty because they weren't in poverty in the first place. Minimum wage workers are often secondary or tertiary workers in their household -- including teenagers. In some cases, however, the increase in the minimum wage would not be sufficient to escape poverty. For a full-time worker raising a family of 4, the income earned from working 2,000 hours per year at $10.10 per hour would not be sufficient to rise above the federal poverty line. Many minimum wage workers are part-time workers; even if their job continues to exist after the increase there is no guarantee they'll be able to work the same number of hours.
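Two quick calculations behind the claims above. The 2014 federal poverty guideline for a family of four ($23,850) is my addition, not a number from the CBO report:

```python
# Success-rate arithmetic from the CBO figures, plus a poverty-line check.
# The $23,850 poverty guideline (2014, family of four) is my addition.

affected_workers = 16_000_000
lifted_from_poverty = 900_000
print(f"Share of affected workers lifted from poverty: "
      f"{lifted_from_poverty / affected_workers:.0%}")

full_time_hours = 2_000
proposed_wage = 10.10
earnings = full_time_hours * proposed_wage
print(f"Full-time earnings at $10.10/hour: ${earnings:,.0f} "
      f"(2014 poverty guideline, family of four: $23,850)")
```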
The minimum wage is largely a tax on food
Minimum wage workers are found across a wide range of industries, but the highest concentrations happen to be in industries tied to the production of food, based on data from the 2010 American Community Survey. More than one-tenth of minimum wage workers work in the restaurant or food service industry. Also represented in the list of top five industries employing minimum wage workers are grocery stores, discount and department stores, and K-12 education -- it isn't the teachers making minimum wage, but the cafeteria workers and related staff. Crop production sits just outside the top five.
A truly progressive antipoverty policy would transfer resources from the wealthy to the deserving poor. Presumably the goal of a minimum wage increase would be to transfer resources from the highly paid executives and wealthy shareholders of major corporations to their low-paid workers. To understand why this just can't work out in practice, consider the case of Wal-Mart.
Wal-Mart's most recent annual report shows that the company paid shareholders about $5.4 billion in dividends, and paid its top executives somewhere around $60 million. Suppose we zeroed out those numbers -- forced the executives to forfeit 100% of their pay, and shareholders 100% of their dividends -- and transferred the money to the company's 2.1 million domestic employees. We'd have enough to give each of them about $2,583 per year. For a full time worker, this would amount to a raise of roughly $1.29 per hour.
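Here's the arithmetic, using the per-share dividend figures from the correction note at the end of this post:

```python
# The Wal-Mart redistribution arithmetic. Dividend inputs follow the
# correction note at the end of this post ($1.59/share, ~3.374B shares).

dividends = 1.59 * 3.374e9   # about $5.4 billion in annual dividends
exec_pay = 60e6              # top-executive compensation
employees = 2_100_000        # domestic workforce

per_worker = (dividends + exec_pay) / employees
per_hour = per_worker / 2_000   # full-time year = 2,000 hours
print(f"Windfall per worker: ${per_worker:,.0f}/year, or ${per_hour:.2f}/hour")
```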
Wiping out the shareholders and top executives, in other words, would be sufficient to fund less than half of the proposed $2.85 increase in the federal minimum wage. Granted, the average Wal-Mart employee already earns between $12 and $13 per hour, but the basic message is clear. Minimum wage work occurs largely in low-margin industries, where there just aren't a whole lot of profits to be plundered. The real fat cats of the economy, working in knowledge industries or on Wall Street, don't employ a whole lot of minimum-wage workers.
To pay for the wage increase, then, costs would have to be passed along to consumers in the form of higher prices. Given the heavy reliance on minimum wage work at nearly all stages of the food supply chain, the higher prices would be most apparent in a family's food budget. Ask yourself, what type of family spends the highest share of its income on food? Not the wealthy.
There is a better way
The intent of the minimum wage is to raise the payoff from work for society's most vulnerable people. It effectively asks business owners to cover the costs, and requires them to spend a large amount of extra money raising the payoff from work for a larger group of citizens who are not society's most vulnerable. These business owners, whose clientele by the nature of their business already includes many vulnerable people, cover the costs in part by passing them on to their customers. They rob Paul to pay Paul.
If our goal as a society is to ensure that no person who devotes themselves to full-time work should find themselves living below the poverty line, there is an alternative strategy that is simultaneously more efficient and more progressive. It's the Earned Income Tax Credit.
I am an economist by training, so perhaps it is no surprise that I think the EITC beats the minimum wage. But the selling points of the EITC bear repeating, particularly since the drumbeat for expanding the EITC as an alternative to a minimum wage increase is not exactly deafening.
The EITC effectively multiplies earnings for families that work but don't earn much from that work. It delivers significant amounts of cash to single parents raising a family on the basis of low-wage work, but not a penny to the teenage child of a high-income family delivering pizza for a bit of spending money.
The EITC doesn't ask business owners to bear the cost of society's goal alone, but rather spreads the burden through the full government system of taxation. If you want the wolves of Wall Street to pay for our social investment in the lives of the vulnerable, the EITC will do it but the minimum wage won't.
And from a pragmatic perspective, given the bipartisan focus on inequality, the will in Congress to pass an EITC expansion this year might actually exist. Do you really think the House is going to pass a minimum wage increase anytime soon?
There are criticisms of the EITC. Some employers might use the EITC as an excuse to cut wages, but coupled with the existing state and federal minimum wages it isn't really possible to do that for the workers we're ostensibly trying to help. Moreover, even if we accept estimates that only 73 cents of every dollar spent on the EITC lands in the hands of a deserving family, that surely beats the ratio we'd get with a higher minimum wage.
The minimum wage, in short, makes for good symbolism but bad policy.
[Note: the original Wal-Mart numbers I had in this post were taken from some poorly annotated figures I jotted down a few weeks ago. Walmart reports paying $1.59 in dividends per share in fiscal 2013, with a total of about 3.374 billion shares outstanding. So that's returning quite a bit less than the $100 billion I originally cited, which would have made for a much greater windfall per worker if redistributed.]
NC Governor Pat McCrory, along with leaders of the NC General Assembly, announced this morning that they intend to raise the state minimum teacher salary from $30,800 to $35,000 over the next two years. This announcement comes in response to a chorus of calls to increase teacher compensation in North Carolina, which has languished in the years since the 2008 recession.
Many of these calls make reference to average teacher compensation, and any AP statistics student should be able to tell you that there's more than one way to raise the average. You could take the $200 million the General Assembly is committing to this first round of teacher pay increases, give it to one person, and have the exact same effect on the average as giving it in equal measure to all teachers. Critical commentary has already started to trickle in about this proposal, from those worried that focusing resources on beginning teachers will leave out many veterans.
For now, experienced teachers must be content with the governor's promise that this is just the first step in reforming teacher compensation in North Carolina.
There are several reasons why raising starting salaries makes sense as a first step.
Good teachers are worth much more than even the $68,050 at the top of the NC salary schedule. For this reason I hope, as most observers would, that this is just the first step in a more comprehensive reform. But it's a good first step, and I am hopeful.
Tacloban, the Philippine city ravaged by Typhoon Haiyan last week, appears on the brink of complete abandonment, with the mayor urging surviving residents to flee. Relief supplies are to be found just outside the city, but a range of physical and human obstacles are making it difficult if not impossible to deliver them.
Will Tacloban recover in the long run? Politics will play a role -- can the government afford to subsidize relocation into the area in the same way the United States did after Hurricane Katrina? Geologic considerations might factor in as well. The Leyte Gulf, after all, looks kind of like an upside-down funnel, and Tacloban's location at the narrow part of the funnel poses the same sort of hydrologic challenges as New York's location at an interior corner of the East Coast. There might be some argument for buying out the owners of what remains and starting over in a less dangerous location. But ultimately, rebuilding is a question of economics. Before the typhoon, it made sense for some 200,000 people to live in Tacloban. Will it make sense now?
For some time, the conventional wisdom among economists was that cities bounced back from disasters. Hiroshima and Nagasaki are two data points often cited in support of this view. The atomic devastation of these Japanese cities was different in some respects, however. The bombs targeted the center of town, leaving the outer ring of the metropolis more or less intact. Sufficient infrastructure remained intact in Hiroshima to allow limited streetcar service to resume just three days after the bomb. There was a fundamental logic in taking a devastated landscape in the middle of a sizable suburban ring and making a city out of it.
Most importantly, Hiroshima and Nagasaki were thriving cities before the war. Demand for residence in those cities, in other words, was strong -- and the rational response to a reduction in the "supply" of those cities was to rebuild.
Port-au-Prince, Haiti, is a more recent example of the phenomenon. The city was devastated by the 2010 earthquake, but given the almost utter lack of economic opportunity elsewhere in Haiti -- Port-au-Prince is home to 28% of Haiti's population but over 90% of the nation's manufacturing jobs -- there was no other place for the residents to go. It is the only part of the country with reliable electric service, and possesses the only modern port and air facilities.
A couple of years ago I made the argument that the same dynamics would not take root in cities experiencing decline before disaster struck, thinking specifically of New Orleans post-Katrina but also of some of the German cities affected by firebombing during World War II. When a city is in decline, a certain segment of the population remains there primarily because the housing becomes cheaper over time. Able-bodied workers might leave a city with no jobs, but a family living on disability or social security would not be bothered by a lack of economic opportunity. As the able-bodied leave the city, the houses remain, and the excess supply puts downward pressure on prices. For renters, this is nothing but good news. For owners, the news is not so good, but the disappearance of home equity makes it more difficult to contemplate affording life elsewhere.
When a disaster comes along and destroys the housing stock, the "excess supply" problem is solved, in a somewhat brutal sense, and downward pressure on prices ceases. This is exactly what happened in New Orleans. No longer a cheap place to live, the city remains about 25% smaller than it was in the 2000 Census. The demographics of the city have shifted predictably as well, skewing towards a more affluent population.
Which of these possible trajectories best fits Tacloban? Population statistics indicate that the region has experienced steady growth in recent decades. By regional standards it has been economically prosperous, and its waterfront location brings obvious risks but also certain advantages. In the coming weeks the city may well experience a near-total evacuation along the lines of post-Katrina New Orleans. But if history is any guide that is where the similarity will end.