
Introductory Notes

This research is based upon the most recent data available in 2015. Unless otherwise stated, all dollar figures are adjusted for inflation so that they are expressed consistently in 2013/2014 dollars.

In keeping with Just Facts’ Standards of Credibility, all facts are cited based upon availability and relevance, not to slant results by singling out specific years that are different from others. Likewise, data associated with the effects of education in different geographical areas represent random, diverse places in which such data is available.

Many of the facts below show associations between education and other variables such as earnings. These links may be caused in part or whole by factors that are related to education but not necessarily caused by education. For example, individuals with high intelligence and discipline tend to excel in education and obtain more of it, and they also tend to earn more income regardless of their education. Hence, the higher incomes of people with more education may be partly caused by factors beyond education.[1] [2] [3] [4] [5] Likewise, student achievement can be influenced by factors outside of their formal education, such as family stability and cultural influences.[6] [7]

In attempting to isolate the effect of a single factor on a certain outcome, researchers often use statistical techniques to control for the effects of other variables. However, these techniques cannot objectively rule out the possibility that other factors are at play. This is known as omitted variable bias.[8] [9] [10] [11] [12] [13]  Moreover, the most common method used to control for multiple variables is prone to other pitfalls that can lead to false conclusions about causes and effects.[14] [15] [16]
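To illustrate the pitfall described above, the following sketch (hypothetical, simulated data, not drawn from any study cited on this page) generates earnings that depend on both schooling and an unmeasured trait such as discipline. Regressing earnings on schooling alone attributes part of the trait's effect to schooling; adding the trait as a control recovers the true effect:

```python
# Illustrative only: simulated data showing omitted-variable bias.
# "ability" stands in for any unmeasured trait (e.g., discipline) that
# raises both schooling and earnings.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
ability = rng.normal(size=n)                        # unmeasured confounder
schooling = 12 + 2 * ability + rng.normal(size=n)   # confounder raises schooling
earnings = (30_000 + 3_000 * schooling + 10_000 * ability
            + rng.normal(scale=5_000, size=n))      # true schooling effect: $3,000/year

# Naive regression of earnings on schooling only (ability omitted).
slope_naive = np.polyfit(schooling, earnings, 1)[0]

# Regression that also controls for ability.
X = np.column_stack([np.ones(n), schooling, ability])
slope_controlled = np.linalg.lstsq(X, earnings, rcond=None)[0][1]

print("true effect per year of schooling:  3000")
print(f"naive estimate (ability omitted):  {slope_naive:.0f}")       # roughly 7000, biased upward
print(f"estimate controlling for ability:  {slope_controlled:.0f}")  # roughly 3000
```

In real data, the confounding trait is typically unobserved, which is why the statistical controls described above cannot fully rule it out.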

The potential for drawing flawed conclusions can be significantly reduced by examining time-series data surrounding the educational experiences of large, randomized groups of students. Therefore, Just Facts focuses on this type of data, which is known as experimental data. An example of this is the outcomes of students who won and did not win a random lottery for admission to a certain educational program. Studies of such data can limit the impact of many variables and allow for sound conclusions about cause and effect, but these studies can also have limitations.[17] [18] [19] [20] [21] [22] [23] [24] [25] [26]

Unless otherwise indicated, the caveats above apply to all of the data and studies cited below.

Collective Spending

* During 2013, federal, state, and local governments in the U.S. spent $826 billion on formal education. This amounts to 4.9% of the U.S. gross domestic product, 15% of government current expenditures, and $6,749 for every household in the U.S.[27] [28] These figures do not include:

  • land purchases for schools and other facilities.[29]
  • some of the costs of durable items like buildings and computers.[30]
  • the unfunded liabilities of post-employment non-pension benefits (like health insurance) for government employees.[31] [32] [33] [34] [35] [36] [37]

* Relative to other types of government spending in 2013, education spending was:

  • 32% lower than spending for healthcare.
  • 9% higher than spending for national defense.
  • 2.4 times higher than spending for public order and safety, including law enforcement, courts, prisons, fire protection, and immigration enforcement.[38]

* During 2013, private consumers and nonprofit organizations in the U.S. spent about $329 billion on formal education. This amounts to 2.0% of the U.S. gross domestic product and $2,691 for every household in the U.S.[39] [40] [41] [42] [43]

* Relative to other spending by private consumers and nonprofit organizations in 2013, education spending was:

  • 36% lower than spending on motor vehicles and parts.
  • 26% lower than spending on clothing and footwear.
  • 25% higher than spending on alcoholic beverages.[44]

Collective Outcomes

Earnings

* In 2013, U.S. residents aged 25 and older had average cash earnings of $32,851.[45] Cash earnings do not include non-cash compensation, such as employee fringe benefits.[46]

* In 2013, 65% of U.S. residents aged 25 and older had at least some cash earnings, and 35% did not have any cash earnings.[47]

* Among U.S. residents aged 25 and older who had cash earnings in 2013, average cash earnings were $50,316. Among these same people, median cash earnings were $37,332.[48]

* See the Comparative Earnings section below for more data on education and earnings.


Practical Literacy

* Per the National Center for Education Statistics (NCES):

As a part of their everyday lives, adults in the United States interact with a variety of printed and other written materials to perform a multitude of tasks. A comprehensive list of such tasks would be virtually endless. It would include such activities as balancing a checkbook, following directions on a prescription medicine bottle, filling out a job application, consulting a bus schedule, correctly interpreting a chart in the newspaper, and using written instructions to operate a voting machine.
A common thread across all literacy tasks is that each has a purpose—whether that purpose is to pay the telephone bill or to understand a piece of poetry. All U.S. adults must successfully perform literacy tasks in order to adequately function—that is, to meet personal and employment goals as well as contribute to the community.[49]

* In 2003, NCES assessed the English literacy skills of U.S. residents aged 16 and older. The full assessment was nationally representative except for 5% of the population who were completely illiterate in English and Spanish or unable to answer very simple questions.[50]

* Below are some examples of the questions posed in the full assessment, along with the portions of people who answered them correctly:

• 82% correctly answered this question requiring the ability to search and interpret text:

Refer to the chart to answer the following question. For the year 2000, what is the projected percentage of Black people who will be considered middle class?

National Assessment of Adult Literacy Question N120601

[51]

• 60% correctly answered this question requiring the ability to search text, interpret it, and calculate using addition:

Refer to the medicine label to answer the following question. The patient forgot to take this medicine before lunch at 12:00 noon. What is the earliest time he can take it in the afternoon?

National Assessment of Adult Literacy Question C080101

[52]

• 46% correctly answered this question requiring the ability to search text, interpret it, and calculate using multiplication:

Refer to the article below to answer the following question. Suppose that a family’s budget for one year is $29,500 and that there is one child in the family. Using the percentage given in the article, calculate how much money would go toward raising the child for that year.

National Assessment of Adult Literacy Question N130901

[53]

• 18% correctly answered this question requiring the ability to search text, interpret it, and calculate using multiplication and division:

Refer to the advertisement for the Carpet Store on page three of the newspaper to answer the following question. Suppose that you want to carpet your living room which is 9 feet by 12 feet, and you purchase DuPont Stainmaster carpet at the sale price. Using the calculator, compute the total cost, excluding tax and labor, of exactly enough carpet to cover your living room floor.

National Assessment of Adult Literacy Question N091001

[54]

• 11% correctly answered this question requiring the ability to examine a data table, draw inferences from it, and accurately express them:

Refer to the table on the next page to answer the following questions. Using the information in the table, write a brief paragraph summarizing the extent to which parents and teachers agreed or disagreed on the statements about issues pertaining to parental involvement at their school.

National Assessment of Adult Literacy Question N100701

[55]


Whole-Person Development

* In 1841, Horace Mann, the “father” of the modern public education system in the U.S., wrote:

The Common [i.e., public] School is the institution which can receive and train up children in the elements of all good knowledge, and of virtue, before they are subjected to the alienating competitions of life. This institution is the greatest discovery ever made by man;—we repeat it, the Common School is the greatest discovery ever made by man.
Let the Common School be expanded to its capabilities, let it be worked with the efficiency of which it is susceptible, and nine tenths of the crimes in the penal code would become obsolete; the long catalogue of human ills would be abridged; men would walk more safely by day; every pillow would be more inviolable by night; property, life, and character held by a stronger tenure; all rational hopes respecting the future brightened.[56] [57] [58] [59]

* In 2009, the Pentagon estimated that 65% of all 17- to 24-year-olds in the U.S. were unqualified for military service because of weak educational skills, poor physical fitness, illegal drug usage, medical conditions, or criminal records.[60] In January 2014, the commander of the U.S. Army Recruiting Command estimated this figure at 77.5%.[61] In June 2014, the Department of Defense estimated this figure at 71%.[62]

* Per a 2001 study of high school dropouts published in the American Economic Review:

  • “It is common knowledge outside of academic journals that motivation, tenacity, trustworthiness, and perseverance are important traits for success in life.”
  • “It is thus surprising that academic discussions of skill and skill formation almost exclusively focus on measures of cognitive ability and ignore noncognitive skills.”
  • “Studies … demonstrate that job stability and dependability are traits most valued by employers as ascertained by supervisor ratings and questions of employers….”
  • “Our finding … demonstrates the folly of a psychometrically oriented educational evaluation policy that assumes cognitive skills to be all that matter.”[63]

K–12 Spending

* In the 2011-12 school year, governments in the U.S. spent an average of $12,401 for every student enrolled in K–12 public schools.[64] [65] These figures do not include:

* A nationally representative poll of 4,000 U.S. adults commissioned in 2015 by Education Next and the Kennedy School of Government at Harvard University found that Americans on average estimate that their local public schools spend $6,307 per student.[80] [81]

* In the 2011-12 school year, the average class size in public schools was 22.8 students, and the average spending per classroom was $283,000.[82] This excludes the items in bullet points above.[83]

* Excluding the items in the bullet points above, the average inflation-adjusted annual spending per public school student has risen by 21 times since 1919:

Inflation-Adjusted Public School Spending Per Student

[84]

* Since the early 1970s, U.S. school districts with higher percentages of minority students have on average spent about the same amount per student as districts with smaller portions of minority students.[85] [86] [87] [88]

* In the 2011-2012 school year, public school revenues came from the following sources:

Source                         Portion of Finances
Federal Government             10%
State Governments              45%
Local                          45%
   Property Taxes              36%
   Other Government Revenues   7%
   Private Revenues            2%

[89]

* In the 2011-2012 school year, private consumers, nonprofit organizations, and governments spent an average of about $6,469 for every student enrolled in private K–12 schools.[90] [91] [92] [93] [94] [95]

* The average class size in private schools is 18.8 students.[96] In the 2011-12 school year, the average spending per classroom was about $122,000.[97]

* In the 2011-12 school year, the average full tuition for students in K–12 private schools was $11,089. Full tuition or “sticker price” is “the highest annual tuition charged for a full-time student.” The actual amounts paid by individuals are lower if they receive discounts for reasons such as having low income, siblings in the school, or a parent who is a teacher. For different types of private schools, the average full tuition was as follows:

School Type        Tuition
Catholic           $7,114
Other religious    $8,973
Nonsectarian       $22,210

[98] [99]

* A nationwide study of 11,739 homeschooled students during the 2007-08 school year found that parents spent a median of $400 to $599 per student on “textbooks, lesson materials, tutoring, enrichment services, testing, counseling, evaluation,” and other incidentals.[100] [101] Regarding these findings:

  • The study was based on a survey with a response rate of approximately 19%.[102] Thus, the results are not definitive.[103] [104] [105]
  • Adjusted for inflation into 2014 dollars, the median annual cost to educate a homeschooled student ranged from $457 to $684.[106]
  • These figures do not account for the cost of parental time investment or the value of being able to live in areas without regard for the quality of the local schools.[107]

Spending by Function

* In the 2011-12 school year, 53% of public education spending was used for student instruction.[108] (This excludes state administration, unfunded pension liabilities, and post-employment benefits.[109]) The remainder was spent on:

Function                                                                                        Portion of Total
Property purchases and building construction                                                    8%
Operations and maintenance                                                                      8%
Administration                                                                                  6%
Student guidance, health, attendance, and speech pathology services                             5%
Instructional staff services, such as curriculum development, training, and computer centers    4%
Student transportation                                                                          4%
Food services                                                                                   3%
Interest on school debt                                                                         3%
Other                                                                                           4%

[110]

* In the 2011-12 school year, 70% of public school expenditures went to government employee benefits and salaries.[111] (This excludes state administration, unfunded pension liabilities, and post-employment benefits.[112])


Teacher Compensation

* In the 2011-12 school year, the average base salary for full-time public school teachers was $54,796. This ranged from a low of $38,368 in South Dakota to a high of $72,574 in New York. Base salaries do not include benefits, bonuses, or supplemental pay for extracurricular activities like coaching and student activities.[113]

* In the 2011-12 school year, the average base salary for full-time private school teachers was $41,507, or about 24% less than full-time public school teachers.[114]

* In March 2014, the average immediate cost of compensating full-time public school teachers was $58.72 per contract hour. This includes salaries, bonuses, supplemental pay, and some benefits.[115] The following caveats apply to this figure:

  • Immediate costs do not include unfunded pension liabilities and post-employment benefits like health insurance.[116] [117] [118] [119]
  • Contract hours do not include the added time that teachers work beyond their contractual schedules for lesson preparation and other nonclassroom activities.[120]

* In March 2014, the average immediate cost of compensating full-time private school teachers was $44.63 per contract hour, or about 24% less than full-time public school teachers.[121] [122] The following additional caveats apply to these figures:

  • The costs that are not included (unfunded pension liabilities and post-employment benefits) are common in the government sector and rare in the private sector.[123] [124]
  • In 2010, full-time private school teachers worked an average of 11% more hours than full-time public school teachers.[125]

* In the 2011-12 school year, the average immediate cost of compensating full-time public school teachers was $80,145, including salaries, bonuses, supplemental pay, and some benefits:

Immediate Costs of Teacher Compensation

Category                       Dollars    Portion of Total
Base salary                    $54,796    68%
Benefits                       $23,563    29%
Bonuses and supplemental pay   $1,786     2%
Total compensation             $80,145    100%

[126]

* Across the states, the average immediate cost of compensating full-time public school teachers in the 2011-12 school year ranged from $57,000 in South Dakota to $105,000 in New York.[127] [128]

* Full-time public school teachers work an average of 1,490 hours per year, including time spent for lesson preparation, test construction and grading, providing extra help to students, coaching, and other activities.[129] [130] [131] [132]

* Full-time private industry employees work an average of 2,045 hours per year, including time spent working beyond their assigned schedules at the workplace and at home.[133]

* Accounting for the disparity between the annual work hours of full-time public school teachers and full-time private industry workers, the annualized immediate cost of employing teachers in the 2011-12 school year was an average of $110,000 per teacher.[134] [135]
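A plausible reconstruction of this annualized figure (an assumption about the method, which is not spelled out above) is to scale the 2011-12 average immediate compensation cost by the ratio of private-industry to teacher work hours:

\[ \$80{,}145 \times \frac{2{,}045\ \text{hours}}{1{,}490\ \text{hours}} \approx \$110{,}000 \]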

* Across the states, the annualized immediate cost of employing full-time public school teachers in the 2011-12 school year ranged from an average of $78,000 in South Dakota to an average of $145,000 in New York.[136] [137]

K–12 Outcomes

General

* In the U.S., all 50 states provide children with at least 13 years of taxpayer-financed education from kindergarten through 12th grade.[138]

* The average public school year is 179 days, and the average school day is 6.7 hours not including transportation and extracurricular activities.[139]

* In 2012, approximately 88% of K–12 students were enrolled in public schools, 9% were enrolled in private schools, and 3% were homeschooled.[140] [141]

* Among public school students who began high school in 2009, 78% graduated within four years. This was true for:

  • 83% of white students.
  • 71% of Hispanic students.
  • 66% of black students.[142]

* In 2013, U.S. residents aged 25 and older:

  • with some high school education who did not graduate high school earned an average of $12,720 in cash compensation.[143] [144]
  • with a high school degree and no further education earned an average of $20,765 in cash compensation.[145] [146]

* See the Comparative Earnings section below for more data on education and earnings.


College Readiness

* In 2014, 57% of high school students who graduated that year took the ACT college readiness exam. Among these graduates, 26% met ACT’s college readiness benchmarks in all four subjects (English, reading, math, and science). For each subject, the rates of college readiness were as follows:

  • English – 64%
  • Reading – 44%
  • Math – 43%
  • Science – 37%[147]

* Among high school students who graduated in 2014 and took the ACT college readiness exam, the following racial/ethnic groups met ACT’s college readiness benchmarks in at least three of the four subjects:

  • Asian – 57%
  • White – 49%
  • Pacific Islander – 24%
  • Hispanic – 23%
  • American Indian – 18%
  • African American – 11%[148]

International Comparisons

* In reading literacy tests administered by the Progress in International Reading Literacy Study to 4th grade students during 2011, U.S. students ranked 6th among 45 nations. The average score of U.S. students was 11% above the average of all tested nations.[149]

* In reading literacy tests administered by the Program for International Student Assessment to 15-year-old students during 2012, U.S. students ranked 17th among 34 developed nations. The average score of U.S. students was the same as the average of all tested nations.[150] [151] [152]

* U.S. students outperformed the following nations on the 4th-grade reading exam but underperformed them on the 15-year-old reading exam: Canada, Poland, New Zealand, Australia, Netherlands, Belgium, Germany, France, Norway, and the United Kingdom. U.S. students did not move ahead of any other nation between the 4th grade and 15 years old.[153] [154]


* In math tests administered by the Trends in International Mathematics and Science Study to 4th grade students during 2011, U.S. students ranked 7th among 50 nations. The average score of U.S. students was 9% above the average of all tested nations.[155]

* In math tests administered by the Program for International Student Assessment to 15-year-old students during 2012, U.S. students ranked 27th among 34 developed nations. The average score of U.S. students was 3% below the average of all tested nations.[156] [157] [158]

* U.S. students outperformed the following nations on the 4th-grade math exam but underperformed them on the 15-year-old math exam: Netherlands, Poland, Belgium, Germany, Austria, Australia, Ireland, Slovenia, Denmark, New Zealand, Czech Republic, United Kingdom, Norway, Portugal, Italy, Spain, and the Slovak Republic. U.S. students did not move ahead of any other nation between the 4th grade and 15 years old.[159] [160]


* In 2013, Randi Weingarten, president of the American Federation of Teachers labor union, stated:

When people talk about other countries out-educating the United States, it needs to be remembered that those other nations are out-investing us in education as well.[161]

* In 2011, the U.S. ranked 5th among 32 developed nations in average spending per full-time K–12 student. The average spending per U.S. student was 35% above the average of these nations, and U.S. 15-year-olds ranked 16th in reading and 26th in math.[162] [163] [164]

* Among the same nations, U.S. 15-year-olds did not match or outperform any nation in both reading and math that outspent the U.S. The following nations matched or outperformed the U.S. in both reading and math while spending less than the U.S.:

Nation            U.S. Spending Premium   Math Advantage over U.S.   Reading Advantage over U.S.
Belgium           10%                     7%                         2%
Netherlands       15%                     9%                         3%
Denmark           16%                     4%                         0%
Ireland           20%                     4%                         5%
United Kingdom    22%                     3%                         0%
Germany           24%                     7%                         2%
Australia         26%                     5%                         3%
France            27%                     3%                         2%
Finland           29%                     8%                         5%
Japan             30%                     11%                        8%
New Zealand       34%                     4%                         3%
South Korea       55%                     15%                        8%
Poland            95%                     8%                         4%
Estonia           96%                     8%                         4%

[165] [166] [167]


Historical Perspective

* In 1885, the Jersey City, NJ, school district spent an average of $13.24 over the course of the year for each of the 14,926 students in average daily attendance.[168] Adjusted for inflation in 2014 dollars, this is an average of $336 per student per year.[169]

* Below are the arithmetic and algebra questions from the 1885 high school entrance exam in Jersey City, NJ. In order to enter high school, students had to score at least 75%. A copy of the full test and the names and scores of all passing students are shown in this footnote.[170]

Arithmetic

  1. If a 60 days note of $840 is discounted at a bank at 4½% what are the proceeds?
  2. Find the sum of √16.7281 and √.72¼.
  3. The interest of $50 from March 1st to July 1st is $2.50. What is the rate?
  4. What is the cost of 19 cwt. 83 lb. of sugar at $98.50 a ton? What is discount? A number?
  5. Divide the difference between 37 hundredths and 95 thousandths by 25 hundred thousandths and express the result in words.
  6. The mason work on a building can be finished by 16 men in 24 days, working 10 hours a day. How long will it take 22 men working 8 hours a day?
  7. A merchant sold a quantity of goods for $18,775. He deducts 5% for cash and then finds that he has made 10%. What did he pay for the goods?
  8. A requires 10 days and B 15 days to do a certain piece of work. How long will it take A and B working together to do the work?
  9. By selling goods at 12½% profits, a man clears $800. What was the cost of the goods, and for what were they sold?
  10. A merchant offered some goods for $1170.90 cash, or $1206 payable in 30 days. Which was the better offer for the customer, money being worth 10%?

Algebra

  1. Define Algebra, an algebraic expression, a polynomial. Make a literal trinomial.
  2. Write a homogeneous quadrinomial of the third degree. Express the cube root of 10ax in two ways.
  3. Find the sum and difference of 3x−4xy+7cd−4xy+16, and 10ay−3x−8xy+7cd−13.
  4. Express the following in its simplest form by removing the parentheses and combining: 1−(1−a)+(1−a+a²)−(1−a+a²−a³).
  5. Find the product of 3+4x+5x²−6x³, and 4−5x−6x².
  6. Expand each of the following expressions and give the theorem for each: [a+4]², [a²−10]², [a+4][a−4].
  7. Divide 6a⁴+4a³x−9a²x²−3ax³+2x⁴ by 2a²+2ax−x².
  8. Find the prime factors of x⁴−b⁴ and x³−1.
  9. Find the greatest common denominator of 6a²+11ax+3x² and 6a²+7ax−3x².
  10. Divide [x²−2xy+y²]/ab by [x−y]/bc and give the answer in its lowest terms.
  11. Change [2x²+5]/[x+3] to a mixed quantity.

* Between 1919 and 2011, the national average inflation-adjusted annual spending per public school student in daily attendance rose from $788 to $13,210.[171] This does not include state administration spending, unfunded pension liabilities, and post-employment benefits for government workers.[172]

Higher Education Spending

* In the 2012-13 school year, the average spending by colleges per full-time-equivalent student was as follows:

Control of Institution[173]   4-Year Colleges   2-Year Colleges
Public                        $38,073           $13,416
Private nonprofit             $50,413           $18,173
Private for-profit            $17,013           $15,392

[174] [175] [176]

* In the 2012-13 school year, the breakdown of spending by colleges on various functions was as follows:

Function                                   Public   Private Nonprofit   Private For-Profit
Instruction[177]                           27%      33%                 25%
Research[178]                              10%      11%                 0%
Public service[179]                        4%       1%
Academic support[180]                      7%       9%                  65%
Student services[181]                      5%       8%
Institutional support[182]                 8%       13%
Hospitals[183]                             10%      10%                 0%
Auxiliary enterprises[184]                 7%       9%                  2%
Operation and maintenance of plant[185]    6%       Included in other functions
Other                                      17%      5%                  8%

[186]

* During 2013, federal, state and local governments spent $167 billion on higher education.[187] This amounts to 85% of all spending by public and private colleges on functions that directly contribute to the education of students and the general public.[188] [189] This government spending does not include student loans (discussed below).[190]

* Between 1963 and 1980, the average annual sticker price for tuition, fees, room, and board for full-time undergraduate students at all colleges fell by 12%. Between 1980 and 2013, these rates rose by 150%:

Inflation-Adjusted College Tuition, Fees, Room, and Board

[191]

* The data above on college “sticker prices” are based on the published rates of colleges. The amounts paid by individual students are lower if they receive discounts, scholarships, or financial aid.[192]


Student Loans

* The federal government offers student loans that can be used to attend college, vocational schools, or trade schools.[193]

* There are different types of federal student loans, each with its own set of conditions and interest rates.[194] Most of these loans require borrowers to repay them within 10 years of finishing college.[195]

* Per the U.S. Treasury, the federal government creates loan programs so that people who are “unable to afford credit at the market rate” or have a “high risk” of defaulting can borrow money at “an interest rate lower than the market rate.”[196]

* Per the U.S. Congressional Budget Office, “When the government extends credit, the associated market risk of those obligations is effectively passed along to citizens….”[197]

* For people with good credit histories, the market rates on private student loans are sometimes lower than the rates on federal student loans.[198]

* In 1965, the 89th Congress and Democratic President Lyndon B. Johnson created a program to finance student loans for higher education. These loans were issued by private lenders and guaranteed against default by the federal government.[199] [200] [201]

* In 1993, the 103rd Congress and Democratic President Bill Clinton created a program to finance student loans directly from the U.S. Treasury. The law required that increasing portions of all new federal student loans be made through this program.[202] The bill passed Congress with 85% of Democrats voting for it and 100% of Republicans voting against it.[203]

* In 2010, the 111th Congress and Democratic President Barack Obama passed a law requiring that all new federal student loans be financed directly from the U.S. Treasury.[204] [205] [206] The bill passed Congress with 89% of Democrats voting for it and 100% of Republicans voting against it.[207]

* Between 2003 and the first quarter of 2015, the inflation-adjusted amount of student loans owed by Americans rose by 288%.[208]

* In the first quarter of 2015, Americans owed $1.2 trillion in student loans, or more than any other type of consumer debt except for mortgages.[209] [210] [211]

* In 2013, the 90+ day delinquency rate for student loans exceeded that of credit cards for the first time since reliable data on this measure became available in 2003.[212] In the first quarter of 2015, the student loan delinquency rate was 32% higher than any other major category of consumer loan:

Balance of Consumer Loans 90+ Days Delinquent

[213] [214]

* In the context of student loans:

  • “default” means that no payments have been made for at least 270 days.
  • “deferment” means that payments have been postponed for reasons such as “returning to school, military service, or economic hardship.”
  • “forbearance” means that payments have been temporarily suspended or reduced because of financial hardship.[215]

* In June 2014, 50% of all federal student loan balances were actively being repaid or less than 270 days delinquent. The remainder fell into the following categories:

  • 13% still in school
  • 12% in deferment
  • 11% in forbearance
  • 9% in default
  • 4% in a grace period
  • 1% other[216]

* In January 2015, 14.1% of federal student loans that were due to begin repayment in 2009 and 2010 were in default, and 43.6% were in forbearance.[217]

* Per a 2014 report by the U.S. Treasury Borrowing Advisory Committee:

A key concern is that students are taking on student loans because historically an education has been correlated with economic mobility; however, today an average of 40% of students at four-year institutions (and 68% of students in for-profit institutions) do not graduate within six years, which means they most likely do not benefit from the income upside from a higher degree yet have the burden of student debt.[218]

* Per Deborah J. Lucas, director of the MIT Center for Finance and Policy and former chief economist of the Congressional Budget Office[219]:

Government credit programs may have adverse consequences that must be weighed against their expected benefits. One concern is that credit subsidies will distort the allocation of capital in the economy and crowd out productive investments by households and firms.
A related concern is that credit subsidies tend to affect the price of goods and services so as to reduce the benefits to the intended beneficiaries. Consider the mortgage guarantees offered to first-time home buyers by the FHA [Federal Housing Authority]. The program increases the demand for housing, which in turn puts upward pressure on home prices. Such price increases benefit current homeowners at the expense of first-time home buyers, possibly offsetting the value of the mortgage subsidy. As another example, some observers point to the easy and low-cost access to federal student loans as fueling the steep rise in the cost of higher education in the last decade.
Easier access to credit markets is not always advantageous to program participants. Unsophisticated borrowers, such as some college students and first-time homebuyers, may not be fully aware of the costs and risks associated with accumulating high debt loads. Consumer protection and disclosure laws usually do not extend to the government, and there is the possibility that it will inadvertently offer poorly designed products that can harm consumers. …
A well-understood consequence of government credit provision is that it tends to create incentives for greater risk taking, particularly when a borrower becomes financially distressed. The reason is that a debtor with a guaranteed debt benefits from the upside if a gamble pays off, whereas the government shares in the losses if the gamble fails.[220]

* Since 1976, federal law has prohibited people from reneging on federal student loans by filing for bankruptcy (except in rare cases).[221] [222] [223] [224]

* In March 2015, President Obama instructed his administration to “develop recommendations for regulatory and legislative changes for all student loan borrowers, including possible changes to the treatment of loans in bankruptcy proceedings….”[225]

* Federal laws authorize more than 50 federal student loan forgiveness and repayment programs. Such programs reduce or eliminate student loan debt for various reasons, such as having income below certain thresholds or being a government employee for one to ten years.[226]

* In June 2014, President Obama announced that he would issue regulations limiting student loan payments to 10% of borrowers’ monthly incomes and forgiving all loans after 20 years of payments, or 10 years for government employees.[227]

* In June 2015, the Obama administration announced that it was forgiving the federal student loans of people who attended schools owned by Corinthian Colleges, Inc., a for-profit company that filed for bankruptcy under allegations of fraud.[228] [229] Per the administration’s press release:

  • Loans will be forgiven for students whose schools closed down while they were in attendance.
  • Loans will be forgiven for people “who believe they were victims of fraud, regardless of whether their school closed.”
  • Refunds will be issued to people for any student loan payments they already made.
  • The administration “will develop new regulations to clarify and streamline loan forgiveness” for other people who attended other colleges.[230]

Accreditation

* For a student to receive a federal loan to attend a specific college, the college must be accredited. This means that it must be officially certified as an institution that delivers quality education.[231]

* The process of accreditation takes place at least once every 10 years and is generally conducted by private non-profit agencies. These agencies are sanctioned by the Department of Education, which is under the authority of the U.S. president.[232] [233]

* Accrediting agencies have the power to sanction colleges by denying, suspending, or revoking their accreditation. These agencies can also take interim actions, such as placing colleges on probation and requiring them to submit financial reports.[234]

* In January 2015, the U.S. Government Accountability Office published the results of an investigation of accrediting agencies and the Department of Education from October 2009 through March 2014. The study found that:

  • the accreditors responsible for accrediting for-profit colleges “were no more likely to issue terminations or probations to schools with weaker student outcomes compared to schools with stronger student outcomes….” This includes outcomes such as graduation rates, dropout rates, and student loan default rates.
  • “for 36 of the 93 schools receiving federal student aid funds that were placed on probation by their accreditors in fiscal year 2012, we found no indication of follow-up activities by [the Department of] Education between the beginning of fiscal year 2012 and December 2013.”
  • a Department of Education “official noted that her team would never respond to accreditor probations because they occur too frequently to track and would disrupt other work.”[235]

* Per the study’s conclusion:

These findings raise questions about whether existing accreditor standards are sufficient to ensure the quality of schools, whether [the Department of] Education is effectively determining if these standards ensure educational quality, and whether federal student aid funds are appropriately safeguarded.[236]

* Five months after the results of this investigation were published, the Obama administration issued a press release stating:

Over the past six years, the Education Department has taken unprecedented steps to hold career colleges accountable for giving students what they deserve: a high-quality, affordable education that prepares them for their careers.[237]

Federal Accounting

* When the federal government lends money for student loans, it does not report these amounts as outlays in the federal budget. Instead, the budget reflects only what the government projects it will lose or gain on these loans.[238] [239]

* Under federal budget rules, the federal government typically projects that it will make money on student loans. Thus, the more money the government loans, the better the budget appears to be.[240]

* Federal budget rules do not account for the market risk of issuing student loans. Market risk stems from the possibility that the economy will perform worse than the government projects, which would increase default rates and have other negative effects on returns from these loans.[241]

* Per estimates made by the Congressional Budget Office in 2012:

  • The federal government projects that it will reap an average profit of 9% on the student loans that it makes between 2010 and 2020.
  • If the federal government accounted for the market risk of these loans, it would project an average loss of 12%.[242]
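The contrast between those two projections can be illustrated with a simplified, hypothetical loan. Discounting the same expected repayments at a low Treasury-style rate (roughly how the budget rules value loans) yields a projected gain, while discounting them at a higher rate that prices in market risk yields a projected loss. All figures below are invented for illustration and are not CBO numbers:

```python
# Simplified, hypothetical illustration of budget-rule vs. fair-value accounting
# for a government loan. All numbers are invented for illustration.

def present_value(cash_flows, rate):
    """Discount a list of annual cash flows (year 1, 2, ...) at a fixed rate."""
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cash_flows, start=1))

loan_amount = 10_000                  # disbursed today
expected_repayments = [1_200] * 10    # expected annual repayments, net of expected defaults

treasury_rate = 0.03        # low government borrowing rate (budget-rule style)
risk_adjusted_rate = 0.07   # higher rate reflecting market risk (fair-value style)

budget_rule_result = present_value(expected_repayments, treasury_rate) - loan_amount
fair_value_result = present_value(expected_repayments, risk_adjusted_rate) - loan_amount

print(f"budget-rule view: {budget_rule_result:+,.0f}  (projected gain)")
print(f"fair-value view:  {fair_value_result:+,.0f}  (projected loss)")
```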

Fraud

* For the 2012 tax year, 12.2 million tax filers (claiming 13.4 million students) received $19 billion in higher education tax credits.[243] Tax credits decrease the taxes that people must pay on a dollar-for-dollar basis, and some are refundable, which means that households with credits that exceed their income taxes receive the difference as cash payouts from the government. Per the IRS Inspector General, “the risk of fraud for these types of claims is significant.”[244] [245] [246] [247] [248]
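As a hypothetical illustration of refundability (invented numbers, ignoring the partial-refundability rules that apply to some education credits), a household that owes $600 in income tax and qualifies for a $1,000 fully refundable credit pays no income tax and receives the remainder as a payment:

\[ \$1{,}000\ \text{credit} - \$600\ \text{tax owed} = \$400\ \text{paid to the household} \]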

* In 2015, the IRS Inspector General published an investigation of higher education tax credits for the 2012 tax year. The investigation found that 3.6 million tax filers (claiming 3.8 million students) received $5.6 billion in credits “that appear to be erroneous based on IRS records.” Some examples include:

  • 1.6 million filers (claiming 1.7 million students) who received $2.5 billion in credits, even though the educational institutions listed on their tax forms were not eligible for the credits.
  • filers claiming 419,827 students who received at least five years of credits, even though they are legally limited to four years of credits.
  • 2,148 tax filers who received $3.9 million in credits for people who were incarcerated for the entire year.[249]

Higher Education Outcomes

General

* Institutions of higher learning are also known as colleges, universities, and post-secondary schools.[250] Such institutions award:

  • associate degrees for completing a program that typically requires 2-4 full-time school years.
  • baccalaureate (or bachelor’s) degrees for completing a program that typically requires 4-5 full-time school years.
  • master’s degrees, which typically require 1-2 full-time years of graduate school after obtaining a bachelor’s degree.[251]
  • doctoral academic (or Ph.D.) degrees, which typically require 5-10 years of full-time graduate school. The coursework for such degrees is largely geared toward people who intend to conduct research or become professors, although it typically provides little instruction in how to teach.[252] [253] [254] [255]
  • doctoral professional degrees, which require at least two years of full-time college work before entering the program and then at least six full-time years in the program. The coursework for such degrees is largely geared toward people who intend to practice in fields such as medicine, dentistry, law, and theology.[256] [257] [258]

* As of the fall of 2015, roughly 20.2 million students are attending U.S. colleges. Among these students:

  • 43% are males, and 57% are females.
  • 35% are at 2-year colleges, and 65% are at 4-year colleges.
  • 62% are attending full time, and 38% are attending part time.[259]

* Between 1960 and 2013, the portion of recent high school graduates (aged 16-24) enrolled in college:

  • increased from 45% to 66%.
  • increased from 54% to 64% for males.
  • increased from 40% to 68% for females.
Portion of High School Graduates Aged 16-24 Enrolled In College

[260]

* Among recent high school graduates of different racial/ethnic groups, the rates of college enrollment in 2013 were:

  • 80% for Asians.
  • 69% for whites.
  • 60% for Hispanics.
  • 57% for African Americans.[261]

Graduation Rates

* Among full-time, new college students who entered a 2-year college in 2010, 29% graduated from the same institution within 150% of the normal time required to do so. This was true for:

  • 63% of students at for-profit colleges.
  • 54% of students at nonprofit colleges.
  • 20% of students at public colleges.
  • 32% of female students.
  • 26% of male students.
  • 35% of Asian students.
  • 34% of Hispanic students.
  • 29% of white students.
  • 26% of mixed-race students.
  • 24% of American Indian students.
  • 24% of black students.[262]

* Among full-time, new college students who entered a 4-year college in 2007, 39% graduated from the same institution within four years.[263]

* Among full-time, new college students who entered a 4-year college in 2007, 59% graduated from the same institution within six years. This was true for:

  • 65% of students at nonprofit institutions.
  • 58% of students at public institutions.
  • 32% of students at for-profit institutions.
  • 62% of female students.
  • 56% of male students.
  • 70% of Asian students.
  • 68% of mixed-race students.
  • 63% of white students.
  • 52% of Hispanic students.
  • 41% of black students.
  • 41% of American Indian students.[264]

Earnings

* In 2013, people aged 25 and older:

  • with some college education who did not graduate earned an average of $26,537 in cash compensation.
  • with an associate’s degree and no further education earned an average of $31,298 in cash compensation.
  • with a bachelor’s degree and no further education earned an average of $46,987 in cash compensation.
  • with a master’s degree and no further education earned an average of $58,620 in cash compensation.
  • with a doctoral degree earned an average of $88,987 in cash compensation.
  • with a professional degree earned an average of $114,830 in cash compensation.[265] [266]

* See the Comparative Earnings section below for more data on education and earnings.


Effort and Grades

* Between 1961 and 2003, the average time spent by full-time college students on educational activities like attending class and studying dropped from roughly 40 hours per week to 27 hours per week.[267]

* On non-holiday weekdays during the school year, full-time college students spend an average of:

  • 3.3 hours on educational activities, as compared to 5.8 hours for high school students who are employed and 6.6 hours for high school students who are not employed.
  • 4.0 hours on leisure activities and sports, as compared to 3.6 hours for high school students who are employed and 4.4 hours for high school students who are not employed.[268]

* During the 2005-06 and 2006-07 school years, full-time students at 4-year colleges spent an average of about:

  • 27-28 hours per week, or 16-17% of their time, on educational activities.
  • 43 hours per week, or 26% of their time, on leisure activities and sports.[269]

* In 1960, roughly 15% of college course grades were A’s. By 1988, approximately 31% of grades were A’s. By 2009, about 43% of grades were A’s.[270]


Practical Skills

* The Collegiate Learning Assessment (CLA) is a test designed to measure the “core outcomes” of higher education, including “critical thinking, analytical reasoning, problem solving, and writing.”[271] This assessment evaluates how well college students perform “real-world tasks that are holistic and drawn from life situations.”[272] [273]

* In 2014, Professor Richard Arum of New York University and Assistant Professor Josipa Roksa of the University of Virginia published a study using the CLA to measure the “critical thinking, complex reasoning, and writing skills” of 1,666 full-time students who entered 4-year colleges in the fall of 2005 and graduated in the spring of 2009. The authors found that:

  • if the test “were rescaled to a one-hundred-point scale, approximately one-third of students would not improve more than one point over four years of college.”
  • “after four years of college, an average-scoring student in the fall of his or her freshman year would score at a level only eighteen percentile points higher in the spring of his or her senior year. Stated differently, freshmen who entered higher education at the 50th percentile would reach a level equivalent to the 68th percentile of the incoming freshman class by the end of their senior year.”
  • “students attending high-selectivity institutions improve on the CLA substantially more than those attending low-selectivity institutions, even when models are adjusted for students’ background and academic characteristics. … While students in more selective institutions gain more on the CLA, their gains are still modest….”[274] [275]
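For context on the percentile figures in the second bullet above: assuming roughly normally distributed CLA scores, moving an average student from the 50th to the 68th percentile of the incoming freshman distribution corresponds to a gain of about

\[ \Phi^{-1}(0.68) \approx 0.47\ \text{standard deviations} \]

over four years, where \( \Phi^{-1} \) is the inverse of the standard normal distribution function.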

* Using test questions from the National Center for Education Statistics’ adult test of practical literacy, the American Institutes for Research assessed the literacy skills of 1,827 graduating college students in 2003. These students were randomly selected from across the U.S., and each was graded as Proficient, Intermediate, Basic, or Below Basic on three different types of literacy:[276]

1) Prose Literacy, which is the ability to “search, comprehend, and use information from continuous texts,” such as “editorials, news stories, brochures, and instructional materials.” Students who were proficient in this included:

  • 38% of males and 37% of females at 4-year colleges.
  • 24% of males and 22% of females at 2-year colleges.
  • 42% of whites, 29% of Hispanics, 23% of Asians/Pacific Islanders, and 16% of blacks at 4-year colleges.
  • 27% of whites, 22% of Hispanics, 11% of blacks, and 7% of Asians/Pacific Islanders at 2-year colleges.[277]

2) Document Literacy, which is the ability to “search, comprehend, and use information from noncontinuous texts,” such as “job applications, payroll forms, transportation schedules, maps, tables, and drug or food labels.” Students who were proficient in this included:

  • 43% of males and 38% of females at 4-year colleges.
  • 24% of males and 24% of females at 2-year colleges.
  • 45% of whites, 35% of Hispanics, 20% of Asians/Pacific Islanders, and 17% of blacks at 4-year colleges.
  • 28% of whites, 18% of Asians/Pacific Islanders, 15% of Hispanics, and 17% of blacks at 2-year colleges.[278]

3) Quantitative Literacy, which is the ability to “identify and perform computations … using numbers embedded in printed materials,” such as “balancing a checkbook, figuring out a tip, completing an order form, or determining the amount of interest on a loan from an advertisement.” Students who were proficient in this included:

  • 39% of males and 30% of females at 4-year colleges.
  • 20% of males and 16% of females at 2-year colleges.
  • 40% of whites, 20% of Asians/Pacific Islanders, 19% of Hispanics, and 5% of blacks at 4-year colleges.
  • 24% of whites, 14% of Hispanics, 7% of blacks, and 3% of Asians/Pacific Islanders at 2-year colleges.[279]

* The study also found:

  • “The literacy of students in 4-year public institutions was comparable to the literacy of students in 4-year private institutions.”
  • “Prose literacy was higher for students in selective 4-year colleges, though differences between selective and nonselective 4-year colleges for document and quantitative literacy could not be determined because of the sample size.”
  • “College students come from a variety of economic backgrounds, with some students supporting themselves and others relying on their families to pay for tuition and other necessities. Despite variations in income, most differences in the literacy of students across income groups were not significant.”[280]
College Student Literacy Scores and Family Income

[281]


* In 2007, the Association of American Colleges and Universities commissioned a poll of employers who hire people with bachelor’s degrees to assess their views of recent college graduates. The poll included 301 employers, had a margin of sampling error of plus or minus 5.7 percentage points, and found the following results:

  • Employers give recent college graduates “the highest marks for teamwork, ethical judgment, and intercultural skills, and the lowest scores for global knowledge, self-direction, and writing.”
  • On a scale of 1-10, in “none of the 12 areas tested does a majority of employers give college graduates a high rating (or ‘8,’ ‘9,’ or ‘10’) for their level of preparedness.”

Area                      Average Rating   Portion Giving 8-10 Ratings   Portion Giving 1-5 Ratings
Teamwork                  7.0              39%                           17%
Ethical judgment          6.9              38%                           19%
Intercultural skills      6.9              38%                           19%
Social responsibility     6.7              35%                           21%
Quantitative reasoning    6.7              32%                           23%
Oral communication        6.6              30%                           23%
Self-knowledge            6.5              28%                           26%
Adaptability              6.3              24%                           30%
Critical thinking         6.3              22%                           31%
Writing                   6.1              26%                           37%
Self-direction            5.9              23%                           42%
Global knowledge          5.7              18%                           46%

[282]

Comparative Earnings

* In 2013, U.S. residents aged 25 and older had average cash earnings of $32,851.[283] Cash earnings do not include non-cash compensation, such as employee fringe benefits.[284] For varying levels of education, average cash earnings were as follows:

Average Cash Earnings of People Aged 25+

[285] [286]

* In 2013, 65% of U.S. residents aged 25 and older had at least some cash earnings, and 35% did not have any cash earnings. For varying levels of education, the rates were as follows:

Portion of People Aged 25+ With Cash Earnings

[287] [288]

* Among U.S. residents aged 25 and older who had cash earnings in 2013, average cash earnings were $50,316. Among these same people, median cash earnings were $37,332.[289] For varying levels of education, median cash earnings were as follows:

Median Cash Earnings of People Aged 25+ With Earnings

[290] [291]

Preschool Spending

* During 2014, private consumers and nonprofit organizations in the U.S. spent $14.9 billion on day care and preschools/nursery schools.[292] [293] [294] [295] [296]

* During 2005, the federal government funded 69 programs that provided or subsidized education and/or childcare for children under the age of five.[297]

* The largest federal education/childcare program for preschoolers is called “Head Start.”[298] During the federal government’s 2013 fiscal year, Head Start served 1,082,264 children and 6,391 pregnant women at some point during the year.[299]

* In its 2014 fiscal year, the federal government spent an average of $9,272 for each person enrolled in Head Start. This does not include additional funds from state governments.[300]

* Federal law requires that at least 90% of Head Start enrollees have incomes below 130% of the federal poverty line. To determine whether the law was being enforced, the U.S. Government Accountability Office (GAO) conducted 15 undercover tests of Head Start centers in six states from October 2008 through April 2010. The investigation found:

  • “In 8 instances staff at these centers fraudulently misrepresented information, including disregarding part of the families’ income to register over-income children into under-income slots.”
  • “At no point during our registrations was information submitted by GAO’s fictitious parents verified, leaving the program at risk that dishonest persons could falsify earnings statements and other documents in order to qualify.”
  • One Head Start staffer “explained that families often lie about being separated or divorced in order to reduce their income and that Head Start is not strict about checking whether that is true.”
  • The “lack of documentation made it virtually impossible to determine whether only under-income children were enrolled in spots reserved for under-income children.”[301]

Preschool Outcomes

General

* During 2013, 31% of all 3-to-4 year-olds in the U.S. were enrolled in government-run education programs. In 1970, this figure was 9%.[302]

* During 2013, 24% of all 3-to-4 year-olds in the U.S. were enrolled in private education programs. In 1970, this figure was 12%.[303]

* In 2013, President Obama called on Congress to fund certain initiatives that would allow every child in the U.S. from birth to age five to have access to government-run early learning programs. Specifically, he called for funding to:

  • provide “new, full-day” Early Head Start programs for children from birth to age three.
  • allow all four-year-olds from families with incomes at or below 200% of the poverty line to be enrolled in government preschools.[304]

* In May 2015, U.S. Senator Patty Murray (D - WA) introduced a bill that would enact much of President Obama’s early learning agenda. As of August 2015, the bill had 24 cosponsors, including 23 Democrats and a self-described “democratic socialist” who caucuses with the Democrats. As of August 2015, the Senate, which had a Republican majority, had not taken any action on this bill.[305] [306] [307] [308] [309] [310]


Head Start

* The largest federal education/childcare program for preschoolers is Head Start, which “provides comprehensive educational, social, health, and nutritional services to low-income preschool children and their families.”[311] [312] [313] [314]

* Head Start operates mostly during the school year and has full-day and part-day programs. When Head Start programs are in session, the average participant attends about 24 to 28 hours per week.[315] [316]

* From 2002 through 2008, the U.S. Department of Health & Human Services conducted a nationally representative study of 3- and 4-year-old children whose parents had applied for enrollment in Head Start and were found to be eligible. The study included 4,667 children from high-poverty communities. The design and results were as follows:

  • The children were randomly assigned to groups that were either enrolled in Head Start or not enrolled in Head Start due to a lack of available slots.
  • Among the children not enrolled in Head Start, about 60% were placed by their parents in other types of preschool programs.
  • The researchers measured 41 outcomes relating to the children’s educational performance, physical health, emotional development, and parental interactions up through 3rd grade.
  • The researchers found that “there were initial positive impacts from having access to Head Start, but by the end of 3rd grade there were very few impacts,” and among these, some were positive and some were negative with no “clear pattern” in either direction.[317]

High/Scope Perry Program

* From 1962-1967, a Ph.D. public school administrator named David Weikart led a study of 123 preschool-aged children in a town near Detroit named Ypsilanti, Michigan. This famous study is known as the “High/Scope Perry Preschool” study, because HighScope is the name of the research firm that Weikart later founded, and the study was conducted on children who lived near the Perry Elementary School in Ypsilanti.[318] [319] [320] [321]

* The study’s design was as follows:

  • To be included in the study, children had to be 3-4 years old, African American, impoverished, and have an IQ ranging from 70-85 (as compared to the national average of 111 at the time).[322] [323]
  • The children were randomly assigned to groups that were either enrolled in the preschool program or not enrolled.[324] [325]
  • The preschool curriculum was “centered around play that is based on problem-solving and guided by open-ended questions” like ‘What happened? How did you make that? Can you show me? Can you help another child?’ [326]
  • Most of the children who attended the program did so for two years but some for only one year.[327] [328]
  • The children in the program attended preschool for 2.5 hours per weekday from mid-October through May. A teacher also visited each student once per week at his or her home for 1.5 hours.[329] [330] Per child, this is a total of 14 hours per week, 462 hours per year, or 924 total hours for those who attended two full years.[331]
  • The child/teacher ratio ranged from 5:1 to 6:1.[332]
  • The preschool program cost about $21,000 per student in inflation-adjusted 2014 dollars. Adjusted for the cost growth of public schooling since 1965, the program cost about $55,000 per student.[333] [334] [335]
  • When the study participants were ages 4-10, 12, 14, 17-19, 27, and 40, researchers measured “numerous factors” relating to their careers, finances, criminal history, education, intellect, and personality.[336] [337] [338]
  • The sample groups that were evaluated consisted of roughly 25 males and 25 females who were in the program and 25 males and 25 females who were not.[339] [340]
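The hour totals in the schedule above follow from simple arithmetic. Below is a minimal sketch; the 33-week program year (mid-October through May) is an assumption implied by the cited totals rather than a figure stated in the sources.

```python
# Rough reconstruction of the Perry per-child hour totals.
classroom_hours_per_week = 2.5 * 5      # 2.5 hours per weekday
home_visit_hours_per_week = 1.5         # one 1.5-hour home visit per week
hours_per_week = classroom_hours_per_week + home_visit_hours_per_week

weeks_per_year = 33                     # assumed length of mid-October through May
hours_per_year = hours_per_week * weeks_per_year
hours_two_years = 2 * hours_per_year

print(hours_per_week, hours_per_year, hours_two_years)  # 14.0 462.0 924.0
```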

* The authors of a 2008 paper in the Journal of the American Statistical Association examined the outcomes of the four Perry sample groups and found the following statistically significant outcomes at different ages:

  • At age 5, the average IQs of males and females in the program were respectively 11 and 13 points higher than those not in the program.
  • At age 18, females in the program had an 84% graduation rate, as opposed to 35% for those not in the program.
  • At age 19, 5% of the females in the program had been arrested, as opposed to 42% of those not in the program.
  • At age 19, 40% of the females in the program were unemployed, as opposed to 71% of those not in the program.
  • At age 27, females in the program had been arrested an average of 0.32 times, as opposed to 2.3 times for those not in the program.
  • At age 27, 40% of the females in the program were married, as opposed to 8% of those not in the program.[341]

* The authors of the study also found:

  • “In contrast to females, males appear to not derive lasting benefits” from the Perry program.
  • Studies of two other preschool programs with children from similar backgrounds have replicated the early IQ and female graduation rate outcomes of the Perry program.
  • Previous studies that found other benefits from the Perry program have “serious statistical” problems, because “the samples are very small,” and the researchers failed to account for a common issue with studies that measure numerous outcomes: seemingly significant results “emerge simply by chance, even if there are no” actual effects.[342] [343]
  • Studies of another preschool program with children from similar backgrounds who spent 10 times as many hours in preschool have not replicated the large reductions in criminality that statistically flawed studies of the Perry program have found.[344] [345] [346]

* Using “novel statistical approaches” to account for “small sample sizes” and a “corrupted randomization” process in the original study, researchers at the University of Chicago found several other statistically significant outcomes between the four Perry sample groups at different ages. For example:

  • At age 27, 80% of the females in the program were employed, as opposed to 55% of those not in the program.
  • At age 40, females in the program had been arrested an average of 2.2 times, as opposed to 4.8 times for those not in the program.
  • At age 19, 70% of the males in the program were employed, as opposed to 50% of those not in the program.
  • At age 27, males in the program earned an average of $2,310 per month, as opposed to $1,430 for those not in the program.
  • At age 40, males in the program had been arrested an average of 8.2 times, as opposed to 12.4 times for those not in the program.[347] [348]

* Given the sample sizes of the four Perry groups (roughly 25 each), the approximate margin of error with 95% confidence for any outcome is ± 20 percentage points.[349] [350] Per an academic textbook on statistical analysis by University of Pennsylvania professor Paul D. Allison:

There’s very little information in a small sample, so estimates of correlations are very unreliable. … Almost anyone would consider a sample less than 60 to be small, and virtually everyone would agree that a sample of 1,000 or more is large.[351]
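The ±20-percentage-point figure above can be reproduced with the standard normal-approximation formula for a proportion’s 95% confidence interval. The sketch below is illustrative only; it assumes a group size of 25 and the worst-case proportion of 0.5.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a sample proportion (normal approximation)."""
    return z * math.sqrt(p * (1 - p) / n)

# Each Perry sample group contains roughly 25 subjects.
print(round(margin_of_error(25) * 100))  # ~20 percentage points
```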

* Policymakers and activists have pointed to the Perry program as a reason to enact universal government preschool.[352] [353] [354] [355] [356] [357] Per an academic book on applied statistics by Harvard Ph.D. and social psychologist Rebecca M. Warner:

  • “Researchers in the behavioral and social sciences almost always want to make inferences beyond their samples,” but this is “always risky.”
  • It is “questionable to generalize” the results of a study to populations who are “drastically different” from the subjects of a study.[358] [359]

* The subjects of the Perry study (black, impoverished, IQ from 70-85) represented 2% of the U.S. population and 16% of the African American population at the time the study was conducted.[360] [361]


Abecedarian Project

* From 1972-1977, researchers at the University of North Carolina led a study of 111 preschool-aged children in the area of Chapel Hill, NC. This study is known as the “Abecedarian Project,” because that was the name of the main curriculum used in the program.[362] [363] [364]

* The study’s design was as follows:

  • Children included in the study “were believed to be at risk of retarded intellectual and social development.” Most were African Americans whose mothers had about 10 years of education and an IQ of 85. Roughly 75% of the children were from single-parent households, and 55% of the households were receiving cash welfare.[365]
  • The children were randomly assigned to groups that were either enrolled in the preschool program or not enrolled.[366] [367]
  • The preschool curriculum was focused on “developing cognitive, language, and social skills.” [368] [369]
  • The children in the program attended from shortly after birth (at an average age of 4.4 months) until they began kindergarten.[370]
  • The children in the program attended preschool for 8-10 hours per weekday and 50 weeks per year. Per child, this is 40-50 hours per week, 2,000-2,500 hours per year, and a total 8,000-10,000 hours for those who attended for four years. This is roughly 10 times more hours than the Perry program.[371] [372]
  • The child/teacher ratio ranged from 3:1 to 6:1.[373]
  • Based on the child/teacher ratio, the total classroom time, and the cost growth of public schooling since the 1970s, the Abecedarian program would cost about $222,000 per student to implement today.[374]
  • When the study participants were ages 2-8, 10, 12, 15, 18, and 21, researchers measured numerous factors relating to their careers, criminal history, education, intellect, and personality.[375]
  • The sample groups who were evaluated consisted of roughly 25 males and 25 females who were in the program and 25 males and 25 females who were not.[376] [377]

* The authors of a 2008 paper in the Journal of the American Statistical Association examined the outcomes of the four Abecedarian sample groups and found the following statistically significant outcomes at different ages:

  • At age 12, the average IQ of females in the program was 8 points higher than those not in the program.
  • At age 21, 40% of the females in the program were in college, as opposed to 11% of those not in the program.
  • At age 21, 4% of the females in the program were marijuana users, as opposed to 36% of those not in the program.[378]

* The authors of the study also found:

  • Previous studies that found other benefits from the Abecedarian program have “serious statistical” problems, because “the samples are very small,” and the researchers failed to account for a common issue with studies that measure numerous outcomes: seemingly significant results “emerge simply by chance, even if there are no” actual effects.[379] [380]
  • The Abecedarian subjects did not show significant reductions in criminality like previous studies of the Perry program had found, even though the Abecedarian children spent 10 times as many hours in preschool.[381] [382] [383]

* Policymakers and activists have cited the Abecedarian Project as a reason to enact universal government preschool.[384] [385]

School Choice

Overview

* Laws in all 50 U.S. states generally compel people to:

  • pay taxes that fund government-run K–12 schools.[386] [387] [388]
  • send their children to specific public schools based on physical boundaries around their homes unless they:
    • pay additional money for private school.
    • spend additional money and/or time for homeschooling.[389] [390]

* School choice initiatives allow parents to select the schools their children attend with part or all of the costs paid by their taxes or other government revenues. This can include:

  • public schools outside a child’s school district.
  • charter and magnet schools. [391] [392] [393] [394]
  • private schools.
  • tutors and homeschools.[395]

* In the U.S., government revenues regularly fund the education of students who attend private colleges and universities but rarely students who attend private K–12 schools.[396] [397] [398]

* In other economically advanced nations—like Austria, Canada, Spain, France, Hungary, Australia, New Zealand, and the Netherlands—government revenues commonly fund the education of students who attend private K–12 schools and sometimes those who are homeschooled.[399]

* In different nations, governments exercise varying amounts of centralized control over public and private schools. Public schools in some countries have more autonomy than private schools in others.[400]

* Per the academic serial work Handbook of Research on School Choice:

Much of the debate over school choice is based on the premise that there is a public monopoly over the provision of schooling and that schools are inefficient, in part, because of the absence of competition. If families could be treated as consumers and had the right to freely choose which kind of education they would prefer for their children, choice advocates assert that both government and non-government schools would improve….[401]

Costs

* In the 2011-12 school year, the average spending per student enrolled in public K–12 schools was $12,401.[402] [403] This excludes state administration spending, unfunded pension liabilities, and post-employment benefits.[404]

* In the 2011-12 school year, the average spending per student enrolled in private K–12 schools was about $6,469.[405] [406] [407] [408] [409] [410]

* Per the academic textbook Antitrust Law:

Monopoly pricing confronts the consumer with false alternatives: the product that he chooses because it seems cheaper actually requires more of society’s scarce resources to produce. Under monopoly, consumer demands are satisfied at a higher cost than necessary.[411] [412] [413]

* Per the U.S. Supreme Court’s unanimous decision in Abood v. Detroit Board of Education:

A public employer, unlike his private counterpart, is not guided by the profit motive and constrained by the normal operation of the market.
Although a public employer, like a private one, will wish to keep costs down, he lacks an important discipline against agreeing to increases in labor costs that in a market system would require price increases.[414]

* Governments are subject to certain types of competition, because people and businesses sometimes migrate to locations where governments provide better value for their tax dollars, and because voters sometimes remove politicians for reasons such as increasing taxes and government spending.[415] [416]


Effects on Students

NOTE: In order to curb the methodological trickery that besets public policy debates, Just Facts has developed Standards of Credibility that call for the presentation of “data in its rawest comprehensible form.” However, the results of all experimental studies on the academic outcomes of students who experience school choice are more processed than Just Facts would prefer. Thus, instead of ignoring them or attempting to analyze all of the raw data, Just Facts has briefly summarized all of these studies and documented their results in the footnotes below.[417]

* At least 11 experimental (or quasi-experimental) studies have been conducted on the academic outcomes of students who experience school choice. Ten of them found statistically significant positive effects on certain groups of students, and none found statistically significant negative effects.[418] [419] [420] [421] [422] [423] [424] [425] [426] [427]

* In a 2014 interview, Bill O’Reilly asked Barack Obama, “Why do you oppose school vouchers when it would give poor people a chance to go to better schools?” Obama replied:

Actually—every study that’s been done on school vouchers, Bill, says that it has very limited impact if any.
I’ve taken a look at it. As a general proposition, vouchers has not significantly improved the performance of kids that are in these poorest communities.[428]

* A 2010 experimental study of a school voucher initiative in the District of Columbia published by the Obama administration’s Department of Education found the following statistically significant results:

  • Students who applied for a voucher and did not win a lottery to receive one had a graduation rate of 70%.
  • Students who applied for a voucher and won a lottery to receive one had a graduation rate of 82%.
  • Students who applied for a voucher, won a lottery to receive one, and then used it had a graduation rate of 91%.[429] [430]

* Per a 2004 report by the Civil Rights Project at Harvard University, the Urban Institute, Advocates for Children of New York, and the Civil Society Institute:

In an increasingly competitive global economy, the consequences of dropping out of high school are devastating to individuals, communities and our national economy. At an absolute minimum, adults need a high school diploma if they are to have any reasonable opportunities to earn a living wage. A community where many parents are dropouts is unlikely to have stable families or social structures.[431] [432] [433]

* The 2012 Democratic Party Platform states:

Too many students, particularly students of color and disadvantaged students, drop out of our schools, and Democrats know we must address the dropout crisis with the urgency it deserves.[434]

* In 2013, the Journal of Policy Analysis and Management published an experimental study of the same District of Columbia voucher initiative by the same lead author. The study found the following statistically significant results:

  • “The impact of using a [voucher] scholarship was an increase of 21 percentage points in the likelihood of graduating. The positive impact of the program on this important student outcome was highly statistically significant.”
  • “Our analysis indicated a marginally statistically significant positive overall impact of the program on reading achievement after at least four years.”
  • “We did find evidence to suggest that scholarship use boosted student reading scores by the equivalent of about one month of additional learning per year.”[435]

* In 2011, the Quarterly Journal of Economics published an experimental study of a public school choice initiative in the 20th largest school district in the nation (Charlotte-Mecklenburg, North Carolina). The study compared the adult crime outcomes of male students who won and did not win a lottery for their parents’ first choice of school. The author found the following statistically significant results:

  • “Across various schools and for both middle and high school students, I find consistent evidence that winning the lottery reduces adult crime.”
  • “The effect is concentrated among African American males and youth who are at highest risk for criminal involvement.”
  • “Across several different outcome measures and scalings of crime by severity, high-risk youth who win the lottery commit about 50% less crime.”
  • “They are also more likely to remain enrolled and ‘on track’ in school, and they show modest improvements on school-based behavioral outcomes such as absences and suspensions.”[436] [437]

* Per a 2006 book about school choice written by Harvard professors William G. Howell and Paul E. Peterson:

No publicly funded voucher program offers all students within a political jurisdiction the opportunity to attend the private school of their choice. All are limited in size and scope, providing vouchers only to students who come from low-income families, who attend “failing” public schools, or who lack a public school in their community.
Most publicly funded voucher programs today are so small that they do little to enrich the existing educational market.
Most privately funded voucher programs operating today promise financial support for only three to four years.
In the short term, vouchers may yield some educational benefits to the low-income families that use them. But sweeping, systemic change will not materialize as long as small numbers of vouchers, worth small amounts of money, are offered to families for short periods of time. The claims of vouchers’ strongest advocates as well as those of the most ardent opponents, both of whom forecast all kinds of transformations, will be put to the test only if and when the politics of voucher programs stabilizes, support grows, and increasing numbers of educational entrepreneurs open new private schools.[438]

Effects on Government Schools

* The primary measure of school resources is spending per student.[439] [440] [441]

* School choice initiatives that allow students to attend private schools typically increase the funding per student in public schools, because public schools do not have to educate students who leave and because private schools typically spend less per student than public schools.[442] [443]

* Certain school costs are fixed in the short term (like buildings), and thus, the cost savings of educating fewer students occurs in steps instead of linearly. This means that private school choice programs can temporarily decrease the funding per student in public schools.[444]
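The two points above can be illustrated with purely hypothetical numbers (none of the figures below come from the cited sources). If a departing student takes a voucher worth less than the district spends per student, the remaining budget divided by the remaining enrollment rises once enrollment-driven costs adjust; with genuinely fixed costs, that adjustment happens in steps rather than immediately.

```python
def per_student_funding(total_budget, enrollment):
    """Per-student funding is the remaining budget divided by the remaining enrollment."""
    return total_budget / enrollment

# Hypothetical district: $12,000 spent per student, vouchers worth $7,000 each.
students, spending_per_student, voucher = 1000, 12_000, 7_000
budget = students * spending_per_student

departing = 50
new_budget = budget - departing * voucher   # district loses only the voucher amount per departure

print(per_student_funding(budget, students))                   # 12000.0 before
print(per_student_funding(new_budget, students - departing))   # ~12263.16 after
```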

* In 2011, the journal Education Next published an experimental study of a Florida school choice initiative that offered private and public school vouchers to students enrolled in chronically failing public schools. The study compared the academic gains of public school students whose schools were eligible for vouchers and public school students whose schools were not eligible for vouchers. The study found the following statistically significant results:

  • On the Florida Comprehensive Assessment Test, “gains in test scores were 15 points higher among those schools whose students were eligible for vouchers than the gains among the rest of Florida’s public schools. Schools whose students were on the verge of becoming eligible also made greater gains.”
  • “The same pattern—of greater gains among schools facing competition or the threat thereof—was witnessed on the national Stanford-9 exam, confirming that the gains reflect genuine improvements in learning rather than teaching to the test or cheating.”
  • After one year, “the gains among [chronically failing] schools whose students were eligible for vouchers were enough to erase almost one-fifth of the [achievement] gap between their average score in the 2001-02 school year and the average score of all other Florida public schools.”[445]

* In 2013, the Journal of School Choice: International Research and Reform published a systematic review of 21 “high-quality” studies about the academic outcomes of U.S. students who remain in public schools after other students leave through choice programs. This review was designed to measure the “effects of competition on traditional public schools whose enrollments are threatened” by private school choice programs. The author found:

  • “All but one of these 21 studies found neutral/positive or positive results” on public school students.
  • None of the studies found negative results on public school students.
  • The experimental studies, which are studies that are able to determine causal effects, “unanimously find positive impacts on student academic achievement.”
  • “The only study to find no effects across all subjects … was restricted to a relatively small number of participants in the year this study was conducted. Furthermore, a ‘hold-harmless’ provision ensured that public schools were insulated from the financial loss from any students that transferred into private schools with a voucher. The absence of a positive competition effect is thus unsurprising, given these design features.”[446]

Politics

* According to donations reported to the Federal Election Commission, the following education groups were among the top 100 organizations that gave the most money to federal candidates, parties, political action committees, and related organizations during the 2002-2014 election cycles:

  • National Education Association (NEA) – rank in the top 100: 4; total contributions: $92,972,656; portion to Democrats & liberal groups: 96%
  • American Federation of Teachers (AFT) – rank in the top 100: 6; total contributions: $69,757,113; portion to Democrats & liberal groups: 99%

[447] [448] [449]

* The NEA and AFT are labor unions.[450] For facts about the accuracy of union donations reported to the Federal Election Commission, visit Just Facts’ research on labor unions.

* In 2009, the president of the NEA sent an open letter to Democrats in the U.S. House and Senate stating that “opposition to [private school] vouchers is a top priority for NEA.”[451]

* The 2012 Democratic Party Platform supports “public school options for low-income youth, including magnet schools, charter schools, teacher-led schools, and career academies.” The platform is silent on all other forms of school choice.[452]

* The 2012 Republican Party Platform supports school choice for “all children” through “charter schools, open enrollment requests, college lab schools, virtual schools, career and technical education programs, vouchers, or tax credits.”[453]
 

* The President of the United States appoints justices to the Supreme Court. These appointments must be approved by a majority of the Senate.[454] Senate rules allow for a “filibuster,” in which a vote to approve a Supreme Court justice can be blocked unless three-fifths of the senators (typically 60 out of 100) agree to let it take place.[455] [456]

* Once seated, federal judges serve for life unless they voluntarily resign or are removed through impeachment, which requires a majority vote of the House of Representatives and a two-thirds majority vote in the Senate.[457]

* In 2002, the U.S. Supreme Court ruled (5-4) that a school choice initiative in Cleveland was constitutional (details below). Five of the seven justices appointed by Republicans ruled that it was constitutional, and both of the justices appointed by Democrats ruled that it was not.[458]


Positions and Actions

* A nationally representative poll of 4,000 U.S. adults commissioned in 2015 by Education Next and the Kennedy School of Government at Harvard University found that the following portions of Americans:

  • are opposed to giving “all families with children in public schools a wider choice, by allowing them to enroll their children in private schools instead, with government helping to pay the tuition”:
    • 57% of teachers
    • 43% of whites
    • 36% of the general public
    • 30% of parents
    • 18% of African Americans
    • 13% of Hispanics
  • have ever enrolled their own children in private K–12 schools:
    • 22% of teachers
    • 18% of whites
    • 14% of the general public
    • 14% of parents
    • 14% of African Americans
    • 8% of Hispanics[459] [460]

* An analysis of U.S. Census data from the year 2000 by the Thomas B. Fordham Institute (a proponent of school choice) found that the following portions of parents were sending at least one of their own children to a private K–12 school:

  • 12.2% of all households with children
  • 17.5% of urban households with children
  • 21.5% of urban public school teacher households with children[461] [462]

* The following opponents of private school choice personally attended and also sent their own children to private K–12 schools:

* The American Civil Liberties Union (ACLU) opposes taxpayer-funded private school choice programs. One of the ACLU’s arguments for this stance is that:

School voucher schemes would force all taxpayers to support religious beliefs and practices with which they may strongly disagree.[481]

* The ACLU supports taxpayer-funded abortions. With regard to whether all taxpayers should be forced to support practices with which they may strongly disagree, the ACLU asks the following rhetorical question:

What about those who are morally or religiously opposed to abortion?

And answers:

Our tax dollars fund many programs that individual people oppose.[482]

Affluence and Connections

* Per the academic serial work Handbook of Research on School Choice:

It may be misleading … to distinguish traditional public schools as “unchosen.” Some parents choose to live near excellent public schools and thereby choose their children’s schools by residential location.[483] [484]

* Per the academic reference book 21st Century Geography, “economically depressed populations with limited access to resources … have restricted choices on where they can live….”[485]

* In 2013, homes in top-ranked school districts cost an average of $50 more per square foot than homes in average-ranked school districts.[486]

* In 2009, Barack Obama’s Secretary of Education, Arne Duncan, was asked, “Where does your daughter go to school, and how important was the school district in your decision about where to live?” Duncan replied:

She goes to Arlington public schools. That was why we chose where we live, it was the determining factor. That was the most important thing to me. … I didn’t want to try to save the country’s children and our educational system and jeopardize my own children’s education.[487]

* In 2014, families living in Arlington, Virginia, had a median cash income of $137,337, which was the highest of all counties in the United States.[488] [489]

* When Arne Duncan was the chief executive of the Chicago public school system, his office contacted school principals to help the children of politically connected parents get into better public schools. Per a 2010 Chicago Tribune article:

Whispers have long swirled that some children get spots in the city’s premier schools based on whom their parents know. But a list maintained over several years in Duncan’s office and obtained by the Tribune lends further evidence to those charges.
The log is a compilation of politicians and influential business people who interceded on behalf of children during Duncan’s tenure.
After getting a request … [Duncan’s staffers] would look up the child’s academic record. If the student met their standard, they would call the principal of the desired school.
[A Duncan staffer] said the calls from his office were not directives to the principals—no one was ever told they had to accept a student. Often, students did not get any of their top choices but were placed in larger, less competitive, but still desirable schools….
The initials “AD” are listed 10 times as the sole person requesting help for a student, and as a co-requester about 40 times. [A Duncan staffer] said “AD” stood for Arne Duncan, though Duncan’s involvement is unclear.[490]

Court Rulings

* In the 2002 case of Zelman v. Simmons-Harris, the U.S. Supreme Court ruled (5-4) that a school choice initiative in Cleveland was constitutional. This program provided tuition aid for students:

to attend participating public or private schools of their parent’s choosing and tutorial aid for students who choose to remain enrolled in public school. Both religious and nonreligious schools in the district may participate, as may public schools in adjacent school districts. Tuition aid is distributed to parents according to financial need, and where the aid is spent depends solely upon where parents choose to enroll their children.[491]

* The Zelman case hinged upon:

  • the “Establishment of Religion” clause in the First Amendment to the Constitution, which prohibits Congress from making any law “respecting an establishment of religion, or prohibiting the free exercise thereof.”
  • the Fourteenth Amendment to the Constitution, which, among other things, made the First Amendment applicable to state and local governments.[492] [493] [494]

* Per the majority ruling in Zelman:

The Ohio program is entirely neutral with respect to religion. It provides benefits directly to a wide spectrum of individuals, defined only by financial need and residence in a particular school district. It permits such individuals to exercise genuine choice among options public and private, secular and religious. The program is therefore a program of true private choice. In keeping with an unbroken line of decisions rejecting challenges to similar programs, we hold that the program does not offend the Establishment Clause.[495]

* Per a dissent by Justice David Souter:

In the city of Cleveland the overwhelming proportion of large appropriations for voucher money must be spent on religious schools if it is to be spent at all, and will be spent in amounts that cover almost all of tuition. The money will thus pay for eligible students’ instruction not only in secular subjects but in religion as well, in schools that can fairly be characterized as founded to teach religious doctrine and to imbue teaching in all subjects with a religious dimension.[496]

* Per a concurrence by Justice Sandra Day O’Connor, the Cleveland school choice program:

pales in comparison to the amount of funds that federal, state, and local governments already provide religious institutions. … Although data for all states is not available, data from Minnesota, for example, suggest that a substantial share of Pell Grant and other federal funds for college tuition reach religious schools.[497]

* Per a dissent by Justice John Paul Stevens:

I am convinced that the Court’s decision is profoundly misguided. Admittedly, in reaching that conclusion I have been influenced by my understanding of the impact of religious strife on the decisions of our forbearers to migrate to this continent, and on the decisions of neighbors in the Balkans, Northern Ireland, and the Middle East to mistrust one another. Whenever we remove a brick from the wall that was designed to separate religion and government, we increase the risk of religious strife and weaken the foundation of our democracy.[498]

* Per a concurrence by Justice Clarence Thomas, the Cleveland program:

does not force any individual to submit to religious indoctrination or education. It simply gives [poor] parents a greater choice as to where and in what manner to educate their children. This is a choice that those with greater means have routinely exercised.[499]

* State supreme courts have ruled differently regarding whether various school choice programs are prohibited by their respective constitutions.[500] For example, the states of Florida and Indiana both enacted school choice programs that allowed certain children to attend private schools, but:

  • in 2006, the Florida Supreme Court ruled (5-2) that the program violated the state’s constitution, which calls for a “uniform, efficient, safe, secure, and high quality system of free public schools….”[501]
  • in 2013, the Indiana Supreme Court ruled (5-0) that the program did not violate the state’s constitution, which calls for a “general and uniform system of Common Schools.”[502]

* As of 2015, the Friedman Foundation for Educational Choice has identified 59 school choice programs in 28 states and the District of Columbia.[503]

Common Core

Overview

* Per the official Common Core website:

The Common Core is a set of high-quality [K–12] academic standards in mathematics and English language arts/literacy (ELA). These learning goals outline what a student should know and be able to do at the end of each grade.[504]

* The Common Core standards were developed and are maintained by the Common Core State Standards Initiative (CCSSI), which is a joint project of the Council of Chief State School Officers and the National Governors Association’s Center for Best Practices.[505] [506] [507]

* The Council of Chief State School Officers is a nonprofit organization controlled by the chief public education officials of each state, the District of Columbia, the U.S. military, and each U.S. territory.[508]

* The National Governors Association is an organization funded by the states and controlled by the governors of 55 U.S. states, territories and commonwealths.[509] The Center for Best Practices is a nonprofit organization that is “an integral part of the National Governors Association” but is “funded through federal grants and contracts, fee-for-service programs, private and corporate foundation contributions, and NGA’s Corporate Fellows program.”[510]

* The Bill and Melinda Gates Foundation was the primary funder of Common Core. This organization is controlled by Bill Gates, the wealthiest person in the world.[511] [512]

* In July 2009, CCSSI announced the names of 29 people that it had chosen to write the Common Core standards. The press release stated that:

  • these individuals were organized into two “working groups” consisting of 15 people for math and 14 people for ELA.
  • a “feedback group” would offer “expert input on draft documents,” but “final decisions” on the standards would be made by the working groups.
  • all “deliberations” of the working groups would “be confidential throughout the process.”[513]

* In September 2009, CCSSI announced that six governors and six chief state school officers had appointed a 29-person “validation committee” of education experts to review and certify the Common Core standards.[514]

* As a condition of being on the validation committee, each member had to agree to keep all “deliberations, discussions, and work” of the committee “strictly confidential” in perpetuity.[515] [516]

* In June 2010, the validation committee issued a report certifying the standards. The report listed 24 people who had signed the standards and:

  • stated that CCSSI had “convened a 25-member Validation Committee” (as opposed to the 29 members it actually convened[517]).
  • did not explicitly state that four of the committee members listed among the authors of the report refused to certify the standards.[518] [519]

* At least two of the committee members who declined to certify the standards have publicly criticized them and the process by which they were created.[520] [521]

* In November 2009, the Obama administration issued regulations governing how states could compete for $4.35 billion in federal education funds under its “Race to the Top” program. These regulations required states to demonstrate their “commitment to adopting a common set” of K–12 education standards. The regulations also stipulated that states would earn “high points” if they adopted the same standards as the “majority of the States in the country.”[522] [523] [524] [525]

* By September 2011, 44 states and the District of Columbia had adopted the Common Core standards.[526] [527]

* By early 2014, 45 states had adopted the Common Core math and ELA standards, and one had adopted just the ELA standards. Six of these 46 states adopted the standards through legislative action, and 40 adopted the standards through decisions made by state boards of education or chief education officials.[528]

* In March and June of 2014, three state legislatures and governors passed laws withdrawing their states from Common Core.[529]

* CCSSI maintains a list of states and territories that are actively implementing Common Core.[530]

* CCSSI asserts that the Common Core standards:

  • “represent what American students need to know and do to be successful in college and careers.”[531]
  • “are for the benefit of all students.”[532]
  • “are research- and evidence-based.”[533]

* Two of the Common Core validation committee members who refused to validate the standards assert that they:

  • “barely prepare students for attending a community college, let alone a 4-year university.”
  • employ approaches to math and geometry that have yielded “bad outcomes” for students.
  • are not justified by “suitable research.”[534] [535] [536] [537]

* Organizations other than CCSSI are developing common standards for science, world languages, and arts.[538]


Centralization and Decentralization

* In the field of education, “centralization” refers to the transfer of decision-making authority from individuals, teachers, schools, and local governments to state or national governments. “Decentralization” is the opposite of centralization.[539] [540]

* Common Core is a form of educational centralization, because it specifies “a single body of knowledge and skills that students … will be expected to possess.”[541]

* The alleged benefits of centralizing education include but are not limited to:

  • More equity between schools with regard to standards, curriculum, testing, graduation requirements, funding, and teacher qualifications.[542] [543]
  • Reduced costs through economies of scale, so that certain tasks are not repeated needlessly.[544] [545]
  • Increased effectiveness in nations where students have similar cultural, ethnic, and linguistic backgrounds.[546]
  • Greater likelihood of equipping students with broader skills that transcend local and regional variations.[547]
  • The ability to rapidly spread educational improvements across schools.[548] [549]

* The alleged benefits of decentralizing education include but are not limited to:

  • More flexibility for educators to teach and motivate students based upon their personal aptitudes, backgrounds, interests, and goals.[550] [551] [552] [553] [554]
  • Reduced bureaucracy, leading to lower costs and less “bureaucratic stagnation, centralized inefficiencies, and corruption.”[555] [556] [557] [558]
  • Increased opportunity for communities to be involved in education and greater ability for them to effect change.[559] [560]
  • Higher likelihood of equipping students with skills that are needed in the areas where they live.[561] [562]
  • Less proliferation of counterproductive or ineffectual policies favored by central authorities.[563]

* The following factors make it difficult to determine the effects of centralization and decentralization:

  • the complexity of measuring centralization.[564] [565] [566] [567] [568]
  • dynamics that may cause centralization to be helpful in some settings and harmful in others.[569] [570]
  • numerous confounding variables that affect students and school systems.[571] [572]
  • a dearth of experimental studies on this issue.[573] [574] [575]

* CCSSI asserts that a “root cause” of U.S. academic stagnation has been “an uneven patchwork of academic standards that vary from state to state and do not agree on what students should know and be able to do at each grade level.”[576]

* In October 2015, Just Facts asked CCSSI to provide “specific studies” that prove academic stagnation has been caused by differing state education standards.[577] CCSSI responded but did not provide such research.[578]

* Between 1920 and 2012, the portion of K–12 public school funding provided by:

  • local governments decreased from 83% to 45%.
  • state governments increased from 16% to 45%.
  • the federal government increased from 0.3% to 10%.[579]

* As the federal and state governments have funded a growing share of K–12 school expenses, the U.S. education system has become increasingly centralized. This has transferred decision-making power from community schools to higher levels of government through:

  • a decrease in public school districts from about 117,000 in 1940 to 15,000 in 2000.[580]
  • district control over school personnel and curriculums.[581]
  • state-mandated “uniform standards across grade levels, schools, and districts.”[582]
  • state requirements on teacher certification, unionization, and binding arbitration.[583] [584] [585] [586] [587] [588] [589] [590]
  • federal laws and regulations that require and incentivize states to adopt various standards, assessments, and policies.[591] [592] [593] [594]

* Per a 1980 academic book on the U.S. education system:

[T]he American assumption is that communities constitute the unit most capable of running the schools. While the state may mandate that districts’ boundaries be redrawn, the notion that a particular state might be capable of running all schools within its boundaries is unthinkable in the American context.[595]

* Per a 1997 academic book on education decentralization:

Site-based management [SBM] is a business derivative of decentralization and participatory decision-making. The intent of site-based management is to improve student performance by making those closest to the delivery of services—teachers and principals—more autonomous, resulting in their being more responsive to parents and students concerns.
While many schools in the United States claim to implement SBM, very little decision-making is truly decentralized. In most cases SBM is only a subset of the various types of decisions that are made at the district level. … The illusion of autonomy based on SBM is often constrictive because the district office retains the final authority or limits the range of decision-making….[596]

Math

* The complete Common Core math standards are available here.[597]

* R. James Milgram, Emeritus Professor at Stanford University’s Department of Mathematics, was the only mathematician who served on Common Core’s validation committee.[598] [599] [600] He refused to certify the standards and has been critical of them.[601]

* Other mathematicians have supported and opposed the standards.[602] [603] [604]


* CCSSI asserts that the math standards “call for speed and accuracy in calculation.”[605]

* The math standards require first graders to “think of whole numbers between 10 and 100 in terms of tens and ones” and solve problems such as:

  • “8 + 6” with “strategies” like this: “8 + 2 + 4 = 10 + 4 = 14”
  • “13 – 4” by “decomposing” numbers like this: “13 – 3 – 1 = 10 – 1 = 9”
  • “6 + 7” by “creating equivalent but easier or known sums” like this: “6 + 6 + 1 = 12 + 1 = 13”[606]
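The “make ten” and “decompose through ten” strategies in the examples above amount to a short procedure. The sketch below is illustrative only (the function names are not taken from the standards), and it assumes single-digit addends that sum past ten and teen-number subtraction that crosses back over ten.

```python
def add_by_making_ten(a, b):
    """Add by first completing a ten, e.g. 8 + 6 = (8 + 2) + 4 = 10 + 4."""
    to_ten = 10 - a               # portion of b needed to reach 10
    leftover = b - to_ten         # portion of b that remains after making ten
    return f"{a} + {b} = {a} + {to_ten} + {leftover} = 10 + {leftover} = {10 + leftover}"

def subtract_through_ten(a, b):
    """Subtract by decomposing the subtrahend to pass through 10, e.g. 13 - 4 = 13 - 3 - 1."""
    down_to_ten = a - 10          # amount removed to land on 10
    leftover = b - down_to_ten    # amount still to subtract after reaching 10
    return f"{a} - {b} = {a} - {down_to_ten} - {leftover} = 10 - {leftover} = {10 - leftover}"

print(add_by_making_ten(8, 6))       # 8 + 6 = 8 + 2 + 4 = 10 + 4 = 14
print(subtract_through_ten(13, 4))   # 13 - 4 = 13 - 3 - 1 = 10 - 1 = 9
```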

* The math techniques above are illustrated in the following videos produced by a local NBC television station. The station made these videos so that parents can help students “who find the math lessons confusing.” The lessons are taught by a local public school math teacher:[607]

Addition using Base 10 for 1st Grade & Older

Subtraction Using Place Value Chart (2nd Grade)

* CCSSI asserts that the Common Core standards are “research- and evidence-based.”[608]

* In October 2015, Just Facts asked CCSSI to provide “specific studies” that prove the math strategies above are effective.[609] CCSSI responded but did not provide such research.[610] [611] [612]


* The Common Core math standards compel students to explain “why a particular mathematical statement is true or where a mathematical rule comes from.”[613]

* The following sample question and answer are from a teaching guide for 3rd grade Common Core math from the North Carolina Department of Public Instruction:

Question: “What do you notice about the numbers highlighted in pink in the multiplication table? Explain a pattern using properties of operations.”
[Image: “Common Core Math Verbalization Problem” – a multiplication table with certain products highlighted in pink]
Answer: “When (commutative property) one changes the order of the factors they will still gets the same product, example 6 x 5 = 30 and 5 x 6 = 30.”[614]

* Per W. Stephen Wilson, Ph.D. mathematician, professor of mathematics at Johns Hopkins University, and Common Core supporter:[615] [616]

There will always be people who believe that you do not understand mathematics if you cannot write a coherent essay about how you solved a problem, thus driving future STEM [science, technology, engineering and math] students away from mathematics at an early age. A fairness doctrine would require English language arts (ELA) students to write essays about the standard [math] algorithms, thus also driving students away from ELA at an early age. The ability to communicate is NOT essential to understanding mathematics.[617]

* Per CCSSI:

There is a world of difference between a student who can summon a mnemonic device [i.e., a reminder of a rule] to expand a product such as (a + b)(x + y) and a student who can explain where the mnemonic comes from. The student who can explain the rule understands the mathematics, and may have a better chance to succeed at a less familiar task such as expanding (a + b + c)(x + y). Mathematical understanding and procedural skill are equally important….[618]

* In October 2015, Just Facts asked CCSSI to provide “specific studies” proving that forcing students to verbalize “why a particular mathematical statement is true” improves their math education.[619] CCSSI responded but did not provide such research.[620] [621] [622]


* The Common Core math standards require students to solve math problems by using “concrete models,” “drawings,” and “objects.”[623] The following video shows an example of this:

Homework Helper: Division with a Remainder (4th Grade & Up)

* Per a 1989 meta-study of student learning styles published in Educational Leadership and republished in 2002 in the California Journal of Science Education:

  • “Learning style is a biologically and developmentally imposed set of personal characteristics that make the same teaching method effective for some and ineffective for others.”
  • Students have differing sensory learning preferences, such as sight, sound, and touch.
  • Students learn better and achieve higher test scores when they are taught with instructional resources that correspond to their sensory preferences.[624] [625]

* CCSSI asserts that the Common Core standards “are for the benefit of all students.”[626]

* In October 2015, Just Facts asked CCSSI to provide “specific studies” that prove drawing pictures and using objects improve students’ math abilities.[627] CCSSI responded but did not provide such research.[628] [629] [630]


English Language Arts

* The complete Common Core standards for “English Language Arts & Literacy in History/Social Studies, Science, and Technical Subjects” are available here.[631]

* Sandra Stotsky was the only expert on K–12 English language arts (ELA) standards who served on Common Core’s validation committee.[632] The validation committee report states that she is an:

Endowed Chair in Teacher Quality at the University of Arkansas’s Department of Education Reform and Chair of the Sadlier Mathematics Advisory Board
Stotsky has abundant experience in developing and reviewing ELA standards. As senior associate commissioner of the Massachusetts Department of Education, she helped revise pre–K–12 standards. She also served on the 2009 steering committee for NAEP reading and on the 2006 National Math Advisory Panel.[633]

* Stotsky refused to certify the standards and has been critical of them.[634] [635]

* Per CCSSI, the ELA standards:

  • “set grade-specific standards but do not define the intervention methods or materials necessary to support students who are well below or well above grade-level expectations.”[636]
  • “focus on what is most essential,” but “they do not describe all that can or should be taught. A great deal is left to the discretion of teachers and curriculum developers.”[637]
  • “intentionally do not include a required reading list. Instead, they include numerous sample texts to help teachers prepare for the school year and allow parents and students to know what to expect during the year.”[638]
  • require “much greater attention to a specific category of informational text—literary nonfiction—than has been traditional.”[639] [640]
  • require students to “evaluate the argument and specific claims in a text, assessing whether the reasoning is valid and the evidence is relevant and sufficient….”[641]
  • require students to “evaluate a speaker’s point of view, reasoning, and use of evidence and rhetoric, identifying any fallacious reasoning or exaggerated or distorted evidence.”[642]

* The ELA standards assert that “a particular standard was included in the document only when the best available evidence indicated that its mastery was essential for college and career readiness in a twenty-first-century, globally competitive society.”[643]


Impact on Curriculum and Teaching

* CCSSI states that the Common Core standards are “not a curriculum.”[644]

* Per a 2003 academic book about middle school education standards:

No issue currently impacts the middle level school more than curriculum reform based on state and national standards. … In fact, the aligning of curriculum and instruction to specific state content standards has become a universal teaching skill now taught in colleges of education and practiced in literally all school districts. Does this mean that the content of middle level curriculum is being controlled by the content of state standards, and, to some degree, the content of the state tests that are based on these standards? Certainly, without a doubt.[645]

* In 2009, Bill Gates, the primary financial backer of Common Core, wrote that “identifying common standards is not enough. We’ll know we’ve succeeded when the curriculum and the tests are aligned to these standards.”[646] [647]

* In 2010, the Common Core validation committee wrote that “alignment of curricula and assessments to the Common Core State Standards … will be essential to the staying power and lasting impact of the standards.”[648]


* CCSSI asserts that the Common Core standards “do not dictate how teachers should teach.”[649]

* In 2014, Bill Gates wrote that the Common Core standards “are a blueprint of what students need to know, but they have nothing to say about how teachers teach that information.”[650]

* The Common Core ELA standards state that they “do not mandate such things as a particular writing process or the full range of metacognitive strategies that students may need to monitor and direct their thinking and learning.”[651] [652] The Common Core math standards do not contain a similar statement.[653]

* The Common Core math standards dictate the specific teaching processes and learning strategies shown in the examples above.

* Mathematician and Common Core supporter Hung-Hsi Wu has written that the Common Core math standards “say explicitly what needs to be taught about” the “process of reasoning” for solving equations.[654] In a commentary for American Educator, Wu detailed how Common Core requires the use of certain processes for adding fractions and mandates that these processes be taught over three years from grades 3 through 5. The first part of the 3rd grade teaching process is as follows:

Briefly, in grade 3, students learn to think of a fraction as a point on the number line that is “so many copies” of its corresponding unit fraction. For example, 5/6 is 5 copies of the unit fraction 1/6 (and 1/6 is 1 copy). When we represent a fraction as a point on the number line, we place a unit fraction such as 1/6 on the division point to the right of 0 when the unit segment from 0 to 1 is divided into 6 equal segments. It is natural to identify such a point with the segment between the point itself and 0. Thus, as shown below, 1/6 is identified with the red segment between 0 and 1/6, 5/6 is identified with the segment between 0 and 5/6, etc. Then, the statement that “5/6 is 5 copies of 1/6” acquires an obvious visual meaning: the segment from 0 to 5/6 is 5 copies of the segment from 0 to 1/6.[655]
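The “copies of a unit fraction” representation in the passage above can be sketched in a few lines. This is only an illustration of the idea Wu describes, not code from any curriculum or standard.

```python
from fractions import Fraction

def copies_of_unit_fraction(numerator, denominator):
    """Build n/d as n copies of the unit fraction 1/d laid end to end from 0."""
    unit = Fraction(1, denominator)
    point = sum(unit for _ in range(numerator))   # e.g. 5 copies of 1/6 land on 5/6
    return unit, point

unit, point = copies_of_unit_fraction(5, 6)
print(f"{point.numerator} copies of {unit} reach the point {point} on the number line")
# 5 copies of 1/6 reach the point 5/6 on the number line
```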

* For more facts about the impact of Common Core on teaching processes, see the forthcoming section on standardized tests.


Standardized Tests

* Tests (standardized and otherwise) can be used to:

  • “diagnose students’ strengths and weaknesses.”
  • “serve as the basis for teacher reflections on their instructional effectiveness.”
  • help “teachers to identify students who need additional instruction, special services, or more advanced work.”[656]
  • motivate students to learn and cognitively assist them in this process.[657] [658] [659]
  • help colleges and employers evaluate potential students and job candidates.[660] [661]
  • help parents, taxpayers, and policymakers evaluate the effectiveness of educators and education policies.[662]

* Per a 1980 academic book on the U.S. education system:

If no standardization exists, schools can postulate anything as satisfying graduation requirements. The development of standardized testing constitutes a response to this problem in the United States, but many graduates are led to believe that they have received a certain kind of education when, in reality their achievement is low.[663]

* When parents and governments do not have access to valid information about student outcomes, school employees have leeway to minimize their workloads and favor their own interests over those of the students. Per a 2005 paper in the journal Education Economics, standardized exams can help remedy this problem “by supplying information about the performance of individual students relative to the national (or regional) student population.”[664] [665]

* Standardized tests can provide valid information about student outcomes if they accurately measure the desired effects of education. In education literature, this is called test validity. Per the Encyclopedia of Educational Psychology:

Validity is the extent to which a test measures what it was designed to measure. This means that tests are designed for specific purposes, and each test must have its own validity for the purpose for which it was designed. … That is, a test may consistently measure the wrong thing. Establishing test validity is thought to be a more complex process than establishing test reliability because establishing validity depends on the judgments to be made based on test results and how the results will be used. It is necessary to collect information as evidence that a test provides a true measure of such abstractions. To validate that tests provide true measures, certain information or evidence must be collected depending on the type of validity to be determined.[666]

* Per the Encyclopedia of Measurement and Statistics:

  • Standardized “test scores should never be used for purposes that are not validated.”
  • “The process of validation is a responsibility of the test developer and the sponsor of the testing program.”
  • “A technical report or test manual” should show “the argument and evidence supporting each intended test score interpretation or use.”[667]

* In 2010, the Obama administration awarded $330 million to two state-led consortiums to develop standardized tests that are aligned to Common Core:[668]

  1. The Partnership for Assessment of Readiness for College and Careers (PARCC)[669]
  2. The Smarter Balanced Assessment Consortium (SBAC)[670]

* In 2011, the Obama administration announced that it would exempt states from various requirements of federal education law if the states adhered to four conditions. The first of these was to adopt “college- and career-ready standards” and administer standardized tests aligned with these standards.[671] [672] [673] CCSSI refers to Common Core as “college- and career-readiness standards.”[674]

* Among the 46 states that adopted the Common Core standards, at least 26 became members of the PARCC consortium at some point, and at least 31 became members of the SBAC consortium at some point.[675] [676]

* In the 2014-15 school year, the first time the PARCC and SBAC tests were administered, 11 states and the District of Columbia used the PARCC exam, and 18 states used the SBAC exam.[677] [678] [679]

* Since the 2014-15 school year, several states that previously used the PARCC and SBAC exams have announced that they will not use them in the future.[680] [681] [682] [683]

* PARCC and SBAC each maintain a list of states that are current members of the consortiums.[684] [685]

* Per a 2001 book on educational assessments published by the National Academies of Science:

[P]olicy makers see large-scale assessments of student achievement as one of their most powerful levers for influencing what happens in local schools and classrooms. Increasingly, assessments are viewed as a way not only to measure performance, but also to change it, by encouraging teachers and students to modify their practices.[686]

* David Coleman was a lead writer for the Common Core ELA standards, a cofounder of an organization that “played a leading role in developing” the standards, and one of the key people who lobbied Bill Gates to fund Common Core.[687] [688] [689] In 2011, Coleman stated that the Common Core standards:

are worthy of nothing if the assessments built on them are not worthy of teaching to, period. … [T]he great rule that I think is a statement of reality, though not a pretty one, which is teachers will teach towards the test. There is no force strong enough on this earth to prevent that. … Tests exert an enormous effect on instructional practice, direct and indirect, and it’s hence our obligation to make tests that are worthy of that kind of attention. It is in my judgment the single most important work we have to do over the next two years to ensure that that is so, period.[690]

* In 2012, Coleman became president of the College Board, the organization that produces the SAT college entrance exam and Advanced Placement tests.[691] [692] [693] [694]

* In 2013, Coleman announced that the College Board was going to “redesign the SAT” to “prepare students for the rigors of college and career.”[695]

* In 2014, the College Board published a “conversation guide” for the redesigned SAT that posed the question, “Is the SAT aligned to the Common Core?” The guide answered:

The redesigned SAT measures the skills and knowledge that evidence shows are essential for college and career success. It is not aligned to any single set of standards.[696]

* Starting in March 2016, the redesigned SAT will replace the former version.[697]

Homeschooling

* Homeschooling is the oldest form of education, and it was common practice until public schools became prevalent in the mid-1800s.[698] [699] [700] [701]

* In 2012, approximately 1.77 million children, or 3.4% of K–12 students in the U.S., were homeschooled. These categories are not mutually exclusive, because some homeschooled students also took classes and played sports at public and private schools and colleges.[702] [703] [704]

* In 2007, the rates at which parents homeschooled their children varied with the parents’ level of education:

  • High school diploma or less – 1.4%
  • Vocational/technical or some college – 3.8%
  • Bachelor’s degree/some graduate school – 4.1%
  • Graduate/professional degree – 2.5%[705]

* Homeschooling is legal throughout the U.S., although state regulations vary widely. In the state of Washington, for example, parents must meet specified qualifications (such as completing a minimum amount of college coursework or being supervised by a certified teacher) in order to homeschool.[706] [707] [708] [709]

* Homeschooling is permitted in most nations.[710] Germany has generally prohibited homeschooling since 1938 when the Nazi government enacted a law that effectively banned it.[711] [712] [713] Some other nations that ban or strictly limit homeschooling include Bulgaria, Greece, and the Netherlands.[714]

* A 2007 survey of parents who homeschool their children found that they did so for the following reasons (parents could cite more than one reason):

  • Concern about the school environment, such as safety, drugs, or negative peer pressure – 88%
  • Desire to provide religious or moral instruction to their children – 83%
  • Dissatisfaction with academic instruction at other schools – 73%
  • Desire to take a nontraditional approach to education – 65%
  • Increased family time, financial considerations, flexibility to travel, or lack of proximity to an appropriate school – 32%
  • Having a child with special needs “other than a physical or mental health problem that the parent feels the school cannot or will not meet” – 21%
  • Having a child with a physical or mental health problem – 11%[715]

* In 2010, the journal Academic Leadership published a nationwide study of 11,739 homeschooled students during the 2007-08 school year. It found that parents spent a median of $400 to $599 per student on “textbooks, lesson materials, tutoring, enrichment services, testing, counseling, evaluation,” and other incidentals.[716] [717] Regarding these findings:

  • The study was based on a survey with a response rate of approximately 19%.[718] Thus, the results are not definitive.[719] [720] [721]
  • The families who participated in the survey were more likely than the general population to have bachelor’s degrees, be married, and not be racial minorities.[722] [723]
  • Adjusted for inflation into 2014 dollars, the median annual cost to educate a homeschooled student ranged from $457 to $684 (see the sketch after this list).[724]
  • These figures do not account for the cost of parental time investment or the value of being able to live in areas without regard for the quality of the local schools.[725]
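The inflation adjustment in the bullet above follows the standard price-index formula: adjusted amount = nominal amount × (index in target year ÷ index in base year). A minimal sketch, assuming approximate CPI-U annual averages for 2007 and 2014 (the source does not state which index it used):

```python
# Minimal sketch of a CPI-based inflation adjustment.
# The index values below are approximate CPI-U annual averages and are
# assumptions for illustration; the source's exact methodology is not stated.
CPI_2007 = 207.3
CPI_2014 = 236.7

def to_2014_dollars(amount_2007: float) -> float:
    """Convert a 2007 dollar amount into 2014 dollars."""
    return amount_2007 * CPI_2014 / CPI_2007

low, high = 400, 599
print(f"${to_2014_dollars(low):.0f} to ${to_2014_dollars(high):.0f}")
# -> roughly $457 to $684, matching the figures above
```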

* The same study in Academic Leadership examined the academic performance of 22,584 homeschooled students who took standardized tests administered by three major testing services. This was the broadest sample of homeschooled student test scores ever studied. The researcher found that, in each of the five academic subjects examined, the average scores of these students placed at or above the 80th percentile of all U.S. students (percentile ranking is illustrated in the sketch after the table below):

  • Reading – 87th percentile
  • Language – 81st percentile
  • Math – 80th percentile
  • Science – 82nd percentile
  • Social Studies – 80th percentile[726]
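The rankings above are percentile scores: a ranking at the 87th percentile, for instance, means the average homeschooled student scored as well as or better than 87% of the national norm group. A minimal sketch of how a percentile rank is computed against a reference distribution (the norm-group scores are invented for illustration):

```python
# Minimal sketch: percentile rank of a score within a norm group.
# The norm-group scores below are invented for illustration only.
def percentile_rank(score: float, norm_group: list[float]) -> float:
    """Share of the norm group scoring at or below the given score, in percent."""
    at_or_below = sum(1 for s in norm_group if s <= score)
    return 100.0 * at_or_below / len(norm_group)

norm_group = [52, 61, 64, 70, 73, 75, 78, 81, 85, 93]  # hypothetical national scores
print(percentile_rank(81, norm_group))  # -> 80.0 (the 80th percentile)
```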

* Regarding the findings above, the paper documents that “the above-average nature of these achievement test scores is also consistent” with nine other similar studies. Per the study’s author, Brian D. Ray (Ph.D. in science education):[727]

Comparisons between home-educated students and institutional school students nationwide should, however, be interpreted with thoughtfulness and care. … [This study] is not an experiment and readers should be careful about assigning causation to anything.
 
One could say … “This study simply shows that those parents choosing to make a commitment to home schooling are able to provide a very successful academic environment.” On the other hand, it may be that something about the typical nature and practice of home-based education causes higher academic achievement, on average, than does institutional state-run schooling….[728] [729]

* Per the Encyclopedia of Education Economics & Finance (2014):

More rigorous empirical work is needed regarding the “black box” of homeschooling before definitive conclusions are drawn.
 
At issue are several limitations for the study of homeschooler outcomes. First, there has been no empirical study thus far based on data obtained from a random sample of all homeschoolers. This means that the findings cannot be generalized from the study samples to the entire homeschooling population.[730]

Digital Learning

* Digital learning involves the use of computerized technologies to increase the effectiveness of education or reduce its costs.[731] [732]

* Forms of digital learning include (but are not limited to):

  • Online courses, which allow students to take courses that are not offered at their local public, private, or home schools. These courses also give students flexibility to pursue careers, independent study programs, athletics, and other endeavors.[733] [734] [735] [736]
  • Fully online schools, which “provide a student’s entire education online.” These schools offer accessibility to “hospitalized, homebound, pregnant, incarcerated, or other students in similar uncommon circumstances.”[737]
  • Adaptive learning software and platforms, which teach students through interactive courseware that analyzes each student’s learning style, academic needs, and intellectual abilities. The software then uses this information to deliver content designed to optimize each student’s learning potential (a simplified sketch of this kind of logic appears after this list). Per The SAGE Encyclopedia of Educational Technology:

Adaptive learning software and platforms, due to their ability to change the content and representations according to a student’s needs, resemble the situation when a personal instructor is available for each individual student.[738] [739]

  • Blended or hybrid learning, which combines traditional face-to-face teaching with digital technologies. Blended learning typically does not yield the cost savings of other digital learning approaches, because it does not reduce the need for school staff, school buildings, or student transportation.[740] [741] [742] [743]
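As referenced in the adaptive-learning bullet above, the sketch below illustrates, in highly simplified form, the kind of logic adaptive courseware can use: raising or lowering item difficulty in response to a student’s recent answers. All class names, thresholds, and scales are hypothetical and are not drawn from any particular product or from the cited encyclopedia.

```python
# Minimal sketch of adaptive item selection: raise or lower difficulty based on
# the student's recent answers. All names and thresholds are hypothetical.
class AdaptiveSession:
    def __init__(self, start_difficulty: int = 3):
        self.difficulty = start_difficulty   # 1 (easiest) .. 5 (hardest)
        self.recent = []                     # last few answers (True = correct)

    def record_answer(self, correct: bool) -> None:
        self.recent = (self.recent + [correct])[-3:]   # keep the last 3 answers
        if len(self.recent) == 3:
            if all(self.recent):                       # 3 correct in a row -> harder
                self.difficulty = min(5, self.difficulty + 1)
            elif not any(self.recent):                 # 3 wrong in a row -> easier
                self.difficulty = max(1, self.difficulty - 1)

    def next_item_difficulty(self) -> int:
        return self.difficulty

session = AdaptiveSession()
for answer in [True, True, True, False, False, False]:
    session.record_answer(answer)
print(session.next_item_difficulty())  # difficulty rose to 4, then fell back to 3
```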

* With regard to students in grades K–12:

  • In 2014, 30 states had fully online schools that operated throughout the state. These schools educated 316,000 children in the 2013-14 school year.[744]
  • In 2014, 26 states had fully online charter schools that primarily operated without physical buildings. These schools educated about 200,000 students in the 2013-14 school year.[745]
  • In 2014, 26 states had publicly funded online schools that allowed students to take supplemental courses. Students took 742,000 courses through these schools in the 2013-14 school year.[746]
  • In 2014, 20 states barred students from enrolling in public school online courses unless the courses were offered by their local school districts.[747]
  • In 2014, 11 states gave students the choice to take online courses by using funding that would otherwise be allocated to each child’s local school district.[748]
  • In 2014, five states required students to complete at least one online course as a requirement for earning a high school diploma.[749]

* With regard to college students:

  • In the 2011-12 school year, 32% of undergraduate students took at least one online class, and 8% took exclusively online classes.[750]
  • In the 2011-12 school year, 36% of graduate students took at least one online class, and 20% took exclusively online classes.[751]

* In 2013, the journal Teachers College Record published an analysis of 45 experimental (and quasi-experimental) studies that measured 50 effects of online and blended learning versus traditional face-to-face classrooms. These studies included K–12 students, college students, and people receiving job-related training. The authors found that:

  • “Among the 50 individual contrasts between online and face-to-face instruction, 11 were significantly positive, favoring the online or blended learning condition. Three significant negative effects favored traditional face-to-face instruction.”
  • In total, the studies showed that students who learned online fared about the same as students in traditional classrooms, and students in blended learning environments performed better than students in traditional classrooms. According to common (yet subjective) statistical conventions, the overall positive effect of blended learning was “small” to “medium” (see the effect-size sketch after this list).
  • “Studies using blended learning also tended to involve additional learning time, instructional resources, and course elements that encourage interactions among learners.” These variables and others may have “contributed to the particularly positive outcomes for blended learning.”
  • This analysis of studies does “not reflect the latest technology innovations” since 2009, because “the cycle time for study design, execution, analysis, and publication cannot keep up with the fast-changing world of Internet technology.”[752] [753]
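The “small” to “medium” characterization above refers to conventional effect-size benchmarks (commonly, a standardized mean difference of roughly 0.2 and 0.5, respectively). A minimal sketch of how such an effect size (Cohen’s d) is computed from two groups’ scores; the group statistics below are invented for illustration:

```python
# Minimal sketch: Cohen's d, the standardized mean difference between two groups.
# The group statistics below are invented for illustration only.
import math

def cohens_d(mean_treat, sd_treat, n_treat, mean_ctrl, sd_ctrl, n_ctrl):
    """Mean difference divided by the pooled standard deviation."""
    pooled_var = ((n_treat - 1) * sd_treat**2 + (n_ctrl - 1) * sd_ctrl**2) / (n_treat + n_ctrl - 2)
    return (mean_treat - mean_ctrl) / math.sqrt(pooled_var)

# e.g. a blended-learning group vs. a face-to-face group on the same exam
d = cohens_d(mean_treat=78.0, sd_treat=10.0, n_treat=120,
             mean_ctrl=74.5, sd_ctrl=10.0, n_ctrl=120)
print(f"d = {d:.2f}")  # ~0.35: between the conventional "small" (0.2) and "medium" (0.5)
```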

Footnotes

[1] The SAGE Encyclopedia of Educational Technology. Edited by J. Michael Spector. Sage Publications, 2015. Article: “Adaptive Learning Software and Platforms.” By Dr. Kinshuk. Pages 7-10.

Page 9: “Various cognitive abilities of students are crucial for learning. Examples of these abilities include working memory capacity, inductive reasoning ability, information processing speed, associative learning skills, metacognitive skills, observation ability, analysis ability, and abstraction ability.”

[2] Paper: “The Importance of Noncognitive Skills: Lessons from the GED Testing Program.” By James J. Heckman and Yona Rubinstein. American Economic Review, May 2001. Pages 145-149. <jenni.uchicago.edu>

Pages 145-146:

Studies by Samuel Bowles and Herbert Gintis (1976), Rick Edwards (1976), and Roger Klein et al. (1991) demonstrate that job stability and dependability are traits most valued by employers as ascertained by supervisor ratings and questions of employers although they present no direct evidence on wages and educational attainment. Perseverance, dependability, and consistency are the most important predictors of grades in school (Bowles and Gintis, 1976).

[3] Encyclopedia of Education Economics and Finance. Edited by Dominic J. Brewer and Lawrence O. Picus. Sage Publications, 2014.

Page 498:

Omitted variable bias (OVB) occurs when an important independent variable is excluded from an estimation model, such as a linear regression, and its exclusion causes the estimated effects of the included independent variables to be biased. Bias will occur when the excluded variable is correlated with one or more of the included variables. An example of this occurs when investigating the returns to education. This typically involves regressing the log of wages on the number of years of completed schooling as well as on other demographic characteristics such as an individual’s race and gender. One important variable determining wages, however, is a person’s ability. In many such regressions, a measure of ability is not included in the regression (or the measure included only imperfectly controls for ability). Since ability is also likely to be correlated with the amount of schooling an individual receives, the estimated return to years of completed schooling will likely suffer from OVB.
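A minimal simulation of the mechanism described in this footnote (all parameters are invented for illustration): when unobserved ability raises both schooling and wages, a regression of wages on schooling that omits ability overstates the return to schooling.

```python
# Minimal sketch of omitted variable bias (illustrative parameters only):
# ability raises both schooling and wages; omitting it inflates the
# estimated "return" to a year of schooling.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

ability   = rng.normal(0, 1, n)
schooling = 12 + 2 * ability + rng.normal(0, 1, n)          # ability -> more schooling
log_wage  = 1.0 + 0.08 * schooling + 0.15 * ability + rng.normal(0, 0.3, n)

# Regression omitting ability: log_wage ~ schooling
X_short = np.column_stack([np.ones(n), schooling])
b_short = np.linalg.lstsq(X_short, log_wage, rcond=None)[0]

# Regression including ability: log_wage ~ schooling + ability
X_long = np.column_stack([np.ones(n), schooling, ability])
b_long = np.linalg.lstsq(X_long, log_wage, rcond=None)[0]

print(f"return to schooling, ability omitted:  {b_short[1]:.3f}")  # ~0.14 (biased upward)
print(f"return to schooling, ability included: {b_long[1]:.3f}")   # ~0.08 (the true value)
```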

[4] Report: “Improving Health and Social Cohesion through Education.” Organization for Economic Cooperation and Development, Center for Educational Research and Innovation, 2010. <www.oecd.org>

Pages 31-33:

(a) Reverse causality

One source of endogeneity stems from the possibility that there is reverse causality, whereby poor health or low CSE reduces educational attainment. Poor health in youth might interfere with educational attainment by interfering with student learning because of increased absences and inability to concentrate. It may also lead to poor adult health, thus creating a correlation between education and adult health. Similarly, low CSE such as lack of trust and political interest might also reduce educational attainment. For example, a family with low CSE might reduce their involvement with schools, which might lead to poorer student outcomes.7

The bias due to reverse causality can be re-cast as an omitted variable problem after considering timing issues. Since health and CSE tend to persist over time, past health or CSE can be an important determinant of current health or CSE. Thus, past health or CSE is an omitted variable in equation (1) which is captured by the error term. The extent to which omitting past health or CSE will lead to an omitted variable bias depends on the extent to which past health or CSE is also correlated with the included variable Education. Because the current stock of education depends on past decisions about investments in education, reverse causality generates a correlation between past health or CSE and the individual’s current stock of education.8 If the estimated coefficient picks up the effect of past health or CSE … will be biased towards overestimating the causal effect of education.

(b) Hidden third variables

The second source of endogeneity comes from the possibility that there might be one or more hard-to-observe hidden third variables which are the true causes of both educational attainment and health and CSE.9 In the context of the education-earnings link, the most commonly mentioned hidden third variable is ability.10 The long-standing concern in this line of research has been that people with greater cognitive ability are more likely to invest in more education, but even without more education their higher cognitive ability would lead to higher earnings (Card, 2001). More recently, non-cognitive abilities such as the abilities to think ahead, to persist in tasks, or to adapt to their environments have been suggested as important determinants of both education and earnings outcomes (Heckman and Rubinstein, 2001).

In the context of the education-health link, Fuchs (1993) describes time preference and self-efficacy as his favorite candidates for hidden third variables. People with a low rate of time preference are more willing to forego current utility and invest more in both education and health capital that pays off in the future (Farrell and Fuchs, 1982, Fuchs, 1982). A classic example is the Stanford Marshmallow Experiment in which 4 year-olds were given the choice between eating the marshmallow now or waiting for the experimenter’s return and getting a second marshmallow. When these children were tested again at age 18, Shoda et al. (1990) found a strong correlation between delayed gratification at age 4 and mathematical and English competence. Similarly, people with greater self-efficacy, i.e. those who believe in their ability to exercise control over outcomes, will be more likely to invest in schooling and health. Most studies of the schooling-health link use data sets that do not contain direct or proxy measures of time preference and self-efficacy. Consequently, these variables are typically omitted when estimating equation (1). The resulting omitted variable bias again implies that … will be biased towards overestimating the causal effect of education on health.

In the context of the education-CSE link, Milligan et al. (2004) suggest that the same parents who encourage their children to participate in civic activities might also instill in their children a stronger taste for education.11 It also seems reasonable to suggest time preference and self-efficacy as candidates for hidden third variables behind the education-CSE link. As suggested by the term “social capital”, education capital, health capital and CSE share some common features. In particular, a belief in self-efficacy is a potentially important determinant of civic participation and other aspects of investments in CSE. As in the education-health link, this type of omitted variable bias implies that … will be biased towards overestimating the causal effect of education on CSE.

A few recent studies have explored the issue of biases due to omitting measures of cognitive or non-cognitive skills in the context of the education-health link. Sander (1998) suggests that some of the negative correlation between attending college and smoking in the US can be attributed to differences in cognitive ability. Auld and Sidhu (2005) using the US Armed Forces Qualification Test (AFQT) scores suggest that cognitive ability accounts for roughly one-quarter of the association between education and self-reported health limitations. Kenkel et al. (2006) also use the AFQT score as a measure of cognitive skills and in addition include the Rotter index of the locus of control as a proxy for non-cognitive skills. They find that cognitive ability has strong associations with smoking, but weaker associations with being overweight. Their results for the Rotter index of locus of control12 suggest that men who believe that what happens to them is outside their control are more likely to currently smoke and are less likely to be former smokers. Locus of control is more weakly associated with women’s smoking and is not associated with the probability of being overweight or obese for either men or women. Hence, the empirical evidence from the United States suggests that cognitive and non-cognitive ability might be important omitted variables in many previous studies of the education-health link.

Page 36: “Although studies identifying the causal effect of education on health and CSE should strive to control for hidden third variables such as time preference, in most cases data limitations will severely limit the usefulness of this strategy.”

[5] Book: Higher Education: Handbook of Theory and Research (Volume 28). Edited by Michael B. Paulsen. Springer, 2013. Chapter 6: “Instrumental Variables: Conceptual Issues and an Application Considering High School Course Taking.” By Rob M. Bielby and others.

Page 273:

Some student characteristics may be difficult or impossible to obtain information about in observational datasets, but this does not change the fact that they are confounding factors (Cellini, 2008). Examples of potential unobservable factors in course taking effects research include a student’s enjoyment of the learning process and a student’s desire to undertake and persevere through challenges. It is likely that these unobservable factors contribute to student selection into high school courses and a student’s subsequent choice to attain a bachelor’s degree.

[6] Paper: “What Roles Do Parent Involvement, Family Background, and Culture Play in Student Motivation?” By Alexandra Usher and Nancy Kober. Center on Education Policy, 2012. <eric.ed.gov>

Page 1:

Research has long documented a strong relationship between family background factors, such as income and parents’ educational levels, and student achievement. Studies have also shown that parents can play an important role in supporting their children’s academic achievement. But to what extent do family background and parent involvement affect student motivation, a critical underpinning of academic achievement and success in school?

This paper examines findings from research about the impact of various family background and cultural factors on student motivation, as well as the role of parental beliefs, attitudes, and actions in fostering children’s motivation. The paper does not attempt to be a comprehensive review of the broad literature on family background and achievement, but rather is a sampling of some current findings from the field that appear to impinge on motivation.

[7] Book: Knowing What Students Know: The Science and Design of Educational Assessment. Edited by James W. Pellegrino, Naomi Chudowsky, and Robert Glaser. National Academies Press, 2001. <www.nap.edu>

Page 39: “[A] teacher whose students have higher test scores is not necessarily better than one whose students have lower scores. The quality of inputs—such as the entry characteristics of students or educational resources available—must also be considered.”

Page 40:

As with evaluating teachers, care must be taken not to extend the results of assessments at a particular school to reach conclusions not supported by the evidence. For example, a school whose students have higher test scores is not necessarily better than one whose students have lower test scores. As in judging teacher performance, the quality of inputs—such as the entry characteristics of students or educational resources available—must also be considered.

[8] Book: Introductory Econometrics: Using Monte Carlo Simulation with Microsoft Excel. By Humberto Barreto and Frank M. Howland. Cambridge University Press, 2006.

Page 491:

Omitted variable bias is a crucial topic because almost every study in econometrics is an observational study as opposed to a controlled experiment. Very often, economists would like to be able to interpret the comparisons they make as if they were the outcomes of controlled experiments. In a properly conducted controlled experiment, the only systematic difference between groups results from the treatment under investigation; all other variation stems from chance. In an observational study, because the participants self-select into groups, it is always possible that varying average outcomes between groups result from systematic difference between groups other than the treatment. We can attempt to control for these systematic differences by explicitly incorporating variables in a regression. Unfortunately, if not all of those differences have been controlled for in the analysis, we are vulnerable to the devastating effects of omitted variable bias.

[9] Book: Multiple Regression: A Primer. By Paul D. Allison. Pine Forge Press, 1998. Chapter 1: “What Is Multiple Regression?” <us.sagepub.com>

Page 20:

Multiple regression shares an additional problem with all methods of statistical control, a problem that is the major focus of those who claim that multiple regression will never be a good substitute for the randomized experiment. To statistically control for a variable, you have to be able to measure that variable so that you can explicitly build it into the data analysis, either by putting it in the regression equation or by using it to form homogeneous subgroups. Unfortunately, there’s no way that we can measure all the variables that might conceivably affect the dependent variable. No matter how many variables we include in a regression equation, someone can always come along and say, “Yes, but you neglected to control for variable X and I feel certain that your results would have been different if you had done so.”

[10] Book: Theory-Based Data Analysis for the Social Sciences (Second edition). By Carol S. Aneshensel. SAGE Publications, 2013.

Page 90:

The numerous variables that are omitted from any model are routinely assumed to be uncorrelated with the error term, a requirement for obtaining unbiased parameter estimates from regression models. However, the possibility that unmeasured variables are correlated with variables that are in the model obviously cannot be eliminated on empirical grounds. Thus, omitted variable bias cannot be ruled out entirely as a counterargument for the empirical association between the focal independent and dependent variables in observational studies.

[11] Book: Applied Statistics for Economists. By Margaret Lewis. Routledge, 2012.

Page 413: “In economics, our primary concern is to identify and then include all relevant independent variables as indicated by economic theory.9 Omitting such variables will cause the regression model to be underspecified, and the partial regression coefficients that are affected by the omitted variable(s) will not equal the true population parameters.”

[12] Encyclopedia of Education Economics and Finance. Edited by Dominic J. Brewer and Lawrence O. Picus. Sage Publications, 2014.

Page 498:

Omitted variable bias (OVB) occurs when an important independent variable is excluded from an estimation model, such as a linear regression, and its exclusion causes the estimated effects of the included independent variables to be biased. Bias will occur when the excluded variable is correlated with one or more of the included variables. An example of this occurs when investigating the returns to education. This typically involves regressing the log of wages on the number of years of completed schooling as well as on other demographic characteristics such as an individual’s race and gender. One important variable determining wages, however, is a person’s ability. In many such regressions, a measure of ability is not included in the regression (or the measure included only imperfectly controls for ability). Since ability is also likely to be correlated with the amount of schooling an individual receives, the estimated return to years of completed schooling will likely suffer from OVB.

[13] Book: Higher Education: Handbook of Theory and Research (Volume 28). Edited by Michael B. Paulsen. Springer, 2013. Chapter 6: “Instrumental Variables: Conceptual Issues and an Application Considering High School Course Taking.” By Rob M. Bielby and others.

Page 273:

An additional issue with the aforementioned studies is that none employ strategies to eliminate the influence of unobservable factors on course taking and attainment. Some student characteristics may be difficult or impossible to obtain information about in observational datasets, but this does not change the fact that they are confounding factors (Cellini, 2008). Examples of potential unobservable factors in course taking effects research include a student’s enjoyment of the learning process and a student’s desire to undertake and persevere through challenges. It is likely that these unobservable factors contribute to student selection into high school courses and a student’s subsequent choice to attain a bachelor’s degree. However, none of the studies we examined that employ a standard regression approach accounted for a student’s intrinsic love of learning or ability to endure through difficulties; the failure to account for these unobserved factors may bias the estimates that result from these studies.

[14] Book: Multiple Regression: A Primer. By Paul D. Allison. Pine Forge Press, 1998.

Chapter 1: “What Is Multiple Regression?” <us.sagepub.com>

Page 1: “Multiple regression is a statistical method for studying the relationship between a single dependent variable and one or more independent variables. It is unquestionably the most widely used statistical technique in the social sciences. It is also widely used in the biological and physical sciences.”

Chapter 3: “What Can Go Wrong With Multiple Regression?” <us.sagepub.com>

Page 49:

Any tool as widely used as multiple regression is bound to be frequently misused. Nowadays, statistical packages are so user-friendly that anyone can perform a multiple regression with a few mouse clicks. As a result, many researchers apply multiple regression to their data with little understanding of the underlying assumptions or the possible pitfalls. Although the review process for scientific journals is supposed to weed out papers with incorrect or misleading statistical methods, it often happens that the referees themselves have insufficient statistical expertise or are simply too rushed to catch the more subtle errors. The upshot is that you need to cast a critical eye on the results of any multiple regression, especially those you run yourself.

Fortunately, the questions that you need to ask are neither extremely technical nor large in number. They do require careful thought, however, which explains why even experts occasionally make mistakes or overlook the obvious. Virtually all the questions have to do with situations where multiple regression is used to make causal inferences.

NOTE: Pages 49-65 detail eight possible pitfalls of regression analyses.

Page 65: “The preceding eight problems are the ones I believe most often lead to serious errors in judging the results of a multiple regression. By no means do they exhaust the possible pitfalls that may arise. Before concluding this chapter, I’ll briefly mention a few others.”

Page 67: “Non-experimental data rarely tell you anything about the direction of a causal relationship. You must decide the direction based on your prior knowledge of the phenomenon you’re studying.”

[15] Book: Regression With Social Data: Modeling Continuous and Limited Response Variables. By Alfred DeMaris. John Wiley & Sons, 2004.

Page 9:

Regression modeling of nonexperimental data for the purpose of making causal inferences is ubiquitous in the social sciences. Sample regression coefficients are typically thought of as estimates of the causal impacts of explanatory variables on the outcome. Even though researchers may not acknowledge this explicitly, their use of such language as impact or effect to describe a coefficient value often suggest a causal interpretation. This practice is fraught with controversy….

Page 12:

Friedman … is especially critical of drawing causal inferences from observational data, since all that can be “discovered,” regardless of the statistical candlepower used, is association. Causation has to be assumed into the structure from the beginning. Or, as Friedman … says: “If you want to pull a causal rabbit out of the hat, you have to put the rabbit into the hat.” In my view, this point is well taken; but it does not preclude using regression for causal inference. What it means, instead, is that prior knowledge of the causal status of one’s regressors is a prerequisite for endowing regression coefficients with a causal interpretation, as acknowledged by Pearl 1998.

Page 13: “In sum, causal modeling via regression, using nonexperimental data, can be a useful enterprise provided we bear in mind that several strong assumptions are required to sustain it. First, regardless of the sophistication of our methods, statistical techniques only allow us to examine associations among variables.”

[16] Working paper: “Econometric Methods for Causal Evaluation of Education Policies and Practices: A Non-Technical Guide.” By Martin Schlotter, Guido Schwerdt, and Ludger Woessmann. CESifo Group (Center for Economic Studies, the Ifo Institute, and the Munich Society for the Promotion of Economic Research), December 2009. <poseidon01.ssrn.com>

Page 2:

Using standard statistical methods, it is reasonably straightforward to establish whether there is an association between two things—e.g., between the introduction of a certain education reform (the “treatment”) and the learning outcome of students (the “outcome”). However, whether such a statistical correlation can be interpreted as the causal effect of the reform on outcomes is another matter. The problem is that there may well be other reasons why this association comes about.

Page 4:

The “standard” approach to deal with differences between the treatment and the control group is to try to observe the ways in which the two groups differ and take out the difference in their outcomes that can be attributed to these other observed differences. This is the approach of multivariate models that estimate the effects of multiple variables on the outcome at the same time, such as the classical “ordinary least squares” (OLS) or multilevel modeling (or hierarchical linear models, HLM) techniques. They allow estimating the association between treatment and outcome “conditional” on the effects of the other observed factors.

Page 27:

But obtaining convincing evidence on the effects on specific education policies and practices is not an easy task. As a precondition, relevant data on possible outcomes has to be gathered. What is more, showing a mere correlation between a specific policy or practice and potential outcomes is no proof that the policy or practice caused the outcome. For policy purposes, mere correlations are irrelevant, and only causation is important. What policy-makers care about is what would really happen if they implemented a specific policy or practice—would it really change any outcome that society cares about? In order to implement evidence-based policy, policy-makers require answers to such causal questions.

[17] Book: Multiple Regression: A Primer. By Paul D. Allison. Pine Forge Press, 1998. Chapter 1: “What Is Multiple Regression?” <us.sagepub.com>

Page 20:

Multiple regression shares an additional problem with all methods of statistical control, a problem that is the major focus of those who claim that multiple regression will never be a good substitute for the randomized experiment. To statistically control for a variable, you have to be able to measure that variable so that you can explicitly build it into the data analysis, either by putting it in the regression equation or by using it to form homogeneous subgroups. Unfortunately, there’s no way that we can measure all the variables that might conceivably affect the dependent variable. No matter how many variables we include in a regression equation, someone can always come along and say, “Yes, but you neglected to control for variable X and I feel certain that your results would have been different if you had done so.”

That’s not the case with randomization in an experimental setting. Randomization controls for all characteristics of the experimental subjects, regardless of whether those characteristics can be measured. Thus, with randomization there’s no need to worry about whether those in the treatment group are smarter, more popular, more achievement oriented, or more alienated than those in the control group (assuming, of course, that there are enough subjects in the experiment to allow randomization to do its job effectively).

[18] Book: The Education Gap: Vouchers and Urban Schools (Revised Edition). By William G. Howell and Paul E. Peterson with Patrick J. Wolf and David E. Campbell. Brookings Institution Press, 2006 (first published in 2002). <www.brookings.edu>

Page 39:

In a perfectly controlled experiment in the natural sciences, the researcher is able to control for all factors while manipulating the variable of interest. …

Experiments with humans are much more difficult to manage. Researchers cannot give out pills or placebos and then ask subjects not to change any other aspect of their lives. To conduct an experiment in the social sciences that nonetheless approximates the natural-science ideal, scientists have come up with the idea of random assignment—drawing names out of a hat (or, today, by computer) and putting subjects into a treatment or control group. When individuals are assigned randomly to one of two categories, one can assume that the two groups do not differ from each another systematically, except in the one respect under investigation.

Page 40:

It is the very simplicity of random assignment that makes such studies so eloquent and their findings so compelling. Simply by comparing what happens to members of the treatment and control groups, analysts can assess whether an intervention makes any difference, positive or negative. Of course, complications inevitably arise. People in the treatment group refuse treatment. People in the control group discover alternative ways of getting the treatment. People fail to report back, or move away, or provide inaccurate information. Still, statisticians have found a variety of ways to correct for such eventualities; such adjustments are discussed in greater detail below.

[19] Working paper: “Econometric Methods for Causal Evaluation of Education Policies and Practices: A Non-Technical Guide.” By Martin Schlotter, Guido Schwerdt, and Ludger Woessmann. CESifo Group (Center for Economic Studies, the Ifo Institute, and the Munich Society for the Promotion of Economic Research), December 2009. <poseidon01.ssrn.com>

Page 29:

In medical research, experimental evaluation techniques are a well-accepted standard device to learn what works and what does not. No-one would treat large numbers of people with a certain medication unless it has been shown to work. Experimental and quasi-experimental studies are the best way to reach such an assessment. It is hoped that a similar comprehension is reached in education, so that future education policies and practices will be able to better serve the students.

[20] Paper: “Private School Vouchers and Student Achievement: An Evaluation of the Milwaukee Parental Choice Program.” By Cecilia Elena Rouse. Quarterly Journal of Economics, May, 1998. Pages 553-602. <faculty.smu.edu>

Page 554:

Ideally, the issue of the relative effectiveness of private versus public schooling could be addressed by a social experiment in which children in a well-defined universe were randomly assigned to a private school (the “treatment group”), while others were assigned to attend public schools (the “control group”). After some period of time, one could compare outcomes, such as test scores, high school graduation rates, or labor market success between the treatment and control groups. Since, on average, the only differences between the groups would be their initial assignment—which was randomly determined—any differences in outcomes could be attributed to the type of school attended.
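A minimal sketch of the design described above (all values are invented for illustration): under random assignment, a simple difference in mean outcomes between the treatment and control groups recovers the effect of the intervention, because unobserved baseline differences are balanced across the groups on average.

```python
# Minimal sketch of a randomized evaluation (illustrative values only):
# randomly assign students, then compare mean outcomes across groups.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

baseline_skill = rng.normal(50, 10, n)       # unobserved differences among students
treated = rng.random(n) < 0.5                # random assignment (coin flip)

true_effect = 3.0                            # hypothetical test-score gain from the program
score = baseline_skill + true_effect * treated + rng.normal(0, 5, n)

estimate = score[treated].mean() - score[~treated].mean()
print(f"estimated effect: {estimate:.2f}")   # close to 3.0: randomization balances
                                             # baseline_skill across the two groups
```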

[21] Paper: “Multiple Inference and Gender Differences in the Effects of Early Intervention: A Reevaluation of the Abecedarian, Perry Preschool, and Early Training Projects.” By Michael L. Anderson. Journal of the American Statistical Association, December 2008. Pages 1481-1495. <are.berkeley.edu>

Page 1483: “The random assignment process makes estimation of causal effects straightforward.”

Page 1484: “Note that no assumptions regarding the distributions or independence of potential outcomes are needed. This is because the randomized design itself is the basis for inference (Fisher 1935), and preexisting clusters cannot be positively correlated with the treatment assignments in any systematic way.”

[22] Webpage: “Panel Data.” Princeton University Library, Data and Statistical Services, 2007. <dss.princeton.edu>

“With panel data, it is possible to control for some types of omitted variables even without observing them, by observing changes in the dependent variable over time. This controls for omitted variables that differ between cases but are constant over time. It is also possible to use panel data to control for omitted variables that vary over time but are constant between cases.”
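A minimal sketch of the idea in this footnote (all parameters are invented for illustration): observing each student twice and regressing the change in outcomes on the change in the explanatory variable removes any omitted factor, such as fixed ability, that is constant over time for each student.

```python
# Minimal sketch of using panel data to remove a time-constant omitted variable.
# Each student is observed twice; first-differencing wipes out fixed "ability".
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

ability = rng.normal(0, 1, n)                       # unobserved, constant over time
hours_1 = 2 + 1.0 * ability + rng.normal(0, 1, n)   # study hours, year 1 (depend on ability)
hours_2 = 3 + 1.0 * ability + rng.normal(0, 1, n)   # study hours, year 2

true_effect = 0.5                                   # hypothetical score gain per study hour
score_1 = true_effect * hours_1 + 2.0 * ability + rng.normal(0, 1, n)
score_2 = true_effect * hours_2 + 2.0 * ability + rng.normal(0, 1, n)

# Cross-sectional regression (year 1 only) suffers from omitted variable bias:
X = np.column_stack([np.ones(n), hours_1])
b_cross = np.linalg.lstsq(X, score_1, rcond=None)[0][1]

# First-difference (fixed effects with two periods): ability cancels out.
d_hours = hours_2 - hours_1
d_score = score_2 - score_1
X = np.column_stack([np.ones(n), d_hours])
b_fd = np.linalg.lstsq(X, d_score, rcond=None)[0][1]

print(f"cross-section estimate:    {b_cross:.2f}")  # biased upward (~1.5)
print(f"first-difference estimate: {b_fd:.2f}")     # ~0.5, the true effect
```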

[23] Paper: “Another Look at the New York City School Voucher Experiment.” By Alan B. Krueger and Pei Zhu. American Behavioral Scientist, January 2004. Pages 658-698. <abs.sagepub.com>

Page 660: “Because of random assignment, however, estimates are unbiased even without conditioning on baseline information….”

Pages 693-694:

Researchers are often unsure as to whether they should or should not control for baseline characteristics when a treatment is randomly assigned. We would advise that key results be presented both ways, with and without baseline characteristics (and with and without varying samples). …

Controlling for baseline characteristics can be justified if their inclusion increases the precision of the key estimates. As a practical matter, however, controlling for baseline characteristics tends to reduce the sample size, which could well offset the decline in residual variance and create a nonrepresentative sample.

Simplicity and transparency are valuable in their own right and can help prevent mistakes. These benefits may be well worth the loss of some precision. A complicated design increases the likelihood of error down the road, for example, in the derivation of weights or in the delineation of strata within which the treatment is randomly assigned. An underappreciated virtue of presenting results without baseline covariates is that the results are transparent and simple, and therefore less prone to human error.

[24] Book: Regression With Social Data: Modeling Continuous and Limited Response Variables. By Alfred DeMaris. John Wiley & Sons, 2004.

Page 10:

Nonetheless, according to the potential response model, the average causal effect can be estimated in an unbiased fashion if there is random assignment to the cost. Unfortunately, this pretty much rules out making causal inferences from nonexperimental data. … Still, hard-core adherence to the potential response framework would deny the causal status of most of the interesting variables in the social sciences because they are not capable of being assigned randomly. Holland and Rubin, for example have made up a motto that expresses this quite succinctly: “No causation without manipulation” (Holland, 1986, p. 959). In other words, only “treatments” that can be assigned randomly to any case at will are considered candidates for exhibiting causal effects. … I agree with others … who take exception to this restrictive conception of causality, despite the intuitive appeal of counterfactual reasoning.

Page 13:

Sobel’s (1988, p. 346) advice is in the same vein: “[s]ociologists might follow the example of epidemiologists. Here, when an association is found in an observational study that might plausibly suggest causation, the findings are treated as preliminary and tentative. The next step, when possible, is to conduct the randomized study that will more definitively answer the causal question of interest.”

[25] Paper: “The Effect of Merger on Deposit Money Banks Performance in the Nigerian Banking Industry.” By Ochei Ailemen Ikpefan and Bayo Liafeez Oyero Kazeem. European Journal of Accounting Auditing and Finance Research, March 2013. Pages 32-49. <www.eajournals.org>

Page 41:

Despite their substantial advantages, panel data pose several estimation and inference problems. Since such data involve both cross-section and time dimensions, problems that plague cross-sectional data (e.g., heteroscedasticity) and time series data (e.g., autocorrelation) need to be addressed (Gujarati 2004). There are several estimation techniques that have been developed to address these problems, though the most prominent of them are the Fixed Effects Model (FEM) and the Random Effects Model (REM). Fixed effects regression is the model to use when you want to control for omitted variables that differ between cases but are constant over time. This model allows for each cross sectional unit to differ in the model in recognition of the fact that each cross sectional unit may have peculiar characteristics of their own. It lets you use the changes in the variables over time to estimate the effects of the independent variables on your dependent variable, and is the main technique used for analysis of panel data. The random effects model will be suitable if you have reason to believe that some omitted variables may be constant over time but vary between cases, and others may be fixed between cases but vary over time as the random effects model can include both types.

[26] Paper: “A Modified General Location Model for Noncompliance With Missing Data: Revisiting the New York City School Choice Scholarship Program Using Principal Stratification.” By Hui Jin and others. Journal of Educational and Behavioral Statistics, April 2010. Pages 154-173. <jeb.sagepub.com>

Pages 154-155: “Although quite a few school choice voucher programs have been conducted across the United States, the New York City School Choice Scholarship Program is arguably the largest and best-implemented private school choice randomized experiment to date. However, even this program suffers from two common complications in social science experiments: missing data and noncompliance.”

[27] Calculated with data from:

a) Dataset: “Table 3.16. Government Current Expenditures by Function.” U.S. Bureau of Economic Analysis. Last revised September 17, 2014. <www.bea.gov>

b) Dataset: “Table 3.1. Government Current Receipts and Expenditures.” U.S. Bureau of Economic Analysis. Last revised February 27, 2015. <www.bea.gov>

c) Dataset: “Table 1.1.5. Gross Domestic Product.” U.S. Bureau of Economic Analysis. Last revised January 30, 2015. <www.bea.gov>

d) Dataset: “HH-1. Households by Type: 1940 to Present.” U.S. Census Bureau, Current Population Survey, January 2015. <www.census.gov>

NOTE: An Excel file containing the data and calculations is available upon request.

[28] Webpage: “FAQ: BEA seems to have several different measures of government spending. What are they for and what do they measure?” U.S. Bureau of Economic Analysis (BEA), May 28, 2010. <www.bea.gov>

“Current expenditures[†] measures all spending by government on current-period activities, and consists not only of government consumption expenditures,[‡] but also current transfer payments,[§] interest payments,[#] and subsidies[£] (and removes wage accruals less disbursements[Φ ψ]).”

NOTES:

† Per correspondence with BEA, data for education total expenditures is unavailable, because “BEA does not produce an estimate of government total expenditures by function as defined by the national income and product accounts (NIPAs).” [Email from BEA to Just Facts, March 18, 2015.] Searches for this data through other federal agencies also proved futile.

‡ “Consumption expenditures include what government spends on its work force and for goods and services, such as fuel for military jets and rent for government buildings and other structures.” [Webpage: “FAQ: BEA seems to have several different measures of government spending. What are they for and what do they measure?” BEA, May 28, 2010. <www.bea.gov>]

§ “Current transfer payments. These consist of social benefits and other current transfer payments to the rest of the world. Social benefits are payments from social insurance funds, such as social security and Medicare, and payments providing other income support, such as Medicaid and food stamp benefits. Other current transfers to the rest of the world consists of federal aid to foreign countries and payments to international organizations such as the United Nations. Federal ‘other current transfer payments’ also includes grants-in-aid to state and local governments.” [Report: “A Primer on BEA’s Government Accounts.” By Bruce E. Baker and Pamela A. Kelly. BEA, March 2008. <www.bea.gov>. Page 34.]

# “Interest payments. These represent the cost of borrowing by governments to finance their capital and operational costs.” [Report: “A Primer on BEA’s Government Accounts.” By Bruce E. Baker and Pamela A. Kelly. BEA, March 2008. <www.bea.gov>. Page 34.]

£ “Subsidies. These are payments to businesses, including homeowners and government enterprises at another level of government.” [Report: “A Primer on BEA’s Government Accounts.” By Bruce E. Baker and Pamela A. Kelly. BEA, March 2008. <www.bea.gov>. Page 34.]

Φ “Wage accruals less disbursements is no longer an adjustment that is needed in the accounts as BEA’s income estimates for wages were moved to an accrual basis during the 2013 comprehensive revision. This adjustment was related to the timing of wage payments due to things like pay days. Pensions benefits are captured in the national accounts on an accrual basis as the net present value of the estimated future benefits. So the estimates of consumption expenditures would include the accrual based cost of pensions.” [Email from BEA to Just Facts, March 18, 2015.]

ψ “BEA will change its recording of the transactions of defined benefit pension plans from a cash accounting basis to an accrual accounting basis as part of the comprehensive revision. … Accrual accounting is the preferred method for compiling national accounts because it matches incomes earned from production with the corresponding productive activity and records both in the same period.31 The recording of defined benefit pension plan transactions on an accrual basis will better align pension-related compensation with the timing of when employees earned the benefit entitlements and will avoid the volatility that arises if sporadic cash payments made by employers into defined benefit pension plans are used to measure compensation.32 In cases when defined benefit pension plans are underfunded or overfunded, the employers’ pension plan expenses also will be measured more accurately under the accrual approach.” [Report: “Preview of the 2013 Comprehensive Revision of the National Income and Product Accounts: Changes in Definitions and Presentations: Changes in Definitions and Presentations.” By Shelly Smith and others. BEA, March 2013. <www.bea.gov>. Pages 21-22.]

[29] See the footnote above, which documents that the U.S. Bureau of Economic Analysis does not publish total expenditures for education or any other specific function of government. Per the U.S. Bureau of Economic Analysis, land purchases are included in total expenditures but not in current expenditures:

“Total government expenditures: In addition to the transactions that are included in current expenditures, this measure includes … net purchases of nonproduced assets (for example, land).” [Webpage: “FAQ: BEA seems to have several different measures of government spending. What are they for and what do they measure?” U.S. Bureau of Economic Analysis, May 28, 2010. <www.bea.gov>]

[30] See the second footnote above, which documents that the U.S. Bureau of Economic Analysis does not publish data for education total expenditures. Per the U.S. Bureau of Economic Analysis, purchases of durable items such as buildings and computers are included in total expenditures but not in current expenditures:

Gross investment includes what government spends on structures, equipment, and software, such as new highways, schools, and computers. …

Total government expenditures: In addition to the transactions that are included in current expenditures, this measure includes gross investment….†

Note that although current expenditures do not include gross investment, they do include “consumption of fixed capital,” which measures the depreciation of durable items as they are used.‡ § This accounts for most (but not all) of the costs of these items. From 1929 through 2014, consumption of fixed capital was roughly 70% of gross government investment.#

NOTES:

† Webpage: “FAQ: BEA seems to have several different measures of government spending. What are they for and what do they measure?” U.S. Bureau of Economic Analysis, May 28, 2010. <www.bea.gov>

‡ Webpage: “FAQ: BEA seems to have several different measures of government spending. What are they for and what do they measure?” U.S. Bureau of Economic Analysis (BEA), May 28, 2010. <www.bea.gov>

“Current expenditures … consists not only of government consumption expenditures….”

§ Report: “A Primer on BEA’s Government Accounts.” By Bruce E. Baker and Pamela A. Kelly. U.S. Bureau of Economic Analysis, March 2008. <www.bea.gov>

Page 33: “Consumption expenditures [include] … consumption of fixed capital….”

Page 38: “In estimating the national income and product accounts, it is necessary to compute consumption of fixed capital (CFC) or depreciation. … In the government accounts, CFC is used as a proxy for the services derived from government capital investment, both past and present.”

# Calculated with data from “Table 3.1. Government Current Receipts and Expenditures.” U.S. Bureau of Economic Analysis. Last revised February 27, 2015. <www.bea.gov>. NOTE: An Excel file containing the data and calculations is available upon request.

[31] The next six footnotes document that:

  • Substantial amounts of healthcare benefits promised to government employees are unfunded.
  • Accrual accounting (as opposed to cash accounting) of these benefits would measure these unfunded liabilities.
  • The U.S. Bureau of Economic Analysis (the source of the education spending figures cited above) uses cash accounting (as opposed to accrual accounting) to measure government spending on retiree healthcare benefits.

[32] Report: “State and Local Government Retiree Health Benefits: Liabilities Are Largely Unfunded, but Some Governments Are Taking Action.” U.S. Government Accountability Office, November 2009. <www.gao.gov>

Accounting standards require governments to account for the costs of other post-employment benefits (OPEB)—the largest of which is typically retiree health benefits—when an employee earns the benefit. As such, governments are reporting their OPEB liabilities—the amount of the obligation to employees who have earned OPEB. As state and local governments have historically not funded retiree health benefits when the benefits are earned, much of their OPEB liability may be unfunded. Amid fiscal pressures facing governments, this has raised concerns about the actions the governments can take to address their OPEB liabilities. …

The total unfunded OPEB liability reported in state and the largest local governments’ CAFRs exceeds $530 billion. However, as variations between studies’ totals show, totaling unfunded OPEB liabilities across governments is challenging for a number of reasons, including the way that governments disclose such data. The unfunded OPEB liabilities for states and local governments GAO reviewed varied widely in size. Most of these governments do not have any assets set aside to fund them. The total for unfunded OPEB liabilities is higher than $530 billion because GAO reviewed OPEB data in CAFRs for the 50 states and 39 large local governments but not data for all local governments or additional data reported in separate financial reports. Also, the CAFRs we reviewed report data that predate the market downturn. Finally, OPEB valuations are based on assumptions about the health care cost inflation rate and discount rates for assets, which also affect the size of the unfunded liability.

Some state and local governments have taken actions to address liabilities associated with retiree health benefits by setting aside assets to prefund the liabilities before employees retire and reducing these liabilities by changing the structure of retiree health benefits. Approximately 35 percent of the 89 governments for which GAO reviewed CAFRs reported having set aside some assets for OPEB liabilities, but the percentage of the OPEB liability funded varied.

[33] Article: “Defined Benefit Pensions and Household Income and Wealth.” By Marshall B. Reinsdorf and David G. Lenze. Survey of Current Business (published by the U.S. Bureau of Economic Analysis), August 2009. Pages 50-62. <www.bea.gov>

Pages 50-51:

U.S. households usually participate in two kinds of retirement income programs: social security, and a plan sponsored by their employer. The employer plan may be organized as either a defined contribution plan, such as a 401(k) plan, or a defined benefit plan. Defined contribution plans provide resources during retirement based on the amount of money that has been accumulated in an account, while defined benefit plans determine the level of benefits by a formula that typically depends on length of service and average or final pay. …

… A defined benefit plan has an actuarial liability for future benefits equal to the expected present value of the benefits to which the plan participants are entitled under the benefit formula. The value of participants’ benefit entitlement often does not coincide with the value of the assets that the plan has on hand; indeed, a plan that has a pay-as-you-go funding scheme might have only enough assets to ensure that it can make the current period’s benefit payments.2

A complete measure of the wealth of defined benefit plan participants is the expected present value of the benefits to which they are entitled, not the assets of the plan. This follows from the fact that if the assets of a defined benefit plan are insufficient to pay promised benefits, the plan sponsor must cover the shortfall. …

… [U]nder the accrual approach, the measure of compensation income for the participants in the plan is no longer the employer’s actual contributions to the plan. Instead, it is the present value of the benefits to which employees become entitled as a result of their service to the employer.

Measuring household income from defined benefit plans by actual contributions from employers plus actual investment income on plan assets can be considered a cash accounting approach to measuring these plans’ transactions…. We use the term “accrual accounting” to mean any approach that adopts the principle that a plan’s benefit obligations ought to be recorded as they are incurred.

2. Federal law requires that private pension plans operate as funded plans, not as pay-as-you-go plans.
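
NOTE: To illustrate the “expected present value” concept described above, the following is a minimal illustrative sketch in Python. All figures in it (the projected benefit payments, the plan assets, and the discount rates) are hypothetical placeholders rather than data from the sources cited; the sketch only shows how an unfunded liability can be computed and why its reported size depends on the discount-rate assumption.

  # Hypothetical sketch: expected present value of promised benefits, and the
  # unfunded liability that remains after netting out plan assets.
  # All numbers are illustrative placeholders, not actual plan data.

  def present_value(payments, discount_rate):
      """Discount a stream of expected future benefit payments back to today."""
      return sum(p / (1 + discount_rate) ** t for t, p in enumerate(payments, start=1))

  expected_benefit_payments = [100.0] * 20   # e.g., $100 per year for 20 years (hypothetical)
  plan_assets = 600.0                        # assets set aside so far (hypothetical)

  for rate in (0.04, 0.06, 0.08):            # alternative discount-rate assumptions
      liability = present_value(expected_benefit_payments, rate)
      unfunded = liability - plan_assets
      print(f"discount rate {rate:.0%}: liability = {liability:,.0f}, unfunded = {unfunded:,.0f}")

A higher assumed discount rate shrinks the computed liability, which is consistent with the point in the GAO report quoted above that discount-rate and healthcare-cost assumptions affect the size of the reported unfunded liability.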

[34] Report: “Preview of the 2013 Comprehensive Revision of the National Income and Product Accounts: Changes in Definitions and Presentations.” By Shelly Smith and others. U.S. Bureau of Economic Analysis, March 2013. <www.bea.gov>

Page 21: “Accrual accounting is the preferred method for compiling national accounts because it matches incomes earned from production with the corresponding productive activity and records both in the same period.”

[35] Summary of Statement No. 106: “Employers’ Accounting for Postretirement Benefits Other Than Pensions.” Financial Accounting Standards Board, December 1990. <www.fasb.org>

The Board believes that measurement of the obligation and accrual of the cost based on best estimates are superior to implying, by a failure to accrue, that no obligation exists prior to the payment of benefits. The Board believes that failure to recognize an obligation prior to its payment impairs the usefulness and integrity of the employer’s financial statements. …

This Statement relies on a basic premise of generally accepted accounting principles that accrual accounting provides more relevant and useful information than does cash basis accounting. …

[L]ike accounting for other deferred compensation agreements, accounting for postretirement benefits should reflect the explicit or implicit contract between the employer and its employees.

[36] Email from the U.S. Bureau of Economic Analysis to Just Facts, March 19, 2015.

“Retiree health care benefits (which are separate from pensions) are treated on a cash basis and are effectively included in the compensation of current workers.”

[37] Webpage: “What is included in federal government employee compensation?” U.S. Bureau of Economic Analysis. Accessed March 19, 2015 at <www.bea.gov>

“The contributions for employee health insurance consist of the federal share of premium payments to private health insurance plans for current employees and retirees.”

[38] Calculated with data from:

a) Table 3.16: “Government Current Expenditures by Function.” U.S. Department of Commerce, Bureau of Economic Analysis. Last revised September 17, 2014. <www.bea.gov>

b) Report: “Fiscal Year 2015 Historical Tables: Budget Of The U.S. Government.” White House Office of Management and Budget, February 26, 2014. <www.whitehouse.gov>

“Table 3.1—Outlays by Superfunction and Function: 1940–2018.”

Accessed October 24, 2014 at <www.whitehouse.gov>

NOTE: An Excel file containing the data and calculations is available upon request.

[39] The next 3 footnotes document that:

  • private-sector economic output is equal to personal consumption expenditures (PCE) + gross private domestic investment (GPDI) + net exports of goods and services.
  • PCE is the “primary measure of consumer spending on goods and services” by private individuals and nonprofit organizations.
  • GPDI is a measure of private spending on “structures, equipment, and intellectual property products.”

Since education is not a service that is typically imported or exported, a valid approximation of private spending on education can be arrived at by summing PCE and GPDI. The fourth footnote below details the data used in this calculation.
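
NOTE: As a worked illustration of the approximation described above, the following minimal sketch in Python simply sums the education components of PCE and GPDI and treats net exports of education services as negligible. The dollar figures are hypothetical placeholders, not values drawn from the BEA tables cited in the fourth footnote below.

  # Hypothetical sketch of the approximation described above:
  # private education spending ≈ PCE (education) + GPDI (education),
  # assuming net exports of education services are roughly zero.
  # The figures are placeholders, not values from BEA Tables 2.3.5U or 1.1.5.

  pce_education = 250.0          # consumer spending on education services, $ billions (hypothetical)
  gpdi_education = 20.0          # private investment in educational structures/equipment, $ billions (hypothetical)
  net_exports_education = 0.0    # assumed negligible, since education is rarely imported or exported

  private_education_spending = pce_education + gpdi_education + net_exports_education
  print(f"Approximate private spending on education: ${private_education_spending:.0f} billion")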

[40] Report: “Fiscal Year 2013 Analytical Perspectives, Budget Of The U.S. Government.” White House Office of Management and Budget, February 12, 2012. <www.gpo.gov>

Page 471:

The main purpose of the NIPAs [national income and product accounts published by the U.S. Bureau of Economic Analysis] is to measure the Nation’s total production of goods and services, known as gross domestic product (GDP), and the incomes generated in its production. GDP excludes intermediate production to avoid double counting. Government consumption expenditures along with government gross investment — State and local as well as Federal — are included in GDP as part of final output, together with personal consumption expenditures, gross private domestic investment, and net exports of goods and services (exports minus imports).

[41] Report: “Concepts and Methods of the U.S. National Income and Product Accounts (Chapters 1–11 and 13).” U.S. Bureau of Economic Analysis, November 2014. <www.bea.gov>

Page 5-1:

Personal consumption expenditures (PCE) is the primary measure of consumer spending on goods and services in the U.S. economy.1 It accounts for about two-thirds of domestic final spending, and thus it is the primary engine that drives future economic growth. PCE shows how much of the income earned by households is being spent on current consumption as opposed to how much is being saved for future consumption.

PCE also provides a comprehensive measure of types of goods and services that are purchased by households. Thus, for example, it shows the portion of spending that is accounted for by discretionary items, such as motor vehicles, or the adjustments that consumers make to changes in prices, such as a sharp run-up in gasoline prices.2

Page 5-2:

PCE measures the goods and services purchased by “persons”—that is, by households and by nonprofit institutions serving households (NPISHs)—who are resident in the United States. Persons resident in the United States are those who are physically located in the United States and who have resided, or expect to reside, in this country for 1 year or more. PCE also includes purchases by U.S. government civilian and military personnel stationed abroad, regardless of the duration of their assignments, and by U.S. residents who are traveling or working abroad for 1 year or less.

Page 5-64:

Nonprofit institutions serving households

In the NIPAs, nonprofit institutions serving households (NPISHs), which have tax-exempt status, are treated as part of the personal sector of the economy. Because NPISHs produce services that are not generally sold at market prices, the value of these services is measured as the costs incurred in producing them.

In PCE, the value of a household purchase of a service that is provided by a NPISH consists of the price paid by the household or on behalf of the household for that service plus the value added by the NPISH that is not included in the price. For example, the value of the educational services provided to a student by a university consists of the tuition fee paid by the household to the university and of the additional services that are funded by sources other than tuition fees (such as by the returns to an endowment fund).

[42] Report: “Measuring the Economy: A Primer on GDP and the National Income and Product Accounts.” U.S. Bureau Of Economic Analysis, October 2014. <www.bea.gov>

Page 8: “Gross private domestic investment consists of purchases of fixed assets (structures, equipment, and intellectual property products) by private businesses that contribute to production and have a useful life of more than one year, of purchases of homes by households, and of private business investment in inventories.”

[43] Calculated with data from:

a) Dataset: “Table 2.3.5U. Personal Consumption Expenditures by Major Type of Product and by Major Function.” U.S. Bureau of Economic Analysis. Last revised June 1, 2015. <www.bea.gov>

b) Dataset: “Table 1.1.5. Gross Domestic Product.” U.S. Bureau of Economic Analysis. Last revised January 30, 2015. <www.bea.gov>

c) Dataset: “HH-1. Households by Type: 1940 to Present.” U.S. Census Bureau, Current Population Survey, January 2015. <www.census.gov>

NOTE: An Excel file containing the data and calculations is available upon request.

[44] Calculated with data from:

a) Dataset: “Table 2.3.5U. Personal Consumption Expenditures by Major Type of Product and by Major Function.” U.S. Bureau of Economic Analysis. Last revised June 1, 2015. <www.bea.gov>

b) Dataset: “Table 1.1.5. Gross Domestic Product.” U.S. Bureau of Economic Analysis. Last revised January 30, 2015. <www.bea.gov>

c) Dataset: “HH-1. Households by Type: 1940 to Present.” U.S. Census Bureau, Current Population Survey, January 2015. <www.census.gov>

NOTE: An Excel file containing the data and calculations is available upon request.

[45] Calculated with the dataset: “PINC-03. Educational Attainment--People 25 Years Old and Over, by Total Money Earnings in 2013, Work Experience in 2013, Age, Race, Hispanic Origin, and Sex; Current Population Survey 2014 Annual Social and Economic Supplement.” U.S. Census Bureau, October 2, 2014. <www.census.gov>

NOTE: An Excel file containing the data and calculations is available upon request.

[46] Report: “Income and Poverty in the United States: 2013.” By Carmen DeNavas-Walt and Bernadette D. Proctor. U.S. Census Bureau, September 2014. <www.census.gov>

Page 4: “The income and poverty estimates shown in this report are based solely on money income before taxes and do not include the value of noncash benefits, such as those provided by the Supplemental Nutrition Assistance Program (SNAP), Medicare, Medicaid, public housing, or employer-provided fringe benefits.”

[47] Calculated with the dataset: “PINC-03. Educational Attainment--People 25 Years Old and Over, by Total Money Earnings in 2013, Work Experience in 2013, Age, Race, Hispanic Origin, and Sex; Current Population Survey 2014 Annual Social and Economic Supplement.” U.S. Census Bureau, October 2, 2014. <www.census.gov>

NOTE: An Excel file containing the data and calculations is available upon request.

[48] Dataset: “PINC-03. Educational Attainment--People 25 Years Old and Over, by Total Money Earnings in 2013, Work Experience in 2013, Age, Race, Hispanic Origin, and Sex; Current Population Survey 2014 Annual Social and Economic Supplement.” U.S. Census Bureau, October 2, 2014. <www.census.gov>

[49] Report: “Key Concepts and Features of the 2003 National Assessment of Adult Literacy.” By Sheida White and Sally Dillow. U.S. Department of Education, Institute of Education Sciences, December 2005. <nces.ed.gov>

Page 3:

NAAL measures how well U.S. adults perform tasks with printed materials

As a part of their everyday lives, adults in the United States interact with a variety of printed and other written materials to perform a multitude of tasks. A comprehensive list of such tasks would be virtually endless. It would include such activities as balancing a checkbook, following directions on a prescription medicine bottle, filling out a job application, consulting a bus schedule, correctly interpreting a chart in the newspaper, and using written instructions to operate a voting machine. …

Literacy is not a single skill or quality that one either possesses or lacks. Rather, it encompasses various types of skills that different individuals possess to varying degrees. There are different levels and types of literacy, which reflect the ability to perform a wide variety of tasks using written materials that differ in nature and complexity. A common thread across all literacy tasks is that each has a purpose—whether that purpose is to pay the telephone bill or to understand a piece of poetry. All U.S. adults must successfully perform literacy tasks in order to adequately function—that is, to meet personal and employment goals as well as contribute to the community.

[50] Report: “Key Concepts and Features of the 2003 National Assessment of Adult Literacy.” By Sheida White and Sally Dillow. U.S. Department of Education, Institute of Education Sciences, December 2005. <nces.ed.gov>

Page 1: “Sponsored by the National Center for Education Statistics (NCES) in the U.S. Department of Education’s Institute of Education Sciences, the 2003 National Assessment of Adult Literacy (NAAL) is a nationally representative assessment of literacy among adults (age 16 and older) residing in households and prisons in the United States.”

Page 3:

The National Assessment of Adult Literacy (NAAL) measures the ability of a nationally representative sample of adults to perform literacy tasks similar to those that they encounter in their daily lives. Statistical procedures ensure that NAAL participants represent the entire population of U.S. adults who are age 16 and older and live in households or prisons. In 2003, the 19,714 adults who participated in NAAL represented a U.S. adult population of about 222 million. …

Like other adults, NAAL participants bring to literacy tasks a full range of backgrounds, experiences, and skill levels. Like real-life tasks, NAAL tasks vary with respect to the difficulty of the materials used as well as the complexity of the actions to be performed. However, in order to be fair to all participants, none of the tasks require specialized background knowledge, and all of them were reviewed for bias against particular groups. …

NAAL tasks reflect a definition of literacy that emphasizes the use of written materials to function adequately in one’s environment and to develop as an individual. Of course, the actual literacy tasks that individuals must perform in their daily lives vary to some extent depending on the nature of their work and personal goals. However, virtually all literacy tasks require certain underlying skills, such as the ability to read and understand common words. NAAL measures adults’ performance on a range of tasks mimicking actual tasks encountered by adults in the United States. Adults with very low levels of performance on NAAL tasks may be unable to function adequately in 21st century America.

Page 4:

NAAL examines three literacy areas—prose, document, and quantitative

NAAL reports a separate score for each of three literacy areas:

Prose literacy refers to the knowledge and skills needed to perform prose tasks—that is, to search, comprehend, and use continuous texts. Prose examples include editorials, news stories, brochures, and instructional materials.

Document literacy refers to the knowledge and skills needed to perform document tasks—that is, to search, comprehend, and use noncontinuous texts in various formats. Document examples include job applications, payroll forms, transportation schedules, maps, tables, and drug or food labels.

Quantitative literacy refers to the knowledge and skills required to perform quantitative tasks—that is, to identify and perform computations, either alone or sequentially, using numbers embedded in printed materials. Examples include balancing a checkbook, computing a tip, completing an order form, or determining the amount of interest on a loan from an advertisement.

Pages 13-14:

In addition to the four performance levels that were developed using the bookmark method, the Committee on Performance Levels for Adult Literacy also recommended that NCES report on a fifth category—Nonliterate in English. This category includes two groups of adults:

• Two percent of the adults who were selected to participate in the 2003 NAAL could not be tested—in other words, could not participate in NAAL at all—because they knew neither English nor Spanish (the other language spoken by interviewers in most areas). The Nonliterate in English category includes these adults because their inability to communicate in English indicates a lack of English literacy skills.

• Three percent of the adults who were tested in 2003 did not take the main part of the assessment, which was too difficult for them, but did take an alternative assessment specifically designed for the least-literate adults. Questions on the alternative assessment were asked in either English or Spanish, but all written materials were in English only. While some adults in this group displayed minimal English literacy skills (e.g., the ability to identify a letter or a common word in a simple text), others lacked such skills entirely. (For example, an adult who was able to attempt the alternative assessment by following oral Spanish instructions might still prove unable to do even the minimal amount of English reading needed to provide any correct answers.) The Nonliterate in English category includes these adults because their English literacy skills are minimal at best.

In 2003, the two groups of adults classified as Nonliterate in English—the 2 percent who could not be tested because of a language barrier (i.e., inability to communicate in English or Spanish) and the 3 percent who took the alternative assessment—accounted for 11 million adults, or 5 percent of the population. These adults range from having no English literacy skills to being able to “recognize some letters, numbers, or common sight words in everyday contexts” (Hauser et al. 2005).

Page 32: “Because NAAL is designed to assess literacy in English, all the written instructions and responses are in English.”

[51] Webpage: “National Assessment of Adult Literacy, Sample Questions Search: 1985, 1992 & 2003.” U.S. Department Of Education, National Center for Education Statistics. Accessed July 19, 2015 at <nces.ed.gov>

“Item Number: N120601 … Scale: Document Literacy … Task Demand: Text Search … Percent who answered correctly: 82.0%”

[52] Webpage: “National Assessment of Adult Literacy, Sample Questions Search: 1985, 1992 & 2003.” U.S. Department Of Education, National Center for Education Statistics. Accessed July 19, 2015 at <nces.ed.gov>

“Item Number: C080101 … Scale: Quantitative Literacy … Task Demand: Computation, Text Search … Percent who answered correctly: 59.6%”

[53] Webpage: “National Assessment of Adult Literacy, Sample Questions Search: 1985, 1992 & 2003.” U.S. Department Of Education, National Center for Education Statistics. Accessed July 19, 2015 at <nces.ed.gov>

“Item Number: N130901 … Scale: Quantitative Literacy … Task Demand: Computation, Text Search … Percent who answered correctly: 45.8%”

[54] Webpage: “National Assessment of Adult Literacy, Sample Questions Search: 1985, 1992 & 2003.” U.S. Department Of Education, National Center for Education Statistics. Accessed July 19, 2015 at <nces.ed.gov>

“Item Number: N091001 … Scale: Quantitative Literacy … Task Demand: Computation, Text Search … Percent who answered correctly: 17.6%”

[55] Webpage: “National Assessment of Adult Literacy, Sample Questions Search: 1985, 1992 & 2003.” U.S. Department Of Education, National Center for Education Statistics. Accessed July 19, 2015 at <nces.ed.gov>

“Item Number: N100701 … Scale: Document Literacy … Task Demand: Application, Inferential, Text Search … Percent who answered correctly: 10.6%”

[56] Article: “Mann, Horace.” Encyclopædia Britannica Ultimate Reference Suite 2004.

U.S. educator, the first great American advocate of public education, who believed that, in a democratic society, education should be free and universal, nonsectarian, democratic in method, and reliant on well-trained, professional teachers. …

… He started a biweekly Common School Journal for teachers and lectured widely to interested groups of citizens. His annual reports to the board ranged far and wide through the field of pedagogy, stating the case for the public school and discussing its problems. Essentially his message centred on six fundamental propositions:

(1) that a republic cannot long remain ignorant and free, hence the necessity of universal popular education;

(2) that such education must be paid for, controlled, and sustained by an interested public;

(3) that such education is best provided in schools embracing children of all religious, social, and ethnic backgrounds;

(4) that such education, while profoundly moral in character, must be free of sectarian religious influence;

(5) that such education must be permeated throughout by the spirit, methods, and discipline of a free society, which preclude harsh pedagogy in the classroom; and

(6) that such education can be provided only by well-trained, professional teachers.

Mann encountered strong resistance to these ideas—from clergymen who deplored nonsectarian schools, from educators who condemned his pedagogy as subversive of classroom authority, and from politicians who opposed the board as an improper infringement of local educational authority—but his views prevailed.

[57] Webpage: “Horace Mann.” PBS. Accessed July 9, 2015 at <www.pbs.org>

Horace Mann, often called the Father of the Common School, began his career as a lawyer and legislator. … He spearheaded the Common School Movement, ensuring that every child could receive a basic education funded by local taxes. His influence soon spread beyond Massachusetts as more states took up the idea of universal schooling. …

… These developments were all part of Mann’s driving determination to create a system of effective, secular, universal education in the United States.

[58] The Common School Journal for the Year 1841 (Volume III). Edited by Horace Mann (Secretary of the Massachusetts Board of Education). Marsh, Capen, Lyon, and Webb, 1841. Page 15:

CONCLUSION.

The tendency of the preceding remarks must be obvious, and therefore our application of them may be brief.

In the first place, if there must be institutions, associations, combinations amongst men, whose tendency is to alienation and discord; to whet the angry feelings of individuals against each other; to transmit the contentions of the old to the young, and to make the enmities of the dead survive to the living;— if these things must continue to be, in a land calling itself Christian;—let there be one institution, at least, which shall be sacred from the ravages of the spirit of party,—one spot, in the wide land, unblasted by the fiery breath of animosity. Amid unions for aggression, let there be one rallying point for a peaceful and harmonious cooperation and fellowship, where all the good may join, in the most beneficent of labors. The young do not come into life, barbed and fanged against each other. A blow is never the salutation which two infants give, on meeting for the first time. By a proper training, the kindly feelings may be kept uppermost. Those powers may be cultivated, which have the double blessing of bestowing happiness on the possessor and on the race. The Common School is the institution which can receive and train up children in the elements of all good knowledge, and of virtue, before they are subjected to the alienating competitions of life. This institution is the greatest discovery ever made by man;—we repeat it, the Common School is the greatest discovery ever made by man. In two grand, characteristic attributes, it is supereminent over all others:—first, in its universality;— for it is capacious enough to receive and cherish in its parental bosom every child that comes into the world; and second, in the timeliness of the aid it proffers;—its early, seasonable supplies of counsel and guidance making security antedate danger. Other social organizations are curative and remedial; this is a preventive and an antidote; they come to heal diseases and wounds; this to make the physical and moral frame invulnerable to them. Let the Common School be expanded to its capabilities, let it be worked with the efficiency of which it is susceptible, and nine tenths of the crimes in the penal code would become obsolete; the long catalogue of human ills would be abridged; men would walk more safely by day; every pillow would be more inviolable by night; property, life, and character held by a stronger tenure; all rational hopes respecting the future brightened.

[59] Book: Life of Horace Mann (Volume 1). By Mary Tyler Peabody Mann (Horace Mann’s wife). Walker, Fuller, and Company, 1865. Pages 141-142:

Dec. 20. Have been engaged mainly this week with a long article for the first number of the third volume of the “Common-school Journal.” …

In this introduction, Mr. Mann shows how forcibly his mind had been led, by the “wild roar of party politics” of that year, to look into the secret springs of public action; and how futile is the attempt to “define truth by law, and to perpetuate it by power and wealth, instead of knowledge.” He closes it in these words, which apply equally to our own times: …

… The common school is the institution which can receive and train up children in the elements of all good knowledge and of virtue before they are subjected to the alienating competitions of life. This institution is the greatest discovery ever made by man: we repeat it, the common school is the greatest discovery ever made by man. … Let the common school be expanded to its capabilities, let it be worked with the efficiency of which it is susceptible, and nine-tenths of the crimes in the penal code would become obsolete; the long catalogue of human ills would be abridged; men would walk more safely by day; every pillow would be more inviolable by night; property, life, and character held by a stronger tenure; all rational hopes respecting the future brightened.

[60] Article: “Most U.S. youths unfit to serve, data show.” By William H. McMichael. Army Times, November 3, 2009. <www.armytimes.com>

According to the latest Pentagon figures, a full 35 percent, or more than one-third, of the roughly 31.2 million Americans aged 17 to 24 are unqualified for military service because of physical and medical issues. And, said Curt Gilroy, the Pentagon’s director of accessions, “the major component of this is obesity. We have an obesity crisis in the country. There’s no question about it.”

The Pentagon draws its data from the Centers for Disease Control, which regularly tracks obesity. The steadily rising trend is not good news for military recruiters, despite their recent successes, nor for the overall health of the U.S. population.

In 1987, according to the CDC, a mere 6 percent of 18- to 34-year-olds, or about 1 out of 20, were obese. In 2008, 22 years later, 23 percent of that age group — almost 1 out of 4 — was considered to be obese. …

“Kids are just not able to do push-ups,” Gilroy said. “And they can’t do pull-ups. And they can’t run.”

The reasons are “almost common knowledge,” Gilroy said — what he called “the couch potato syndrome” and the widespread elimination of scholastic physical fitness programs.

[61] Article: “Army: 77% of Young Americans Now Unfit to Serve.” By Kevin Haraldson. News Radio 1200 WOAI (San Antonio, TX), January 5, 2014. <www.woai.com>

The commander of the U.S. Army Recruiting Command tells 1200 WOAI news that more than three quarters of all of the 17 to 24 year old men and women in America are currently not eligible for enlistment in the Army, mainly because they are overweight.

“The latest figures we have is 77.5% are disqualified for one reason or another,” Maj. Gen. Allen Batschelet said in an interview. “That means just 22.5% would be qualified.”

He said prospective recruits disqualify themselves for three main reasons. One is what the Army refers to as ‘morally disqualified,’ meaning they have used or are using illegal drugs or have a criminal record. Number two are ‘cognitive disqualifications,’ meaning they are not educated enough to pass the Army entrance exam. But the third, and the most widespread, are physical disqualifications, which are mainly due to being overweight.

[62] Article: “Recruits’ Ineligibility Tests the Military: More Than Two-Thirds of American Youth Wouldn’t Qualify for Service, Pentagon Says.” By Miriam Jordan. Wall Street Journal, June 27, 2014. <online.wsj.com>

More than two-thirds of America’s youth would fail to qualify for military service because of physical, behavioral or educational shortcomings, posing challenges to building the next generation of soldiers even as the U.S. draws down troops from conflict zones. …

The military services don’t keep figures on how many people they turn away. But the Defense Department estimates 71% of the roughly 34 million 17- to 24-year-olds in the U.S. would fail to qualify to enlist in the military if they tried, a figure that doesn’t even include those turned away for tattoos or other cosmetic issues.

[63] Paper: “The Importance of Noncognitive Skills: Lessons from the GED Testing Program.” By James J. Heckman and Yona Rubinstein. American Economic Review, May, 2001. Pages 145-149. <jenni.uchicago.edu>

Page 145:

It is common knowledge outside of academic journals that motivation, tenacity, trustworthiness, and perseverance are important traits for success in life. … Numerous instances can be cited of high-IQ people who failed to achieve success in life because they lacked self discipline and low-IQ people who succeeded by virtue of persistence, reliability, and self-discipline. The value of trustworthiness has recently been demonstrated when market systems were extended to Eastern European societies with traditions of corruption and deceit.

It is thus surprising that academic discussions of skill and skill formation almost exclusively focus on measures of cognitive ability and ignore noncognitive skills. … Most assessments of school reforms stress the gain from reforms as measured by the ability of students to perform on a standardized achievement test. …

Studies by Samuel Bowles and Herbert Gintis (1976), Rick Edwards (1976), and Roger Klein et al. (1991) demonstrate that job stability and dependability are traits most valued by employers as ascertained by supervisor ratings and questions of employers….

Page 146:

The GED [General Educational Development] is a mixed signal. Dropouts who take the GED are smarter (have higher cognitive skills) than other high-school dropouts and yet at the same time have lower levels of noncognitive skills. Both types of skill are valued in the market and affect schooling choices. Our finding challenges the conventional signaling literature, which assumes a single skill. It also demonstrates the folly of a psychometrically oriented educational evaluation policy that assumes cognitive skills to be all that matter. Inadvertently, a test has been created that separates out bright but nonpersistent and undisciplined dropouts from other dropouts. It is, then, no surprise that GED recipients are the ones who drop out of school, fail to complete college (Stephen Cameron and James Heckman, 1993) and who fail to persist in the military (Janice Laurence, 2000). GED’s are “wiseguys,” who lack the abilities to think ahead, to persist in tasks, or to adapt to their environments. The performance of the GED recipients compared to both high-school dropouts of the same ability and high-school graduates demonstrates the importance of noncognitive skills in economic life.

[64] Dataset: “Table 236.55. Total and current expenditures per pupil in public elementary and secondary schools: Selected years, 1919-20 through 2011-12.” U.S. Department Of Education, National Center for Education Statistics, July 2014. <nces.ed.gov>

Expenditure per pupil in fall enrollment1 … Total expenditure4 … 2011-12 …

Unadjusted dollars2 [=] 12,010

Constant 2013-14 dollars3 [=] 12,401 …

1 Data for 1919-20 to 1953-54 are based on school-year enrollment. …

2 Unadjusted (or “current”) dollars have not been adjusted to compensate for inflation.

3 Constant dollars based on the Consumer Price Index, prepared by the Bureau of Labor Statistics, U.S. Department of Labor, adjusted to a school-year basis.

4 Excludes “Other current expenditures,” such as community services, private school programs, adult education, and other programs not allocable to expenditures per student at public schools.
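
NOTE: The conversion from unadjusted to constant 2013-14 dollars in the table above is a standard CPI rescaling: the unadjusted figure is multiplied by the ratio of the CPI in the target school year to the CPI in the year the money was spent. The following minimal sketch in Python shows the arithmetic with hypothetical index values; the published table arrives at its $12,401 figure using the actual BLS index values adjusted to a school-year basis.

  # Hypothetical sketch of the CPI rescaling behind “constant 2013-14 dollars”:
  # constant_dollars = unadjusted_dollars * (CPI in target year / CPI in spending year).
  # The index values below are placeholders, not the actual school-year-adjusted CPI.

  unadjusted_2011_12 = 12_010    # per-pupil total expenditure in unadjusted dollars (from the table above)
  cpi_2011_12 = 100.0            # hypothetical index value for school year 2011-12
  cpi_2013_14 = 103.3            # hypothetical index value for school year 2013-14

  constant_2013_14 = unadjusted_2011_12 * (cpi_2013_14 / cpi_2011_12)
  print(f"Per-pupil expenditure in constant 2013-14 dollars: ${constant_2013_14:,.0f}")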

[65] Report: “Documentation to the NCES Common Core of Data Public Elementary/Secondary School Universe Survey: School Year 2010–11, Version Provisional 2a.” U.S. Department Of Education, National Center for Education Statistics, September 2012. <nces.ed.gov>

Page C-6: “Elementary A general level of instruction classified by state and local practice as elementary, composed of any span of grades not above grade 8; preschool or kindergarten included only if it is an integral part of an elementary school or a regularly established school system.”

Page C-14: “Secondary The general level of instruction classified by state and local practice as secondary and composed of any span of grades beginning with the next grade following the elementary grades and ending with or below grade 12.”

[66] Dataset: “Table 236.55. Total and current expenditures per pupil in public elementary and secondary schools: Selected years, 1919-20 through 2011-12.” U.S. Department Of Education, National Center for Education Statistics, July 2014. <nces.ed.gov>

“NOTE: Beginning in 1980-81, state administration expenditures are excluded from both ‘total’ and ‘current’ expenditures.”

[67] The next seven footnotes document that:

  • NCES data on education spending does not account for unfunded pension or healthcare benefits.
  • “Defined benefit” pension programs guarantee employees specified levels of benefits, regardless of how much money the employer has previously set aside to pay those benefits.
  • Most government employees receive defined benefit pensions.
  • Many government pension plans are underfunded.

[68] Email from the U.S. Department Of Education, National Center for Education Statistics to Just Facts, March 31, 2015.

“The expenditures reported [in Table 213 and elsewhere by the National Center for Education Statistics] do not include or account for unfunded pension benefits or unfunded healthcare benefits.”

[69] Report: “Documentation for the NCES Common Core of Data National Public Education Financial Survey (NPEFS), School Year 2010–11 (Fiscal Year 2011), Preliminary File Version 1a.” U.S. Department Of Education, National Center for Education Statistics, December 2013. <nces.ed.gov>

Pages 5-6:

NPEFS [National Public Education Finance Survey] collects employee benefits for the functions of instruction, support services, and operation of noninstructional services. NPEFS respondents are currently reporting employee benefits, which are defined as the “Amounts paid by the school district on behalf of employees (amounts not included in gross salary but in addition to that amount). Such payments are fringe benefits payments and although not directly paid to employees, nevertheless are part of the cost of personal services.”13 The definition of employee benefits is derived from the NCES school finance accounting handbook, Financial Accounting for Local and State School Systems: 2009 Edition (Allison, Honegger, and Johnson 2009). NPEFS does not collect actuarially determined annual required contributions;14 accrued annual requirement contribution liability;15 or the actuarial value of pension plan assets.16

13 The NPEFS instruction manual provides that employee benefits “include amounts paid by, or on behalf of, an LEA for fringe benefits such as group insurance (including health benefits for current and retired employees), social security contributions, retirement contributions, tuition reimbursements, unemployment compensation, worker’s compensation, and other benefits such as unused sick leave” (NCES, 2012).

14 Actuarially determined annual required contributions are the annual required contribution (ARC) that incorporates both the cost of benefits in the current year and the amortization of the plan’s unfunded actuarial accrued liability.

15 The accrued annual requirement contribution liability is the difference between actuarially determined contributions and actual payments made to the pension fund.

16 Actuarial value of pension plan assets is the value of cash, investments, and other property belonging to a pension plan as used by an actuary for the purpose of an actuarial valuation.

[70] Article: “Defined Benefit Pensions and Household Income and Wealth.” By Marshall B. Reinsdorf and David G. Lenze. Survey of Current Business (published by the U.S. Bureau of Economic Analysis), August 2009. Pages 50-62. <www.bea.gov>

Pages 50-51:

U.S. households usually participate in two kinds of retirement income programs: social security, and a plan sponsored by their employer. The employer plan may be organized as either a defined contribution plan, such as a 401(k) plan, or a defined benefit plan. Defined contribution plans provide resources during retirement based on the amount of money that has been accumulated in an account, while defined benefit plans determine the level of benefits by a formula that typically depends on length of service and average or final pay. …

… A defined benefit plan has an actuarial liability for future benefits equal to the expected present value of the benefits to which the plan participants are entitled under the benefit formula. The value of participants’ benefit entitlement often does not coincide with the value of the assets that the plan has on hand; indeed, a plan that has a pay-as-you-go funding scheme might have only enough assets to ensure that it can make the current period’s benefit payments.2

A complete measure of the wealth of defined benefit plan participants is the expected present value of the benefits to which they are entitled, not the assets of the plan. This follows from the fact that if the assets of a defined benefit plan are insufficient to pay promised benefits, the plan sponsor must cover the shortfall. …

… [U]nder the accrual approach, the measure of compensation income for the participants in the plan is no longer the employer’s actual contributions to the plan. Instead, it is the present value of the benefits to which employees become entitled as a result of their service to the employer.

Measuring household income from defined benefit plans by actual contributions from employers plus actual investment income on plan assets can be considered a cash accounting approach to measuring these plans’ transactions…. We use the term “accrual accounting” to mean any approach that adopts the principle that a plan’s benefit obligations ought to be recorded as they are incurred.

2. Federal law requires that private pension plans operate as funded plans, not as pay-as-you-go plans.

[71] Report: “Preview of the 2013 Comprehensive Revision of the National Income and Product Accounts: Changes in Definitions and Presentations.” By Shelly Smith and others. U.S. Bureau of Economic Analysis, March 2013. <www.bea.gov>

Page 22:

For defined benefit plans, the cash accounting approach is inadequate because the value of the benefit entitlements that participants accrue during a year often fails to coincide with the plans’ cash receipts. …

… An employer who offers a defined benefit pension plan promises that an employee will receive a specified amount of future benefits that usually increases with each year of service.

[72] Textbook: Fiscal Administration. By John Mikesell. Wadsworth, Cengage Learning, 2014.

Page 170:

The vast majority of public employee pension programs are defined benefit programs.30

30 Exceptions to the rule that government employees are in defined benefit programs: faculty at many state universities are in the TIAA/CREF defined contribution program and federal employees in the Federal Employee Retirement System Thrift Savings Plan. In 1996, Michigan established a defined contribution plan for all new employees. In 1991, West Virginia school employees were put in such a plan.

[73] Paper: “Bringing Actuarial Measures of Defined Benefit Pensions into the U.S. National Accounts.” By Marshall Reinsdorf (International Monetary Fund), David Lenze (U.S. Bureau of Economic Analysis), and Dylan Rassier (U.S. Bureau of Economic Analysis). International Monetary Fund, International Association for Research in Income and Wealth, 33rd General Conference, August 24-30, 2014. <www.iariw.org>

Pages 12-13:

Although private DB [defined benefit] plans are on the decline, for state and local government employees DB pension plans continue to be the predominant form of retirement plan. … In 2012, there were 227 state-administered and 3,771 locally-administered DB pension plans according to the Survey of Public Pension Plans conducted by the U.S. Census Bureau. The number of active state and local plan members was 14.4 million (91 percent of the 15.9 million full-time equivalent employees), and the number of beneficiaries receiving periodic benefit payments was 9.0 million.

[74] Webpage: “What changes were made to pensions during the 2013 comprehensive revision, and how have the changes affected private, federal, and state and local compensation?” U.S. Bureau of Economic Analysis, July 31, 2013. <www.bea.gov>

“A large number of state and local pension plans are underfunded, which means that the value of the plans’ assets is less than their accrued pension liabilities for current workers and retirees.”

[75] The next 4 footnotes document that:

  • NCES data on education spending does not account for unfunded healthcare and other post-employment benefits.
  • Retiree health benefits are common in the government sector and rare in the private sector.
  • Substantial amounts of healthcare benefits promised to government employees are unfunded.

[76] Email from the U.S. Department Of Education, National Center for Education Statistics to Just Facts, March 31, 2015.

“The expenditures reported [in Table 213 and elsewhere by the National Center for Education Statistics] do not include or account for unfunded pension benefits or unfunded healthcare benefits.”

[77] Report: “Documentation for the NCES Common Core of Data National Public Education Financial Survey (NPEFS), School Year 2010–11 (Fiscal Year 2011), Preliminary File Version 1a.” U.S. Department Of Education, National Center for Education Statistics, December 2013. <nces.ed.gov>

Pages 5-6:

NPEFS [National Public Education Finance Survey] collects employee benefits for the functions of instruction, support services, and operation of noninstructional services. NPEFS respondents are currently reporting employee benefits, which are defined as the “Amounts paid by the school district on behalf of employees (amounts not included in gross salary but in addition to that amount). Such payments are fringe benefits payments and although not directly paid to employees, nevertheless are part of the cost of personal services.”13 The definition of employee benefits is derived from the NCES school finance accounting handbook, Financial Accounting for Local and State School Systems: 2009 Edition (Allison, Honegger, and Johnson 2009). NPEFS does not collect actuarially determined annual required contributions;14 accrued annual requirement contribution liability;15 or the actuarial value of pension plan assets.16

13 The NPEFS instruction manual provides that employee benefits “include amounts paid by, or on behalf of, an LEA for fringe benefits such as group insurance (including health benefits for current and retired employees), social security contributions, retirement contributions, tuition reimbursements, unemployment compensation, worker’s compensation, and other benefits such as unused sick leave” (NCES, 2012).

14 Actuarially determined annual required contributions are the annual required contribution (ARC) that incorporates both the cost of benefits in the current year and the amortization of the plan’s unfunded actuarial accrued liability.

15 The accrued annual requirement contribution liability is the difference between actuarially determined contributions and actual payments made to the pension fund.

16 Actuarial value of pension plan assets is the value of cash, investments, and other property belonging to a pension plan as used by an actuary for the purpose of an actuarial valuation.

[78] Report: “Employment-Based Retiree Health Benefits: Trends in Access and Coverage, 1997–2010.” By Paul Fronstin and Nevin Adams. Employee Benefit Research Institute, October 2012. <www.ebri.org>

Page 1: “Very few private-sector employers currently offer retiree health benefits, and the number offering them has been declining. In 2010, 17.7 percent of workers were employed at establishments that offered health coverage to early retirees, down from 28.9 percent in 1997.”

Page 4:

One of the most important factors (if not the single most important) contributing to the decline in the availability of retiree health benefits was a 1990 accounting rule change.1

The Financial Accounting Standards Board (FASB) issued Financial Accounting Statement No. 106 (FAS 106), “Employers’ Accounting for Postretirement Benefits Other Than Pensions” in December 1990, and it triggered many of the changes that private-sector employers have made to retiree health benefits. FAS 106 required companies to record retiree-health-benefit liabilities on their financial statements in accordance with generally accepted accounting principles, beginning with fiscal years after Dec. 15, 1992. Specifically, FAS 106 required private-sector employers to accrue and expense certain payments for future claims as well as actual paid claims. The immediate income-statement inclusion and balance-sheet-footnote recognition of these liabilities dramatically affected companies’ reported profits and losses. With this new view of the cost and the increasing expense of providing retiree health benefits, many private-sector employers overhauled their retiree health programs in ways that controlled, reduced, or eliminated these costs.2

Page 8:

The AHRQ [Agency for Healthcare Research and Quality] data show a similar trend among state-government employers. Among state employers, the percentage offering retiree health benefits increased between 1997 and 2003. In 2003, 94.9 percent were providing health coverage to early retirees and 88.6 percent were providing health coverage to Medicare-eligible retirees (Figure 4). However, recently, the percentage of state-government employers offering retiree health benefits has fallen. By 2010, 70 percent were offering health coverage to early retirees and 63.2 percent were offering it to Medicare-eligible retirees.

Similarly, there has been a recent decline in the percentage of local-government employers offering retiree health benefits. Between 2006 and 2010, the percentage of local governments with 10,000 or more workers that offered health coverage to early retirees fell from 95.1 percent to 77.6 percent, and the percentage offering it to Medicare-eligible retirees fell from 86.2 percent to 67.3 percent (Figure 5). Some of this decline may be due to recent GASB rules mentioned above.

Only a few local governments reported that they have either recently or soon plan to eliminate health benefits for retirees. Instead, local governments have shifted (or plan to shift) the costs to retirees. In 2011, 2 percent of local governments reported that they eliminated coverage in the past two years or planned to eliminate coverage in the next two years for early retirees (Figure 6). Five percent reported doing so, or planning to do so, for Medicare-eligible retirees. In contrast, 21 percent reported that they eliminated the employer subsidy in the past two years or planned to do so in the following two years for early-retiree coverage, and 32 percent reported taking such an action for Medicare-eligible retirees.

[79] Report: “State and Local Government Retiree Health Benefits: Liabilities Are Largely Unfunded, but Some Governments Are Taking Action.” U.S. Government Accountability Office, November 2009. <www.gao.gov>

Accounting standards require governments to account for the costs of other post-employment benefits (OPEB)—the largest of which is typically retiree health benefits—when an employee earns the benefit. As such, governments are reporting their OPEB liabilities—the amount of the obligation to employees who have earned OPEB. As state and local governments have historically not funded retiree health benefits when the benefits are earned, much of their OPEB liability may be unfunded. Amid fiscal pressures facing governments, this has raised concerns about the actions the governments can take to address their OPEB liabilities. …

The total unfunded OPEB liability reported in state and the largest local governments’ CAFRs exceeds $530 billion. However, as variations between studies’ totals show, totaling unfunded OPEB liabilities across governments is challenging for a number of reasons, including the way that governments disclose such data. The unfunded OPEB liabilities for states and local governments GAO reviewed varied widely in size. Most of these governments do not have any assets set aside to fund them. The total for unfunded OPEB liabilities is higher than $530 billion because GAO reviewed OPEB data in CAFRs for the 50 states and 39 large local governments but not data for all local governments or additional data reported in separate financial reports. Also, the CAFRs we reviewed report data that predate the market downturn. Finally, OPEB valuations are based on assumptions about the health care cost inflation rate and discount rates for assets, which also affect the size of the unfunded liability.

Some state and local governments have taken actions to address liabilities associated with retiree health benefits by setting aside assets to prefund the liabilities before employees retire and reducing these liabilities by changing the structure of retiree health benefits. Approximately 35 percent of the 89 governments for which GAO reviewed CAFRs reported having set aside some assets for OPEB liabilities, but the percentage of the OPEB liability funded varied.

[80] Article: “The 2015 EdNext Poll on School Reform: Public thinking on testing, opt out, common core, unions, and more.” By Michael B. Henderson, Paul E. Peterson, and Martin R. West. Education Next, Winter 2016. <educationnext.org>

These are among the many findings to emerge from the ninth annual Education Next survey, administered in May and June 2015 to a nationally representative sample of some 4,000 respondents, including oversamples of roughly 700 teachers, 700 African Americans, and 700 Hispanics (see methodology sidebar). …

The results presented here are based upon a nationally representative, stratified sample of adults (age 18 years and older) and representative oversamples of the following subgroups: teachers (693), African Americans (661), and Hispanics (734). Total sample size is 4,083. Respondents could elect to complete the survey in English or Spanish. Survey weights were employed to account for nonresponse and the oversampling of specific groups. …

The survey was conducted from May 21 to June 8, 2015, by the polling firm Knowledge Networks (KN), a GfK company. KN maintains a nationally representative panel of adults, obtained via address-based sampling techniques, who agree to participate in a limited number of online surveys.
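
NOTE: The survey weighting mentioned in the passage above can be illustrated with a simple weighted average: each respondent’s answer is multiplied by a weight so that oversampled groups (such as teachers) do not dominate the overall estimate. The following minimal sketch in Python uses hypothetical responses and weights; it is a generic illustration, not the actual weighting procedure used by Education Next or Knowledge Networks.

  # Hypothetical sketch of how survey weights adjust an estimate for oversampling
  # and nonresponse. Responses and weights are placeholders, not actual poll data.

  responses = [6000, 7200, 5400, 6500]   # e.g., respondents' guesses of per-pupil spending (hypothetical)
  weights = [1.2, 0.4, 0.9, 1.5]         # design/nonresponse weights (hypothetical)

  weighted_mean = sum(w * r for w, r in zip(weights, responses)) / sum(weights)
  print(f"Weighted average estimate: ${weighted_mean:,.0f}")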

[81] Poll: “Policy and Governance Survey 2015.” Commissioned by Education Next and the Program on Education Policy and Governance at the Harvard Kennedy School of Government. Conducted by Knowledge Networks during May-June 2015. <educationnext.org>

Page 17:

6. Based on your best guess, what is the average amount of money spent each year for a child in public schools in your local school district?

Public $6,307

Parents $5,540

Teachers $7,186

African Americans $5,585

Hispanics $5,956

Whites $6,435

[82] Calculated with data from:

a) Dataset: “Table 236.55. Total and current expenditures per pupil in public elementary and secondary schools: Selected years, 1919-20 through 2011-12.” U.S. Department Of Education, National Center for Education Statistics, July 2014. <nces.ed.gov>

b) Dataset: “Table 209.30. Highest degree earned, years of full-time teaching experience, and average class size for teachers in public elementary and secondary schools, by state: 2011-12.” U.S. Department Of Education, National Center for Education Statistics, May 2013. <nces.ed.gov>

c) Dataset: “Table 105.20. Enrollment in educational institutions, by level and control of institution, enrollment level, and attendance status and sex of student: Selected years, fall 1990 through fall 2023.” U.S. Department Of Education, National Center for Education Statistics, January 2014. <nces.ed.gov>

NOTES:

- “[T]eachers for students with disabilities and other special teachers … are generally excluded from class size calculations.” [Dataset: “Table 208.20. Public and private elementary and secondary teachers, enrollment, pupil/teacher ratios, and new teacher hires: Selected years, fall 1955 through fall 2023.” U.S. Department Of Education, National Center for Education Statistics, February 2014. <nces.ed.gov>]

- An Excel file containing the data and calculations is available upon request.

[83] See these 13 footnotes for documentation that the following items are excluded from spending data published by the National Center for Education Statistics:

  • State administration spending
  • Unfunded pension benefits
  • Post-employment non-pension benefits like health insurance

[84] Dataset: “Table 236.55. Total and current expenditures per pupil in public elementary and secondary schools: Selected years, 1919-20 through 2011-12.” U.S. Department Of Education, National Center for Education Statistics, July 2014. <nces.ed.gov>

[85] Report: “Racial Disparities in Education Finance: Going Beyond Equal Revenues.” By Kim Rueben and Sheila Murray. Urban Institute, November 2008. <www.taxpolicycenter.org>

Page 1:

In the past, because public schools were funded largely by local property taxes, property-rich and -poor school districts differed greatly in expenditures per pupil. Since the early 1970s, however, state legislatures have, on their own initiative or at the behest of state courts, implemented school finance equalization programs to reduce the disparity in within-state education spending. …

Since the 1990s, many of the challenges to state finance systems have focused on ensuring that all students have equitable access to adequate educational opportunities as required by state education clauses (Minorini and Sugarman 1999). The argument is that some districts do not provide students with an adequate education and that it is the state’s responsibility to see that districts receive the funding to enable them to do so. The remedy might require some districts to spend more (perhaps significantly more) than other districts, depending on their student population. For example, in districts with many students from low-income families and families where English is not the first language, an “adequate education” may cost more money, and the state is required to ensure that these needs are met.

Page 5:

To examine spending patterns across different populations of students, we compared average per pupil spending across districts weighted by the number of students in each racial or ethnic group. In general, differences in spending per pupil in districts serving nonwhite and white students are very small. In 1972, the ratio of nonwhite to white spending was .98; this trend had reversed by 1982, as spending per pupil for nonwhite students was slightly higher than for white students in most states and in the United States as a whole and has been for the past 20 years (figure 2). Table 2 presents spending per pupil figures for 2002 weighted by the number of students in each subgroup.

Page 7: “The results presented thus far need to be considered with a few caveats. These ratios do not reflect that the costs of educating students of different groups differ and that minority students are often found in urban districts that have higher cost structures. … In addition, although spending differences have lessened between districts, it is unclear whether inequities are lessened at the school level.”

[86] Brief: “Do Districts Enrolling High Percentages of Minority Students Spend Less?” By Thomas Parrish. U.S. Department of Education, National Center for Education Statistics, December 1996. <nces.ed.gov>

Figure 1 shows expenditures for four categories of school districts by the percentage of minority students enrolled. Each of these four categories of school districts represents about 25 percent of the nation’s public school children. Figure 1 shows that on average, during the 1989–90 school year, spending was fairly equal across school districts with less than 50 percent minority enrollment. However, districts in which 50 percent or more of the students enrolled were racial minorities spent more than those districts with less than 50 percent minority enrollment. For example, the average expenditure differential between districts with the highest and the lowest percentage of minority students was $431 per student ($5,474 versus $5,043).

Figure 1. Education Expenditures in the United States in Relation to Percentage of Minority Enrollment (1989-90)

School Districts by Percentage of Minority Enrollment; Expenditures per Student

Less than 5% [=] $5,043

5% - <20% [=] $5,169

20% - <50% [=] $5,071

50% or more [=] $5,474 …

In terms of “buying power” in school year 1989–90, districts with the highest percentages of minority students spent $286 less on public education per year than did districts with the lowest percentages of minority students ($4,103 vs. $4,389 per student) (figure 2). This change in direction occurs because school districts enrolling high percentages of minority students are more likely to be located in high-cost urban centers and to serve substantial numbers of students with special needs, thereby reducing the “buying power” of the dollars received.

Figure 2. Education “buying power” in the United States in Relation to Percentage of Minority Enrollment (1989-90)

Less than 5% [=] $4,389

5% - <20% [=] $4,350

20% - <50% [=] $4,190

50% or more [=] $4,103

[87] Book: Generational Change: Closing the Test Score Gap. Edited by Paul E. Peterson. Rowman & Littlefield, 2006. Chapter 2: “How Families and Schools Shape the Achievement Gap.” By Derek Neal (University of Chicago and NBER). Pages 26-46.

Pages 32, 44:

Under the assumption that spending per student does not vary by race within a school district, the combination of school district data on per-pupil expenditure and school-level data on the racial composition of students provides information on average per pupil spending by public schools on black and white students. Given several different definitions of average expenditure, average spending per black student in public schools ranged from roughly $100 to $500 more than the corresponding figure for white students in 2001.15 These data provide suggestive but not definitive evidence concerning racial differences in resources provided to public schools. …

15. The data come from two Common Core of Data files: the Local (School District) Education Financial Survey and the Public Elementary/Secondary School Data. I calculated averages based on just educational expenditures as well as total expenditures. I also examined the sensitivity of results to the inclusion of allocated data.

[88] Brief: “The Myth of Racial Disparities in Public School Funding.” By Jason Richwine. Heritage Foundation, April 20, 2011. <www.heritage.org>

Page 2: “One of the more rigorous reports on funding disparities was published by the Urban Institute.11 The authors of the study combined district-level spending data with the racial and ethnic composition of schools within districts. … This paper employs a similar methodology, using 2006–2007 datasets from the U.S. Department of Education to examine school funding at both the national and regional levels.”

Page 3:

Because the cost of living varies across the U.S., school expenditures are not always directly comparable. In areas with a lower cost of living, the same amount of money can buy more resources than in high-cost areas. To account for this difference, the NCES calculates a Comparable Wage Index (CWI) for each school district based on the average non-teacher wage in the district’s labor market. …

Cost adjustments should be regarded cautiously. Living expenses can still vary within markets, sometimes considerably. The District of Columbia, for example, is a high-expense city overall, but its poorest (and mostly black and Hispanic) sections have a lower cost of living than the white sections. While the raw data are likely to overstate the minority school funding advantage, the adjusted data probably understate it.

Page 4:

Public Education Spending by Race and Ethnic Group

Per-Pupil Spending; % of White Per-Pupil Spending; % of White Per-Pupil Spending, Adjusted for Cost of Living

White; $10,816; 100%; 100%

Black; $11,387; 105%; 101%

Hispanic; $10,951; 101%; 96%

Asian; $11,535; 107%; 97%
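
NOTE: The following is a minimal Python sketch of the cost-of-living adjustment described in the page 3 excerpt above. It is for illustration only: the districts, index values, and enrollment counts are hypothetical, and the assumption that nominal per-pupil spending is deflated by each district’s Comparable Wage Index relative to a baseline is a simplification of the method, not Richwine’s actual computation.

```python
# Illustrative sketch only: hypothetical districts and index values.
# Assumption: nominal per-pupil spending is deflated by each district's
# Comparable Wage Index (CWI) relative to a baseline, then averaged
# across districts weighted by enrollment.

districts = [
    {"spending": 11_000, "cwi": 1.10, "enrollment": 40_000},  # higher-cost urban district
    {"spending": 10_000, "cwi": 0.95, "enrollment": 25_000},  # lower-cost district
]

BASELINE_CWI = 1.00  # hypothetical reference value

def adjusted_per_pupil(district):
    """Deflate nominal spending by the district's CWI relative to the baseline."""
    return district["spending"] / (district["cwi"] / BASELINE_CWI)

total_students = sum(d["enrollment"] for d in districts)
weighted_average = sum(adjusted_per_pupil(d) * d["enrollment"] for d in districts) / total_students
print(round(weighted_average))  # about 10,202 with these made-up numbers
```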

[89] Dataset: “Table 235.10. Revenues for public elementary and secondary schools, by source of funds: Selected years, 1919-20 through 2011-12.” U.S. Department Of Education, National Center for Education Statistics, July 2014. <nces.ed.gov>

“2011-12 … Federal [=] 10.2% … State [=] 45.2% … Local [=] 44.6% … [Local] Property Taxes [=] 35.9% … Other [Local] Public Revenue [=] 6.7% … [Local] Private [=] 2.0%”

[90] Report: “Documentation to the NCES Common Core of Data Public Elementary/Secondary School Universe Survey: School Year 2010–11, Version Provisional 2a.” U.S. Department Of Education, National Center for Education Statistics, September 2012. <nces.ed.gov>

Page C-6: “Elementary A general level of instruction classified by state and local practice as elementary, composed of any span of grades not above grade 8; preschool or kindergarten included only if it is an integral part of an elementary school or a regularly established school system.”

Page C-14: “Secondary The general level of instruction classified by state and local practice as secondary and composed of any span of grades beginning with the next grade following the elementary grades and ending with or below grade 12.”

[91] The next 3 footnotes document that:

  • private-sector economic output is equal to personal consumption expenditures (PCE) + gross private domestic investment (GPDI) + net exports of goods and services.
  • PCE is the “primary measure of consumer spending on goods and services” by private individuals and nonprofit organizations.
  • GPDI is a measure of private spending on “structures, equipment, and intellectual property products.”

Since private school education is not a service that is typically imported or exported, a valid approximation of spending on private K-12 schools can be obtained by summing the portions of PCE and GPDI attributable to K-12 schools with government spending on private K-12 schools. The fourth footnote below details the data used in this calculation. The results of this calculation are consistent with the working paper: “Estimates of Expenditures for Private K-12 Schools.” By Michael Garet, Tsze H. Chan, and Joel D. Sherman. U.S. Department of Education, National Center for Education Statistics, May 1995. <nces.ed.gov>
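
NOTE: The following is a minimal Python sketch of the structure of this approximation, not the actual calculation. The PCE figure comes from the fourth footnote below; the GPDI and government components are placeholders, since the specific values used in the calculation are not reproduced here.

```python
# Sketch of the approximation only.
# Net exports of K-12 schooling are treated as zero, per the reasoning above.

pce_k12 = 26_525_000_000        # PCE on elementary and secondary schools (see footnote 95)
gpdi_k12 = 0                    # PLACEHOLDER: private investment attributable to K-12 schools
gov_private_k12 = 0             # PLACEHOLDER: government spending on private K-12 schools

private_k12_spending = pce_k12 + gpdi_k12 + gov_private_k12
print(f"${private_k12_spending:,.0f}")
```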

[92] Report: “Fiscal Year 2013 Analytical Perspectives, Budget Of The U.S. Government.” White House Office of Management and Budget, February 12, 2012. <www.gpo.gov>

Page 471:

The main purpose of the NIPAs [national income and product accounts published by the U.S. Bureau of Economic Analysis] is to measure the Nation’s total production of goods and services, known as gross domestic product (GDP), and the incomes generated in its production. GDP excludes intermediate production to avoid double counting. Government consumption expenditures along with government gross investment — State and local as well as Federal — are included in GDP as part of final output, together with personal consumption expenditures, gross private domestic investment, and net exports of goods and services (exports minus imports).

[93] Report: “Concepts and Methods of the U.S. National Income and Product Accounts (Chapters 1–11 and 13).” U.S. Bureau of Economic Analysis, November 2014. <www.bea.gov>

Page 5-1:

Personal consumption expenditures (PCE) is the primary measure of consumer spending on goods and services in the U.S. economy.1 It accounts for about two-thirds of domestic final spending, and thus it is the primary engine that drives future economic growth. PCE shows how much of the income earned by households is being spent on current consumption as opposed to how much is being saved for future consumption.

PCE also provides a comprehensive measure of types of goods and services that are purchased by households. Thus, for example, it shows the portion of spending that is accounted for by discretionary items, such as motor vehicles, or the adjustments that consumers make to changes in prices, such as a sharp run-up in gasoline prices.2

Page 5-2:

PCE measures the goods and services purchased by “persons”—that is, by households and by nonprofit institutions serving households (NPISHs)—who are resident in the United States. Persons resident in the United States are those who are physically located in the United States and who have resided, or expect to reside, in this country for 1 year or more. PCE also includes purchases by U.S. government civilian and military personnel stationed abroad, regardless of the duration of their assignments, and by U.S. residents who are traveling or working abroad for 1 year or less.

Page 5-64:

Nonprofit institutions serving households

In the NIPAs, nonprofit institutions serving households (NPISHs), which have tax-exempt status, are treated as part of the personal sector of the economy. Because NPISHs produce services that are not generally sold at market prices, the value of these services is measured as the costs incurred in producing them.

In PCE, the value of a household purchase of a service that is provided by a NPISH consists of the price paid by the household or on behalf of the household for that service plus the value added by the NPISH that is not included in the price. For example, the value of the educational services provided to a student by a university consists of the tuition fee paid by the household to the university and of the additional services that are funded by sources other than tuition fees (such as by the returns to an endowment fund).

[94] Report: “Measuring the Economy: A Primer on GDP and the National Income and Product Accounts.” U.S. Bureau Of Economic Analysis, October 2014. <www.bea.gov>

Page 8: “Gross private domestic investment consists of purchases of fixed assets (structures, equipment, and intellectual property products) by private businesses that contribute to production and have a useful life of more than one year, of purchases of homes by households, and of private business investment in inventories.”

[95] Calculated with data from:

a) Dataset: “Table 2.3.5U. Personal Consumption Expenditures by Major Type of Product and by Major Function.” U.S. Bureau of Economic Analysis. Last revised June 1, 2015. <www.bea.gov>

“PCE on Elementary and Secondary Schools [=] $26,525,000,000”

b) “Table 236.20. Total expenditures for public elementary and secondary education and other related programs, by function and subfunction: Selected years, 1990-91 through 2010-11.” U.S. Department Of Education, National Center for Education Statistics, July 2013. <nces.ed.gov>

c) Dataset: “Table 1.1.5. Gross Domestic Product.” U.S. Bureau of Economic Analysis. Last revised February 27, 2015. <www.bea.gov>

d) Dataset: “Table 105.20. Enrollment in educational institutions, by level and control of institution, enrollment level, and attendance status and sex of student: Selected years, fall 1990 through fall 2023.” U.S. Department Of Education, National Center for Education Statistics, January 2014. <nces.ed.gov>

NOTE: An Excel file containing the data and calculations is available upon request.

[96] Calculated with the dataset: “Table 8. Average class size for private school teachers in elementary schools, secondary schools, and schools with combined grades, by classroom type and affiliation: 2007-08.” U.S. Department Of Education, National Center for Education Statistics. Accessed June 27, 2015 at <nces.ed.gov>

NOTES

- On June 27, 2015, Just Facts conducted an extensive search for the average class size in private schools, and this dataset provided the latest and most complete available data on this subject. The most recent version of the Department Of Education’s “Digest of Education Statistics” only contains class size data for public schools. [Report: “Digest of Education Statistics 2013.” By Thomas D. Snyder and Sally A. Dillow. U.S. Department Of Education, National Center for Education Statistics, May 7, 2015. <nces.ed.gov>]

- An Excel file containing the data and calculations is available upon request.

[97] CALCULATION: $6,469 spending per student × 18.8 students per classroom = $121,617 spending per classroom
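
NOTE: The arithmetic above can be restated as a short Python check; the figures are simply those cited in footnotes 96 and 97.

```python
spending_per_student = 6_469      # dollars per student, per the calculation above
students_per_classroom = 18.8     # average private school class size (footnote 96)
print(round(spending_per_student * students_per_classroom))  # 121617
```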

[98] Calculated with data from:

a) Dataset: “Table 205.50. Private elementary and secondary enrollment, number of schools, and average tuition, by school level, orientation, and tuition: Selected years, 1999–2000 through 2011-12.” U.S. Department Of Education, National Center for Education Statistics, June 2013.

<nces.ed.gov>

“Each school reports the highest annual tuition charged for a full-time student; this amount does not take into account discounts that individual students may receive. This amount is weighted by the number of students enrolled in each school and averaged.”

b) Dataset: “Table 236.55. Total and current expenditures per pupil in public elementary and secondary schools: Selected years, 1919-20 through 2011-12.” U.S. Department Of Education, National Center for Education Statistics, July 2014. <nces.ed.gov>

NOTE: An Excel file containing the data and calculations is available upon request.

[99] Report: “Documentation to the NCES Common Core of Data Public Elementary/Secondary School Universe Survey: School Year 2010–11, Version Provisional 2a.” U.S. Department Of Education, National Center for Education Statistics, September 2012. <nces.ed.gov>

Page C-6: “Elementary A general level of instruction classified by state and local practice as elementary, composed of any span of grades not above grade 8; preschool or kindergarten included only if it is an integral part of an elementary school or a regularly established school system.”

Page C-14: “Secondary The general level of instruction classified by state and local practice as secondary and composed of any span of grades beginning with the next grade following the elementary grades and ending with or below grade 12.”

[100] Paper: “Academic Achievement and Demographic Traits of Homeschool Students: A Nationwide Study.” By Brian D. Ray. Academic Leadership, Winter 2010. <www.nheri.org>

Page 4: “Students were included in the study if a parent affirmed that his/her student was ‘… taught at home within the past 12 months by his/her parent for at least 51% of the time in the grade level now being tested.’”

Page 7: “The target population was all families in the United States who were educating their school-age children at home and having standardized achievement tests administered to their children. … A total of 11,739 students provided usable questionnaires with corresponding achievement tests.”

[101] Email from Brian D. Ray to Just Facts, May 12, 2015.

“The median amount spent per this one year on the student’s education for textbooks, lesson materials, tutoring, enrichment services, testing, counseling, evaluation, and so forth is $400 to $599. Here is the frequency list regarding the answers. …”

[102] Calculated with data from the paper: “Academic Achievement and Demographic Traits of Homeschool Students: A Nationwide Study.” By Brian D. Ray. Academic Leadership, Winter 2010. <www.nheri.org>

Page 7:

It was very challenging to calculate the response rate. One of the main problems was that, well into the study, it was discovered that many of the large-group test administrators were not communicating to their constituent homeschool families that they had been invited to participate in the study. Based on the best evidence available, the response rate was a minimum of 19% for the four main testing services with whom the study was originally planned, who worked fairly hard to get a good response from the homeschooled families, and whose students accounted for 71.5% (n = 8,397) of the participants in the study. That is, of the students who were tested and whose parents were invited to participate in the study, both test scores and survey responses were received for this group. It is possible that the response rate was higher, perhaps as much as 25% to these four testing services. For the other testing services and sources of data, the response rate was notably lower, at an estimated 11.0%. These testing services and other sources of data used a less-concentrated approach to soliciting participation and following-up with reminders to secure participation.

NOTE: An Excel file containing the data and calculations is available upon request.

[103] Textbook: Mind on Statistics (Fourth edition). By Jessica M. Utts and Robert F. Heckard. Brooks/Cole Cengage Learning, 2012.

Pages 164-165:

Surveys that simply use those who respond voluntarily are sure to be biased in favor of those with strong opinions or with time on their hands. …

According to a poll taken among scientists and reported in the prestigious journal Science … scientists don’t have much faith in either the public or the media. … It isn’t until the end of the article that we learn who responded: “The study reported a 34% response rate among scientists….” With only about a third of those contacted responding, it is inappropriate to generalize these findings and conclude that most scientists have so little faith in the public and the media.

[104] Book: Sampling: Design And Analysis (Second edition). By Sharon L. Lohr. Brooks/Cole Cengage Learning, 2010.

Pages 5-6:

The following examples indicate some ways in which selection bias can occur. …

… Nonresponse distorts the results of many surveys, even surveys that are carefully designed to minimize other sources of selection bias. Often, nonrespondents differ critically from the respondents, but the extent of that difference is unknown unless you can later obtain information about the nonrespondents. Many surveys reported in newspapers or research journals have dismal response rates – in some, the response rate is as low as 10%. It is difficult to see how results can be generalized to the population when 90% of the targeted sample cannot be reached or refuses to participate.

[105] Paper: “Response Rates to Mail Surveys Published in Medical Journals.” By David A. Asch and others. Journal of Clinical Epidemiology, 1997. Pages 1129-1136. <christakis.med.harvard.edu>

Page 1129:

The purpose of this study was to characterize response rates for mail surveys published in medical journals…. The mean response rate among mail surveys published in medical journals is approximately 60%. However, response rates vary according to subject studied and techniques used. Published surveys of physicians have a mean response rate of only 54%, and those of non-physicians have a mean response rate of 68%. … Although several mail survey techniques are associated with higher response rates, response rates to published mail surveys tend to be moderate. However, a survey’s response rate is at best an indirect indication of the extent of non-respondent bias. Investigators, journal editors, and readers should devote more attention to assessments of bias, and less to specific response rate thresholds.

The level of art and interpretation in calculating response rates reflects the indirect and therefore limited use of the response rate in evaluating survey results. So long as one has sufficient cases for statistical analyses, non-response to surveys is a problem only because of the possibility that respondents differ in a meaningful way from non-respondents, thus biasing the results22, 23. Although there are more opportunities for non-response bias when response rates are low than high, there is no necessary relationship between response rates and bias. Surveys with very low response rates may provide a representative sample of the population of interest, and surveys with high response rates may not.

Nevertheless, because it is so easy to measure response rates, and so difficult to identify bias, response rates are a conventional proxy for assessments of bias. In general, investigators do not seem to help editors and readers in this regard. As we report, most published surveys make no mention of attempts to ascertain non-respondent bias. Similarly, some editors and readers may discredit the results of a survey with a low response rate even if specific tests limit the extent or possibility of this bias.

[106] Webpage: “CPI Inflation Calculator.” United States Department of Labor, Bureau of Labor Statistics. Accessed June 3, 2015. <www.bls.gov>

“$400 in 2007 has the same buying power as $456.71 in 2014”

“$599 in 2007 has the same buying power as $683.92 in 2014”

“The CPI inflation calculator uses the average Consumer Price Index for a given calendar year. This data represents changes in prices of all goods and services purchased for consumption by urban households. This index value has been calculated every year since 1913. For the current year, the latest monthly index value is used.”
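
NOTE: The following Python sketch reproduces the calculator’s results with a simple CPI ratio. The annual-average CPI-U values shown are assumptions (the calculator page does not display them), but they return the quoted figures to the cent.

```python
# Assumed annual-average CPI-U values; the BLS calculator itself does not display them.
CPI_2007 = 207.342
CPI_2014 = 236.736

def to_2014_dollars(amount_2007):
    """Scale a 2007 dollar amount by the ratio of annual-average CPI values."""
    return amount_2007 * CPI_2014 / CPI_2007

print(round(to_2014_dollars(400), 2))  # 456.71
print(round(to_2014_dollars(599), 2))  # 683.92
```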

[107] Article: “What Have We Learned About Homeschooling?” By Eric J. Isenberg. Peabody Journal of Education, December 5, 2007. Pages 387-409. <www.tandfonline.com>

Page 398:

Parents make school choice decisions based on preferences, the quality of local schools, and constraints of income and available leisure time. Separating the causal effect of each variable on school choice requires holding the others constant. For instance, if two families with identical preferences, income, and leisure time choose different schools, the difference can be ascribed to the local education market. Families who live in the same area with the same time and income constraints but who choose different schools must have different preferences.

Page 404:

Using aggregate data or child-level data, there is some evidence that poorer academic quality of public schools and decreased choice of private schools both contribute to an increase in homeschooling. Isenberg (2003) used test score data to measure academic school quality in Wisconsin. The results indicate that in small towns, a decrease in math test scores in a school district increases the likelihood of homeschooling. The magnitude of this effect is significant. A decrease in math scores from the 1 standard deviation above the mean to 1 standard deviation below the mean increases homeschooling by 29%, from 1.9 percentage points to 2.4 percentage points, all else equal. A decrease from 2 standard deviations above to 2 standard deviations below increases homeschooling by 65%, from 1.6 percentage points to 2.7 percentage points.

Page 405:

If parents are dissatisfied with the public schools for academic, religious, or other reasons, they must choose between homeschooling and private schooling. Private school has tuition costs; homeschooling has opportunity costs of time. Isenberg (2006) showed the ways in which mothers are motivated by the amount of disposable time they have, the opportunity cost of time, and income constraints. The results are summarized in Table 3.

If a mother has preschool children as well as a school-age child, she is predisposed to stay home, decrease her work hours, or even stay out of the labor force entirely and therefore more likely to homeschool. Of course, small children require a great deal of time to care for, but this pull on a mother’s time is dominated by the incentive to withdraw from the labor force, freeing daytime hours and eliminating commute time, thereby increasing the likelihood of homeschooling. All else equal, having a preschool child younger than 3 years old increases the probability of homeschooling a school-age sibling by 1.2 percentage points; a toddler age 3 to 6 increases the probability of homeschooling by 0.5 percentage points

Having school-age siblings also increases the likelihood that a child is homeschooled. Each additional sibling beyond the first sibling increases the probability that a particular child is homeschooled. All else equal, a child with two other school-age siblings is 1.2 percentage points more likely to be homeschooled than a child with one school-age sibling, and a child with three or more siblings in school is an additional 1.7 percentage points more likely to be homeschooled than a child with two siblings. There appear to be economies of scale in homeschooling.

The presence of other adults in the household also has a significant effect on the likelihood of homeschooling. This may be because these extra adults take over household tasks, giving the mother more disposable time. Other adults in the household, including but not limited to a husband, increase the likelihood of homeschooling by 0.5 percentage points per extra adult.

[108] Calculated with the dataset: “Table 236.20. Total expenditures for public elementary and secondary education and other related programs, by function and subfunction: Selected years, 1990-91 through 2010-11.” U.S. Department Of Education, National Center for Education Statistics, July 2013. <nces.ed.gov>

“Excludes expenditures for state education agencies. Detail may not sum to totals because of rounding.”

NOTE: An Excel file containing the data and calculations is available upon request.

[109] See these 13 footnotes for documentation that the following items are excluded from spending data published by the National Center for Education Statistics:

  • State administration spending
  • Unfunded pension benefits
  • Post-employment non-pension benefits like health insurance

[110] Calculated with the dataset: “Table 236.20. Total expenditures for public elementary and secondary education and other related programs, by function and subfunction: Selected years, 1990-91 through 2010-11.” U.S. Department Of Education, National Center for Education Statistics, July 2013. <nces.ed.gov>

“Excludes expenditures for state education agencies. Detail may not sum to totals because of rounding.”

NOTE: An Excel file containing the data and calculations is available upon request.

[111] Calculated with the dataset: “Table 236.20. Total expenditures for public elementary and secondary education and other related programs, by function and subfunction: Selected years, 1990-91 through 2010-11.” U.S. Department Of Education, National Center for Education Statistics, July 2013. <nces.ed.gov>

NOTE: An Excel file containing the data and calculations is available upon request.

[112] See these 13 footnotes for documentation that the following items are excluded from spending data published by the National Center for Education Statistics:

• State administration spending

• Unfunded pension benefits

• Post-employment non-pension benefits like health insurance

[113] Calculated with data from:

a) Dataset: “Table 211.10. Average salaries for full-time teachers in public and private elementary and secondary schools, by selected characteristics: 2011-12.” U.S. Department Of Education, National Center for Education Statistics, May 2013. <nces.ed.gov>

b) Dataset: “Table 211.60. Estimated average annual salary of teachers in public elementary and secondary schools, by state: Selected years, 1969-70 through 2012-13.” U.S. Department Of Education, National Center for Education Statistics, April 2013. <nces.ed.gov>

c) Dataset: “Table 236.55. Total and current expenditures per pupil in public elementary and secondary schools: Selected years, 1919-20 through 2011-12.” U.S. Department Of Education, National Center for Education Statistics, July 2014. <nces.ed.gov>

NOTE: An Excel file containing the data and calculations is available upon request.

[114] Calculated with data from:

a) Dataset: “Table 211.10. Average salaries for full-time teachers in public and private elementary and secondary schools, by selected characteristics: 2011-12.” U.S. Department Of Education, National Center for Education Statistics, May 2013. <nces.ed.gov>

b) Dataset: “Table 236.55. Total and current expenditures per pupil in public elementary and secondary schools: Selected years, 1919-20 through 2011-12.” U.S. Department Of Education, National Center for Education Statistics, July 2014. <nces.ed.gov>

NOTE: An Excel file containing the data and calculations is available upon request.

[115] Report: “Employer Costs for Employee Compensation, Historical Listing, March 2004 – December 2014.” U.S. Bureau of Labor Statistics, March 11, 2015. <www.bls.gov>

Pages 105-106: “Table 7. State and local government workers, by occupational group: employer costs per hours worked for employee compensation and costs as a percentage of total compensation, 2004-2014 … Teachers … March 2014 … Total compensation … Cost per hour worked [=] $58.72”

[116] The source of the data for teacher compensation is the U.S. Bureau of Labor Statistics’ Employer Costs for Employee Compensation survey. The next three footnotes show that this survey does not capture the costs of retirement health benefits or the unfunded liabilities of pensions.

[117] Paper: “Compensation for State and Local Government Workers.” By Maury Gittleman (U.S. Department of Labor) and Pierce Brooks (U.S. Department of Labor). Journal of Economic Perspectives, Winter 2012. Pages 217-242. <pubs.aeaweb.org>

Appendix (<www.aeaweb.org>): “Note that the ECEC by design excludes retiree health plan costs.”

NOTE: On 12/27/2014, Just Facts wrote to the authors of this paper to confirm the statement above, and the lead author affirmed this is true. He was unable to find an explicit statement from a BLS publication stating that retiree health plan costs are not included in the ECEC, but he wrote, “If one looks at what is explicitly included, however, it is apparent that they are not included in ECEC costs.”

[118] NOTE: The information in the source below implicitly confirms the source above, because it explains that the ECEC measures the costs of employing current employees divided by their working hours. Thus, it does not capture healthcare costs for previous employees (e.g., retirees).

Article: “Analyzing employers’ costs for wages, salaries, and benefits.” By Felicia Nathan (economist in the Division of Employment Cost Trends, Bureau of Labor Statistics). Monthly Labor Review, October 1987. <www.bls.gov>

Pages 6-8:

How compensation costs are calculated

At least two approaches can be taken in measuring an employer’s costs for employee compensation [ECEC]. One approach focuses on past expenditures—that is, the actual money an employer spent on compensation during a specified time, usually a past year. The other approach focuses on current costs—annual costs based on the current price of benefits under current plan provisions. The Bureau’s previous measure of compensation cost levels, the Employer Expenditures for Employee Compensation survey, used the past expenditures approach.5 Because the ECI measures change from one time to another, it uses the current cost approach.

To estimate the total compensation cost per hour worked, the ECI (1) identifies the benefits provided, (2) determines, from current cost information (current price and current plan provisions), the cost per hour worked for each benefit, then (3) sums the costs for the benefits with the straight-time wage or salary rate. The following examples illustrate how current costs are determined for specific benefit plans, and how they differ from costs based on past expenditures. …

Example 2. A health insurance plan is provided all employees. The monthly premium for each employee is $120 for the first 6 months of a given year, and increases to $140 for the last 6 months. Each employee works 2,000 hours per year. …

In this example, the current cost at any time during the first half of the year is the annual premium divided by the annual hours worked….

Compensation cost levels, however, should reflect the current industry and occupational mix each year they are published. Thus, to estimate current cost levels for the aggregate series, it is necessary to have employment data that refer to the current mix. Such data are obtained by apportioning industry employment from the Bureau’s Current Employment Statistics program, using occupational employment by industry from the ECI sample. Industry employment estimates from the Current Employment Statistics program are published monthly, and are adjusted each year to a universe of all nonfarm establishments from March of the previous year.

5 The Employer Expenditures for Employee Compensation (EEEC) [note the difference from ECEC] survey was discontinued in 1977. While differing from the ECI in that it measured expenditures rather than current costs, the EEEC survey had other characteristics similar to those of the ECI. It covered virtually the same benefits and reported the costs on a work-hour basis. The scope of the EEEC survey was also similar to that of the ECI in that it covered the private nonfarm work force.

[119] Email from the U.S. Bureau of Labor Statistics to Just Facts, May 12, 2015.

For the purposes of NCS [the National Compensation Survey, the source of the Employer Cost for Employee Compensation data†], defined benefit costs are: actual dollar amount an establishment placed in the pension fund from cash, stock, corporate bonds and other financial instrument; Pension Benefit Guaranty Corporation (PBGC) premiums; and administration fees paid to third party administrators (ex.legal, actuary, broker’s). Those costs that are considered out-of-scope for NCS are: actuarial costs (i.e. estimate of current and future obligations); pension benefits paid to retirees; service costs (actuarially determined estimate of employer pension obligations based on benefits earned by employees during the period); and other costs (ex. interest, amortization of prior costs).

NOTE: † Report: “Work Schedules in the National Compensation Survey.” By Richard Schumann. U.S. Bureau of Labor Statistics, July 28, 2008. <www.bls.gov>

Page 1: “The National Compensation Survey (NCS) produces data on occupational earnings, compensation cost trends--the Employment Cost Index (ECI) and the Employer Cost for Employee Compensation (ECEC) series--and benefits.”

[120] Report: “Work Schedules in the National Compensation Survey.” By Richard Schumann. Bureau of Labor Statistics, July 28, 2008. <www.bls.gov>

Page 1:

Work schedules in the United States are generally viewed as consisting of an 8-hour day and a 40-hour week. But the National Compensation Survey (NCS) covers many occupations that have different types of work schedules: fire fighters, for example, who often work 24 straight hours followed by 48 hours off; truck drivers, many of whom spend days at a time on the road; waiters and waitresses, whose schedules may vary every week; and school teachers, who tend to work many hours at home. Fitting all of these different schedules into a common form for data publication can be challenging.

The National Compensation Survey (NCS) produces data on occupational earnings, compensation cost trends--the Employment Cost Index (ECI) and the Employer Cost for Employee Compensation (ECEC) series--and benefits. The wage and benefit data collected from NCS respondents come in several time frames: hourly, weekly, biweekly, monthly, or annually. Converting the raw data into a common format requires accurate work schedules. This article explains how the NCS calculates these work schedules and the role that they play in the calculation of the published data series.

Definition Of The Work Schedule

The NCS work schedule is defined as, “The number of daily hours, weekly hours, and annual weeks that employees in an occupation are scheduled and do work.” The work schedule is the standard schedule for the occupation; short-term fluctuations and one-time events are not considered unless the change becomes permanent. For example, paid or unpaid time off due to a snowstorm would not result in the adjustment of the work schedule because this would not represent a permanent change. Paid lunch periods are included in the work schedule, as is incidental time off, such as coffee breaks, or wash-up time. Vacation, holidays, sick leave, and other kinds of leave hours are included in the work schedule, but they are subtracted when calculating the number of hours worked in a year.

Page 2:

Benefit costs. The ECI and ECEC publish data for a wide variety of benefits. The costs for these benefits may take different forms, such as monthly premiums, percent of gross earnings, or days of paid leave. These costs must be converted to a common cost form to allow for the calculation of individual benefit and total benefit costs across occupations, industries, and other publication categories in the survey. The NCS uses a cost-per-hour-worked concept as the common cost form. To convert all costs to a per-hour-worked basis, the cost of each benefit is converted to an annual cost and then divided by the number of annual hours worked.

Page 4:

Additional requirements of the job. Professional and managerial employees often work beyond the established work schedule of the employer due to the requirements of their jobs. Because such workers are exempt from the overtime provisions of the Fair Labor Standards Act, employers are not required to compensate them for the additional hours. If the hours worked are not compensated for, then they usually are not recorded. Collection of the actual hours normally worked would be the preferred way of determining the work schedule, but records of hours worked by exempt employees are usually not available. In most cases, the NCS collects the employer’s best estimate of the hours normally worked by exempt employees. If the respondent is unwilling or unable to estimate the hours, then the normal work hours of other employees in the establishment are used.

The actual hours worked by elementary and secondary school teachers (who are exempt) are often not available. Time spent in lesson preparation, test construction and grading, providing additional help to students, and other nonclassroom activities are not available and therefore not recorded. The NCS uses contract hours for teachers in determining the work schedule.12 Contracts usually specify the length of the school day, the number of teaching and required nonteaching days, and the amount of time, if any, teachers are required to be in the school before and after school hours. These hours are used to construct the work schedule. For example, it is common for teacher contracts to specify that teachers will work 185 days per year. In these cases, the daily work schedule would be the length of the school day plus any time teachers are required to be in school before or after the school day, and the weekly work schedule would be the daily schedule multiplied by 5 days (Monday through Friday). The number of weeks would be 37 (185 days ÷ 5 days per week). The time not worked during summer, Christmas break, and spring break would be excluded from the work schedule and would not be considered vacation or holiday. Jobs in schools are not considered to be seasonal.

[121] Dataset: “Employer costs per hour worked for employee compensation and costs as a percent of total compensation: private industry teachers, March 2014.” U.S. Bureau of Labor Statistics. Sent to Just Facts on June 24, 2015.

“Compensation component … Total compensation [=] $44.63”

NOTE: Contact us for a copy of this dataset.

[122] CALCULATION: ($58.72 - $44.63) / 58.72 = 24%
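
NOTE: A one-line Python restatement of the calculation above, using the hourly compensation figures from footnotes 115 and 121.

```python
public_teacher_comp = 58.72     # state and local government teachers, cost per hour worked
private_teacher_comp = 44.63    # private industry teachers, cost per hour worked
print(f"{(public_teacher_comp - private_teacher_comp) / public_teacher_comp:.0%}")  # 24%
```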

[123] See these 5 footnotes for documentation that:

  • “Defined benefit” pension programs guarantee employees specified levels of benefits, regardless of how much money the employer has previously set aside to pay those benefits.
  • Most government employees receive defined benefit pensions and most private sector employees do not.
  • Many government pension plans are underfunded.

[124] See these 2 footnotes for documentation that retiree health benefits are common in the government sector and rare in the private sector.

[125] Report: “National Compensation Survey: Occupational Earnings in the United States, 2010.” U.S. Bureau of Labor Statistics, May 2011. <www.bls.gov>

Page 8: “Survey data were collected over a 13-month period for the 87 larger areas; for the 140 smaller areas, data were collected over a 4-month period. For each establishment in the survey, the data reflect the establishment’s most recent information at the time of collection. The data for the National bulletin were compiled from locality data collected between December 2009 and January 2011. The average reference period is July 2010.”

Page 9:

For hourly workers, scheduled hours worked per day and per week, exclusive of overtime, are recorded. For salaried workers, field economists record the typical number of hours actually worked because those exempt from overtime provisions often work beyond the assigned work schedule.

The number of weeks worked annually is determined as well. Because salaried workers who are exempt from overtime provisions often work beyond the assigned work schedule, the typical number of hours they actually worked is collected.

Page 58: “Table 4. Full-time private industry workers: Mean and median hourly, weekly, and annual earnings and mean weekly and annual hours … Primary, secondary, and special education school teachers … Annual … Mean hours [=] 1,560”

Page 93: “Table 5. Full-time State and local government workers: Mean and median hourly, weekly, and annual earnings and mean weekly and annual hours … Primary, secondary, and special education school teachers … Annual … Mean hours [=] 1,405”

CALCULATION: (1,560 - 1,405) / 1,405 = 11%
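
NOTE: A one-line Python restatement of the calculation above, using the mean annual hours from Tables 4 and 5.

```python
private_hours = 1_560   # full-time private school teachers, mean annual hours
public_hours = 1_405    # full-time public school teachers, mean annual hours
print(f"{(private_hours - public_hours) / public_hours:.0%}")  # 11%
```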


[126] Calculated with data from:

a) Dataset: “Table 211.10. Average salaries for full-time teachers in public and private elementary and secondary schools, by selected characteristics: 2011-12.” U.S. Department Of Education, National Center for Education Statistics, May 2013. <nces.ed.gov>

b) Dataset: “Table 211.60. Estimated average annual salary of teachers in public elementary and secondary schools, by state: Selected years, 1969-70 through 2012-13.” U.S. Department Of Education, National Center for Education Statistics, April 2013. <nces.ed.gov>

c) Dataset: “Table 236.55. Total and current expenditures per pupil in public elementary and secondary schools: Selected years, 1919-20 through 2011-12.” U.S. Department Of Education, National Center for Education Statistics, July 2014. <nces.ed.gov>

d) Dataset: “Table 106.70. Gross domestic product price index, Consumer Price Index, education price indexes, and federal budget composite deflator: Selected years, 1919 through 2013.” U.S. Department Of Education, National Center for Education Statistics, April 2014. <nces.ed.gov>

e) Report: “Employer Costs for Employee Compensation, Historical Listing, March 2004 – December 2014.” U.S. Bureau of Labor Statistics, March 11, 2015. <www.bls.gov>

Pages 105-106: “Table 7. State and local government workers, by occupational group: employer costs per hours worked for employee compensation and costs as a percentage of total compensation, 2004-2014 … Teachers”

NOTE: An Excel file containing the data and calculations is available upon request.

[127] Calculated with data from:

a) Dataset: “Table 211.10. Average salaries for full-time teachers in public and private elementary and secondary schools, by selected characteristics: 2011-12.” U.S. Department Of Education, National Center for Education Statistics, May 2013. <nces.ed.gov>

b) Dataset: “Table 211.60. Estimated average annual salary of teachers in public elementary and secondary schools, by state: Selected years, 1969-70 through 2012-13.” U.S. Department Of Education, National Center for Education Statistics, April 2013. <nces.ed.gov>

c) Dataset: “Table 236.55. Total and current expenditures per pupil in public elementary and secondary schools: Selected years, 1919-20 through 2011-12.” U.S. Department Of Education, National Center for Education Statistics, July 2014. <nces.ed.gov>

d) Dataset: “Table 106.70. Gross domestic product price index, Consumer Price Index, education price indexes, and federal budget composite deflator: Selected years, 1919 through 2013.” U.S. Department Of Education, National Center for Education Statistics, April 2014. <nces.ed.gov>

e) Report: “Employer Costs for Employee Compensation, Historical Listing, March 2004 – December 2014.” U.S. Bureau of Labor Statistics, March 11, 2015. <www.bls.gov>

Pages 105-106: “Table 7. State and local government workers, by occupational group: employer costs per hours worked for employee compensation and costs as a percentage of total compensation, 2004-2014 … Teachers”

NOTE: An Excel file containing the data and calculations is available upon request.

[128] See these 4 footnotes for documentation that the following items are excluded from employee compensation data published by the Bureau of Labor Statistics:

  • Unfunded pension liabilities
  • Post-employment expenses of worker compensation, such as retirement health benefits

[129] The next two footnotes contain studies of teacher work hours from the U.S. Bureau of Labor Statistics. Unlike less rigorous studies of working hours, these studies are based on comprehensive, detailed records. The first of these studies employed field economists to measure actual working hours, as opposed to relying solely upon assigned work schedules. The second study is based on teacher journals of work hours, as opposed to generalized questions about how long they estimate they work.

The first study found that full-time public school teachers work an average of 1,405 hours per year and full-time private school teachers work an average of 1,560 hours per year. The second study found that full-time teachers (public and private) work an average of 39.2 hours per week during the weeks in which they work. The U.S. Department of Labor estimates that full-time teachers work an average of 37 or 38 weeks per year. At 39.2 hours per week, this amounts to 1,450 to 1,490 hours per year.

In keeping with Just Facts’ Standards of Credibility, Just Facts is citing the highest of these numbers in order to give “preference to figures that are contrary to our viewpoints.” To triple-check these two studies, Just Facts conducted a detailed time study of a full-time public school teacher. This is shown in the third footnote below.

[130] Report: “National Compensation Survey: Occupational Earnings in the United States, 2010.” U.S. Bureau of Labor Statistics, May 2011. <www.bls.gov>

Page 8: “Survey data were collected over a 13-month period for the 87 larger areas; for the 140 smaller areas, data were collected over a 4-month period. For each establishment in the survey, the data reflect the establishment’s most recent information at the time of collection. The data for the National bulletin were compiled from locality data collected between December 2009 and January 2011. The average reference period is July 2010.”

Page 9:

For hourly workers, scheduled hours worked per day and per week, exclusive of overtime, are recorded. For salaried workers, field economists record the typical number of hours actually worked because those exempt from overtime provisions often work beyond the assigned work schedule.

The number of weeks worked annually is determined as well. Because salaried workers who are exempt from overtime provisions often work beyond the assigned work schedule, the typical number of hours they actually worked is collected.

Page 58: “Table 4. Full-time private industry workers: Mean and median hourly, weekly, and annual earnings and mean weekly and annual hours … Primary, secondary, and special education school teachers … Annual … Mean hours [=] 1,560”

Page 93: “Table 5. Full-time State and local government workers: Mean and median hourly, weekly, and annual earnings and mean weekly and annual hours … Primary, secondary, and special education school teachers … Annual … Mean hours [=] 1,405”

[131] Report: “Teachers’ work patterns: when, where, and how much do U.S. teachers work?” By Rachel Krantz-Kent (economist in the Division of Labor Force Statistics, U.S. Bureau of Labor Statistics). Monthly Labor Review, March 2008. Pages 52-59. <www.bls.gov>

Page 1:

In the ATUS (American Time Use Survey), interviewers collect data in a time diary format, in which survey participants provide information about activities that they engaged in “yesterday.” Because of the way in which the data are collected, it is possible to identify and quantify the work that teachers do at home, at a workplace, and at other locations and to examine the data by day of the week and time of day. Data are available for nearly every day of 2003–06, which is the reference period for this analysis.

In the presentation that follows, “teachers” refers to persons whose main job is teaching preschool-to–high school students. Persons in the “other professionals” occupations also are classified by their main job. With the exception of chart 1, all estimates presented are restricted to persons who were employed during the week prior to their interview and who did some work during that period. Thus, a teacher who was on summer or semester break during the week of the survey is not included in this analysis. Unless otherwise specified, data pertain to persons who work full time; that is, they usually work 35 hours or more per week. Estimates of work hours refer to persons’ main job only.

Page 59: “Full-time teachers worked nearly 3 more hours per day than part-time teachers. On average for all days of the week, full-time teachers worked 5.6 hours per day and part-time teachers worked 2.8 hours per day.”

NOTE: This survey found that during the weeks in which full-time teachers work, they work an average of 5.6 hours per day (including weekends). This amounts to 39.2 hours per week. Per the U.S. Department of Labor, full-time teachers work an average of 37 or 38 weeks per year.† At 39.2 hours per week, this amounts to 1,450 to 1,490 hours per year.

† BLS Handbook of Methods. U.S. Bureau of Labor Statistics. Chapter 8: “National Compensation Measures.” Last revised July 10, 2013. <www.bls.gov>. Page 16: “Primary, secondary, and special education teachers typically have a work schedule of 37 or 38 weeks per year.”
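
NOTE: The following Python sketch restates the annual-hours estimate in the note above; the only inputs are the 5.6 hours per day from the ATUS figures and the 37- or 38-week work year from the BLS Handbook of Methods.

```python
hours_per_day = 5.6                       # full-time teachers, average for all days of the week
hours_per_week = hours_per_day * 7
print(round(hours_per_week, 1))           # 39.2

for weeks_per_year in (37, 38):           # typical teacher work year (BLS Handbook of Methods)
    print(round(hours_per_week * weeks_per_year))  # 1450 and 1490
```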

[132] To triple-check the two studies above, in April 2015 Just Facts conducted a detailed time study of a full-time public school teacher who works an average of 3.3 hours per workday beyond contractually required work hours. This teacher:

  • arrives at school 20 minutes before the required contract time to set up and plan.
  • stays an extra hour per day after the required contract time to help students.
  • spends an average of two hours per workday grading tests and preparing lessons.
  • coaches two sports teams.

This teacher works 1,516 hours per year not including coaching, 1,759 hours including one sport, and 1,913 hours including two sports. These figures are higher than but consistent with the studies above given the extraordinary commitment of this particular teacher. Beyond working 3.3 extra unpaid hours per workday, this teacher earns approximately $16,000 per year in supplemental contracts, as opposed to the national average of $1,092.†

† Calculated with data from: “Table 211.10. Average salaries for full-time teachers in public and private elementary and secondary schools, by selected characteristics: 2011-12.” U.S. Department Of Education, National Center for Education Statistics, May 2013. <nces.ed.gov>. An Excel file containing the data and calculations is available upon request.

[133] Report: “National Compensation Survey: Occupational Earnings in the United States, 2010.” U.S. Bureau of Labor Statistics, May 2011. <www.bls.gov>

Page 8: “Survey data were collected over a 13-month period for the 87 larger areas; for the 140 smaller areas, data were collected over a 4-month period. For each establishment in the survey, the data reflect the establishment’s most recent information at the time of collection. The data for the National bulletin were compiled from locality data collected between December 2009 and January 2011. The average reference period is July 2010.”

Page 9:

For hourly workers, scheduled hours worked per day and per week, exclusive of overtime, are recorded. For salaried workers, field economists record the typical number of hours actually worked because those exempt from overtime provisions often work beyond the assigned work schedule.

The number of weeks worked annually is determined as well. Because salaried workers who are exempt from overtime provisions often work beyond the assigned work schedule, the typical number of hours they actually worked is collected.

Page 49: “Table 4. Full-time private industry workers: Mean and median hourly, weekly, and annual earnings and mean weekly and annual hours … All workers … Annual … Mean hours [=] 2,045”

[134] Calculated with data from:

a) Dataset: “Table 211.10. Average salaries for full-time teachers in public and private elementary and secondary schools, by selected characteristics: 2011-12.” U.S. Department Of Education, National Center for Education Statistics, May 2013. <nces.ed.gov>

b) Dataset: “Table 211.60. Estimated average annual salary of teachers in public elementary and secondary schools, by state: Selected years, 1969-70 through 2012-13.” U.S. Department Of Education, National Center for Education Statistics, April 2013. <nces.ed.gov>

c) Dataset: “Table 236.55. Total and current expenditures per pupil in public elementary and secondary schools: Selected years, 1919-20 through 2011-12.” U.S. Department Of Education, National Center for Education Statistics, July 2014. <nces.ed.gov>

d) Dataset: “Table 106.70. Gross domestic product price index, Consumer Price Index, education price indexes, and federal budget composite deflator: Selected years, 1919 through 2013.” U.S. Department Of Education, National Center for Education Statistics, April 2014. <nces.ed.gov>

e) Report: “Employer Costs for Employee Compensation, Historical Listing, March 2004 – December 2014.” U.S. Bureau of Labor Statistics, March 11, 2015. <www.bls.gov>

Pages 105-106: “Table 7. State and local government workers, by occupational group: employer costs per hours worked for employee compensation and costs as a percentage of total compensation, 2004-2014 … Teachers”

f) Report: “National Compensation Survey: Occupational Earnings in the United States, 2010.” U.S. Bureau of Labor Statistics, May 2011. <www.bls.gov>. Pages 8, 9, 49, and 93.

NOTE: An Excel file containing the data and calculations is available upon request.

[135] See these 4 footnotes for documentation that the following items are excluded from employee compensation data published by the Bureau of Labor Statistics:

  • Unfunded pension liabilities
  • Post-employment expenses of worker compensation, such as retirement health benefits

[136] Calculated with data from:

a) Dataset: “Table 211.10. Average salaries for full-time teachers in public and private elementary and secondary schools, by selected characteristics: 2011-12.” U.S. Department Of Education, National Center for Education Statistics, May 2013. <nces.ed.gov>

b) Dataset: “Table 211.60. Estimated average annual salary of teachers in public elementary and secondary schools, by state: Selected years, 1969-70 through 2012-13.” U.S. Department Of Education, National Center for Education Statistics, April 2013. <nces.ed.gov>

c) Dataset: “Table 236.55. Total and current expenditures per pupil in public elementary and secondary schools: Selected years, 1919-20 through 2011-12.” U.S. Department Of Education, National Center for Education Statistics, July 2014. <nces.ed.gov>

d) Dataset: “Table 106.70. Gross domestic product price index, Consumer Price Index, education price indexes, and federal budget composite deflator: Selected years, 1919 through 2013.” U.S. Department Of Education, National Center for Education Statistics, April 2014. <nces.ed.gov>

e) Report: “Employer Costs for Employee Compensation, Historical Listing, March 2004 – December 2014.” U.S. Bureau of Labor Statistics, March 11, 2015. <www.bls.gov>

Pages 105-106: “Table 7. State and local government workers, by occupational group: employer costs per hours worked for employee compensation and costs as a percentage of total compensation, 2004-2014 … Teachers”

f) Report: “National Compensation Survey: Occupational Earnings in the United States, 2010.” U.S. Bureau of Labor Statistics, May 2011. <www.bls.gov>. Pages 8, 9, 49, and 93.

NOTE: An Excel file containing the data and calculations is available upon request.

[137] See these 4 footnotes for documentation that the following items are excluded from employee compensation data published by the Bureau of Labor Statistics:

  • Unfunded pension liabilities
  • Post-employment expenses of worker compensation, such as retirement health benefits

[138] Article: “Educational Support Services.” By Stephen T. Schroth (Knox College). Encyclopedia of Human Services and Diversity. Edited by Linwood H. Cousins. Sage Publications, 2014.

Page 447: “All 50 states and the District of Columbia provide public education for children from kindergarten through grade 12. Additionally, many states also fund preschool programs that permit some children as young as 3 years of age to attend classes.”

[139] Dataset: “Table 203.90. Average daily attendance (ADA) as a percentage of total enrollment, school day length, and school year length in public schools, by school level and state: 2007-08 and 2011-12.” U.S. Department Of Education, National Center for Education Statistics, May 2013. <nces.ed.gov>

“2011-12 … United States … Average hours in school day [=] 6.7 … Average days in school year [=] 179 … Average hours in school year [=] 1,203”

[140] Calculated with data from:

a) Dataset: “Table 105.20. Enrollment in educational institutions, by level and control of institution, enrollment level, and attendance status and sex of student: Selected years, fall 1990 through fall 2023.” U.S. Department Of Education, National Center for Education Statistics, January 2014. <nces.ed.gov>

“Elementary and secondary schools … 2012 … Public [=] 49,652 … Private [=] 5,181 …

Includes enrollments in local public school systems and in most private schools (religiously affiliated and nonsectarian). Excludes homeschooled children who were not also enrolled in public and private schools. Private elementary enrollment includes preprimary students in schools offering kindergarten or higher grades.”

b) Dataset: “Table 206.10. Number and percentage of homeschooled students ages 5 through 17 with a grade equivalent of kindergarten through 12th grade, by selected child, parent, and household characteristics: 2003, 2007, and 2012.” U.S. Department Of Education, National Center for Education Statistics, November 2014. <nces.ed.gov>

“2012 … Number home-schooled (in thousands) [=] 1,773”

CALCULATIONS:

49,652 public + 5,181 private + 1,773 homeschooled = 56,606 total

49,652 public / 56,606 total = 88%

5,181 private / 56,606 total = 9%

1,773 homeschooled / 56,606 total = 3%

NOTE: The word “approximately” is used because the counts from public and private schools include some preprimary students.
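
NOTE: As an illustration, the shares above can be reproduced with a few lines of Python. The sketch below is not part of the cited datasets; it simply restates the enrollment counts (in thousands) already quoted in this footnote, and the variable names are illustrative.

# Python sketch: enrollment shares from the counts quoted above (thousands of students)
public = 49_652
private = 5_181
homeschooled = 1_773
total = public + private + homeschooled            # 56,606
for label, count in [("public", public), ("private", private), ("homeschooled", homeschooled)]:
    print(f"{label}: {count / total:.0%}")         # public: 88%, private: 9%, homeschooled: 3%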

[141] Report: “Documentation to the NCES Common Core of Data Public Elementary/Secondary School Universe Survey: School Year 2010–11, Version Provisional 2a.” U.S. Department Of Education, National Center for Education Statistics, September 2012. <nces.ed.gov>

Page C-6: “Elementary A general level of instruction classified by state and local practice as elementary, composed of any span of grades not above grade 8; preschool or kindergarten included only if it is an integral part of an elementary school or a regularly established school system.”

Page C-14: “Secondary The general level of instruction classified by state and local practice as secondary and composed of any span of grades beginning with the next grade following the elementary grades and ending with or below grade 12.”

[142] Dataset: “Table 219.40. Public high school graduates and averaged freshman graduation rate, by race/ethnicity and state or jurisdiction: 2009-10.” U.S. Department Of Education, National Center for Education Statistics, November 2012. <nces.ed.gov>

“The AFGR [averaged freshman graduation rate] provides an estimate of the percentage of students who receive a regular diploma within 4 years of entering 9th grade. The rate uses aggregate student enrollment data to estimate the size of an incoming freshman class and aggregate counts of the number of diplomas awarded 4 years later.”

[143] Calculated with the dataset: “PINC-03. Educational Attainment--People 25 Years Old and Over, by Total Money Earnings in 2013, Work Experience in 2013, Age, Race, Hispanic Origin, and Sex; Current Population Survey 2014 Annual Social and Economic Supplement.” U.S. Census Bureau, October 2, 2014. <www.census.gov>

NOTE: An Excel file containing the data and calculations is available upon request.

[144] Report: “Income and Poverty in the United States: 2013.” By Carmen DeNavas-Walt and Bernadette D. Proctor. U.S. Census Bureau, September 2014. <www.census.gov>

Page 4: “The income and poverty estimates shown in this report are based solely on money income before taxes and do not include the value of noncash benefits, such as those provided by the Supplemental Nutrition Assistance Program (SNAP), Medicare, Medicaid, public housing, or employer-provided fringe benefits.”

[145] Calculated with the dataset: “PINC-03. Educational Attainment--People 25 Years Old and Over, by Total Money Earnings in 2013, Work Experience in 2013, Age, Race, Hispanic Origin, and Sex; Current Population Survey 2014 Annual Social and Economic Supplement.” U.S. Census Bureau, October 2, 2014. <www.census.gov>

NOTE: An Excel file containing the data and calculations is available upon request.

[146] Report: “Income and Poverty in the United States: 2013.” By Carmen DeNavas-Walt and Bernadette D. Proctor. U.S. Census Bureau, September 2014. <www.census.gov>

Page 4: “The income and poverty estimates shown in this report are based solely on money income before taxes and do not include the value of noncash benefits, such as those provided by the Supplemental Nutrition Assistance Program (SNAP), Medicare, Medicaid, public housing, or employer-provided fringe benefits.”

[147] Report: “The Condition of College & Career Readiness 2014.” ACT, 2014. <www.act.org>

Page 3: “Nationally, 1,845,787 students—or 57% of the 2014 U.S. graduating class—took the ACT.”

Page 4: “Percent of 2014 ACT-Tested High School Graduates Meeting ACT College Readiness Benchmarks by Subject … English [=] 64 … Reading [=] 44 … Mathematics [=] 43 … Science [=] 37 … All Four Subjects [=] 26”

[148] Report: “The Condition of College & Career Readiness 2014.” ACT, 2014. <www.act.org>

Page 6: “Percent of 2010–2014 ACT-Tested High School Graduates Meeting Three or More Benchmarks by Race/Ethnicity … 2014 … Asian [=] 57 White [=] 49 … Pacific Islander [=] 24 … Hispanic [=] 23 American Indian [=] 18 … African American [=] 11”

[149] Calculated with the dataset: “Table 602.10. Average reading literacy scale scores of fourth-graders and percentage whose schools emphasize reading skills and strategies at or before second grade or at third grade, by sex and country or other education system: 2001, 2006, and 2011.” U.S. Department Of Education, National Center for Education Statistics, February 2013. <nces.ed.gov>

NOTE: An Excel file containing the data and calculations is available upon request.

[150] Calculated with the dataset: “Table 602.40. Average reading literacy, mathematics literacy, and science literacy scores of 15-year-old students, by sex and country or other education system: 2009 and 2012.” U.S. Department Of Education, National Center for Education Statistics, July 2014. <nces.ed.gov>

NOTE: An Excel file containing the data and calculations is available upon request.

[151] Webpage: “List of OECD Member countries - Ratification of the Convention on the OECD.” Organization for Economic Cooperation and Development. Accessed May 8, 2013 at <www.oecd.org>

“Australia, Austria, Belgium, Canada, Chile, Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Iceland, Ireland, Israel, Italy, Japan, Korea [South], Luxembourg, Mexico, Netherlands, New Zealand, Norway, Poland, Portugal, Slovak Republic, Slovenia, Spain, Sweden, Switzerland, Turkey, United Kingdom, United States”

[152] Book: Beyond Economic Growth: An Introduction to Sustainable Development, Second Edition. By Tatyana P. Soubbotina. World Bank, 2004. <www.worldbank.org>

Pages 132-133:

Developed countries (industrial countries, industrially advanced countries). High-income countries, in which most people have a high standard of living. Sometimes also defined as countries with a large stock of physical capital, in which most people undertake highly specialized activities. According to the World Bank classification, these include all high-income economies except Hong Kong (China), Israel, Kuwait, Singapore, and the United Arab Emirates. Depending on who defines them, developed countries may also include middle-income countries with transition economies, because these countries are highly industrialized. Developed countries contain about 15 percent of the world’s population. They are also sometimes referred to as “the North.”

Page 141:

Organisation for Economic Cooperation and Development (OECD). An organization that coordinates policy among developed countries. OECD member countries exchange economic data and create unified policies to maximize their countries’ economic growth and help nonmember countries develop more rapidly. The OECD arose from the Organisation for European Economic Cooperation (OEEC), which was created in 1948 to administer the Marshall Plan in Europe. In 1960, when the Marshall Plan was completed, Canada, Spain, and the United States joined OEEC members to form the OECD.

[153] Dataset: “Table 602.10. Average reading literacy scale scores of fourth-graders and percentage whose schools emphasize reading skills and strategies at or before second grade or at third grade, by sex and country or other education system: 2001, 2006, and 2011.” U.S. Department Of Education, National Center for Education Statistics, February 2013. <nces.ed.gov>

[154] Dataset: “Table 602.40. Average reading literacy, mathematics literacy, and science literacy scores of 15-year-old students, by sex and country or other education system: 2009 and 2012.” U.S. Department Of Education, National Center for Education Statistics, July 2014. <nces.ed.gov>

[155] Calculated with the dataset: “Table 602.20. Average fourth-grade scores and annual instructional time in mathematics and science, by country or other education system: 2011.” U.S. Department Of Education, National Center for Education Statistics, December 2012. <nces.ed.gov>

NOTE: An Excel file containing the data and calculations is available upon request.

[156] Calculated with the dataset: “Table 602.40. Average reading literacy, mathematics literacy, and science literacy scores of 15-year-old students, by sex and country or other education system: 2009 and 2012.” U.S. Department Of Education, National Center for Education Statistics, July 2014. <nces.ed.gov>

NOTE: An Excel file containing the data and calculations is available upon request.

[157] Webpage: “List of OECD Member countries - Ratification of the Convention on the OECD.” Organization for Economic Cooperation and Development. Accessed May 8, 2013 at <www.oecd.org>

“Australia, Austria, Belgium, Canada, Chile, Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Iceland, Ireland, Israel, Italy, Japan, Korea [South], Luxembourg, Mexico, Netherlands, New Zealand, Norway, Poland, Portugal, Slovak Republic, Slovenia, Spain, Sweden, Switzerland, Turkey, United Kingdom, United States”

[158] Book: Beyond Economic Growth: An Introduction to Sustainable Development, Second Edition. By Tatyana P. Soubbotina. World Bank, 2004. <www.worldbank.org>

Pages 132-133:

Developed countries (industrial countries, industrially advanced countries). High-income countries, in which most people have a high standard of living. Sometimes also defined as countries with a large stock of physical capital, in which most people undertake highly specialized activities. According to the World Bank classification, these include all high-income economies except Hong Kong (China), Israel, Kuwait, Singapore, and the United Arab Emirates. Depending on who defines them, developed countries may also include middle-income countries with transition economies, because these countries are highly industrialized. Developed countries contain about 15 percent of the world’s population. They are also sometimes referred to as “the North.”

Page 141:

Organisation for Economic Cooperation and Development (OECD). An organization that coordinates policy among developed countries. OECD member countries exchange economic data and create unified policies to maximize their countries’ economic growth and help nonmember countries develop more rapidly. The OECD arose from the Organisation for European Economic Cooperation (OEEC), which was created in 1948 to administer the Marshall Plan in Europe. In 1960, when the Marshall Plan was completed, Canada, Spain, and the United States joined OEEC members to form the OECD.

[159] Dataset: “Table 602.20. Average fourth-grade scores and annual instructional time in mathematics and science, by country or other education system: 2011.” U.S. Department Of Education, National Center for Education Statistics, December 2012. <nces.ed.gov>

[160] Dataset: “Table 602.40. Average reading literacy, mathematics literacy, and science literacy scores of 15-year-old students, by sex and country or other education system: 2009 and 2012.” U.S. Department Of Education, National Center for Education Statistics, July 2014. <nces.ed.gov>

[161] Article: “U.S. education spending tops global list, study shows.” Associated Press, June 25, 2013. <www.cbsnews.com>

“When people talk about other countries out-educating the United States, it needs to be remembered that those other nations are out-investing us in education as well,” said Randi Weingarten, president of the American Federation of Teachers, a labor union.

[162] Calculated with the dataset: “Table 605.10. Gross domestic product per capita and public and private education expenditures per full-time-equivalent (FTE) student, by level of education and country: Selected years, 2005 through 2011.” U.S. Department Of Education, National Center for Education Statistics, August 2014. <nces.ed.gov>

NOTES:

- An Excel file containing the data and calculations is available upon request.

- Data for Canada and Greece were unavailable.

[163] Webpage: “List of OECD Member countries - Ratification of the Convention on the OECD.” Organization for Economic Cooperation and Development. Accessed May 8, 2013 at <www.oecd.org>

“Australia, Austria, Belgium, Canada, Chile, Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Iceland, Ireland, Israel, Italy, Japan, Korea [South], Luxembourg, Mexico, Netherlands, New Zealand, Norway, Poland, Portugal, Slovak Republic, Slovenia, Spain, Sweden, Switzerland, Turkey, United Kingdom, United States”

[164] Book: Beyond Economic Growth: An Introduction to Sustainable Development, Second Edition. By Tatyana P. Soubbotina. World Bank, 2004. <www.worldbank.org>

Pages 132-133:

Developed countries (industrial countries, industrially advanced countries). High-income countries, in which most people have a high standard of living. Sometimes also defined as countries with a large stock of physical capital, in which most people undertake highly specialized activities. According to the World Bank classification, these include all high-income economies except Hong Kong (China), Israel, Kuwait, Singapore, and the United Arab Emirates. Depending on who defines them, developed countries may also include middle-income countries with transition economies, because these countries are highly industrialized. Developed countries contain about 15 percent of the world’s population. They are also sometimes referred to as “the North.”

Page 141:

Organisation for Economic Cooperation and Development (OECD). An organization that coordinates policy among developed countries. OECD member countries exchange economic data and create unified policies to maximize their countries’ economic growth and help nonmember countries develop more rapidly. The OECD arose from the Organisation for European Economic Cooperation (OEEC), which was created in 1948 to administer the Marshall Plan in Europe. In 1960, when the Marshall Plan was completed, Canada, Spain, and the United States joined OEEC members to form the OECD.

[165] Calculated with data from:

a) Dataset: “Table 605.10. Gross domestic product per capita and public and private education expenditures per full-time-equivalent (FTE) student, by level of education and country: Selected years, 2005 through 2011.” U.S. Department Of Education, National Center for Education Statistics, August 2014. <nces.ed.gov>

b) Dataset: “Table 602.40. Average reading literacy, mathematics literacy, and science literacy scores of 15-year-old students, by sex and country or other education system: 2009 and 2012.” U.S. Department Of Education, National Center for Education Statistics, July 2014. <nces.ed.gov>

NOTES:

- An Excel file containing the data and calculations is available upon request.

- Data for Canada and Greece were unavailable.

[166] Webpage: “List of OECD Member countries - Ratification of the Convention on the OECD.” Organization for Economic Cooperation and Development. Accessed May 8, 2013 at <www.oecd.org>

“Australia, Austria, Belgium, Canada, Chile, Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Iceland, Ireland, Israel, Italy, Japan, Korea [South], Luxembourg, Mexico, Netherlands, New Zealand, Norway, Poland, Portugal, Slovak Republic, Slovenia, Spain, Sweden, Switzerland, Turkey, United Kingdom, United States”

[167] Book: Beyond Economic Growth: An Introduction to Sustainable Development, Second Edition. By Tatyana P. Soubbotina. World Bank, 2004. <www.worldbank.org>

Pages 132-133:

Developed countries (industrial countries, industrially advanced countries). High-income countries, in which most people have a high standard of living. Sometimes also defined as countries with a large stock of physical capital, in which most people undertake highly specialized activities. According to the World Bank classification, these include all high-income economies except Hong Kong (China), Israel, Kuwait, Singapore, and the United Arab Emirates. Depending on who defines them, developed countries may also include middle-income countries with transition economies, because these countries are highly industrialized. Developed countries contain about 15 percent of the world’s population. They are also sometimes referred to as “the North.”

Page 141:

Organisation for Economic Cooperation and Development (OECD). An organization that coordinates policy among developed countries. OECD member countries exchange economic data and create unified policies to maximize their countries’ economic growth and help nonmember countries develop more rapidly. The OECD arose from the Organisation for European Economic Cooperation (OEEC), which was created in 1948 to administer the Marshall Plan in Europe. In 1960, when the Marshall Plan was completed, Canada, Spain, and the United States joined OEEC members to form the OECD.

[168] “Nineteenth Annual Report of the Board of Education of Jersey City, N.J. for the Year Ending November 30, 1885.” By the Jersey City Board of Education, Department of Public Instruction. Sunday Tattler Print, 1886. <books.google.com>

Page 8:

Cost Per Pupil, Based on Average Attendance, Average Register, Total Enrollment, For the Year …

For all the schools

Average Attendance [=] 14,926

Average Register [=] 16,186

Total Enrollment [=] 24,446

Cost per Pupil, Based on Average Attendance [=] $13.24

Cost per Pupil, Based on Average Register [=] $12.21

Cost per Pupil, Based on Average Total Enrollment [=] $8.09

[169] Webpage: “Consumer Price Index (Estimate) 1800-.” Federal Reserve Bank of Minneapolis. Accessed July 11, 2015 at <www.minneapolisfed.org>

“Annual Average [CPI] … 1885 [=] 28 … 2014 [=] 711.4”

CALCULATION: $13.24 × (711.4 / 28) = $336
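
NOTE: A minimal Python sketch of the inflation adjustment above, using only the figures quoted in this footnote and in footnote [168]; it is provided for illustration and is not part of the cited sources.

# Python sketch: adjust the 1885 cost per pupil to 2014 dollars with the CPI values quoted above
cost_then = 13.24            # cost per pupil, average-attendance basis (footnote [168])
cpi_then = 28                # annual average CPI estimate quoted above
cpi_2014 = 711.4
cost_2014 = cost_then * (cpi_2014 / cpi_then)
print(round(cost_2014))      # 336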

[170] “Nineteenth Annual Report of the Board of Education of Jersey City, N.J. for the Year Ending November 30, 1885.” By the Jersey City Board of Education, Department of Public Instruction. Sunday Tattler Print, 1886. <books.google.com>

Pages 19-23:

The following is the list of questions used at the last examination for entrance to the High School, and the names and ranks of the successful candidates….

[Images: 1885 High School Entrance Exam, pages 1 through 9]

Page 28: “The rules for the Government of the High School provide that all examinations for admission shall be in writing, and shall be conducted by the Principal and Assistant Teachers of the High School under the supervision of the Committee on High School and the School Superintendent. The Committee shall fix a standard for all examinations, which shall not be less than 75 percent of the maximum credits attainable.”

[171] Dataset: “Table 236.55. Total and current expenditures per pupil in public elementary and secondary schools: Selected years, 1919-20 through 2011-12.” U.S. Department Of Education, National Center for Education Statistics, July 2014. <nces.ed.gov>

“Expenditure per pupil in average daily attendance … Total expenditure … 1919-20 [=] $788 … 2011-12 [=] $13,210”

[172] See these 13 footnotes for documentation that the following items are excluded from spending data published by the National Center for Education Statistics:

  • State administration spending
  • Unfunded pension benefits
  • Post-employment non-pension benefits like health insurance

[173] “Glossary: Integrated Postsecondary Education Data System.” U.S. Department of Education, National Center for Education Statistics. Accessed May 12, 2015 at <nces.ed.gov>

Private for-profit institution A private institution in which the individual(s) or agency in control receives compensation other than wages, rent, or other expenses for the assumption of risk.

Private not-for-profit institution A private institution in which the individual(s) or agency in control receives no compensation, other than wages, rent, or other expenses for the assumption of risk. These include both independent not-for-profit schools and those affiliated with a religious organization.

Public institution An educational institution whose programs and activities are operated by publicly elected or appointed school officials and which is supported primarily by public funds.

[174] Dataset: “Table 334.10. Expenditures of public degree-granting postsecondary institutions, by purpose of expenditure and level of institution: 2006-07 through 2012-13.” U.S. Department Of Education, National Center for Education Statistics, January 2015. <nces.ed.gov>

[175] Dataset: “Table 334.30. Total expenditures of private nonprofit degree-granting postsecondary institutions, by purpose and level of institution: 1999-2000 through 2012-13.” U.S. Department Of Education, National Center for Education Statistics, January 2015. <nces.ed.gov>

[176] Dataset: “Table 334.50. Total expenditures of private for-profit degree-granting postsecondary institutions, by purpose and level of institution: 1999-2000 through 2012-13.” U.S. Department Of Education, National Center for Education Statistics, January 2015. <nces.ed.gov>

[177] “Glossary: Integrated Postsecondary Education Data System.” U.S. Department of Education, National Center for Education Statistics. Accessed June 18, 2015 at <nces.ed.gov>

Instruction A functional expense category that includes expenses of the colleges, schools, departments, and other instructional divisions of the institution and expenses for departmental research and public service that are not separately budgeted. Includes general academic instruction, occupational and vocational instruction, community education, preparatory and adult basic education, and regular, special, and extension sessions. Also includes expenses for both credit and non-credit activities. Excludes expenses for academic administration where the primary function is administration (e.g., academic deans). Information technology expenses related to instructional activities if the institution separately budgets and expenses information technology resources are included (otherwise these expenses are included in academic support). Institutions include actual or allocated costs for operation and maintenance of plant, interest, and depreciation.

[178] “Glossary: Integrated Postsecondary Education Data System.” U.S. Department of Education, National Center for Education Statistics. Accessed June 18, 2015 at <nces.ed.gov>

Research A functional expense category that includes expenses for activities specifically organized to produce research outcomes and commissioned by an agency either external to the institution or separately budgeted by an organizational unit within the institution. The category includes institutes and research centers, and individual and project research. This function does not include nonresearch sponsored programs (e.g., training programs). Also included are information technology expenses related to research activities if the institution separately budgets and expenses information technology resources (otherwise these expenses are included in academic support.) Institutions include actual or allocated costs for operation and maintenance of plant, interest, and depreciation.

[179] “Glossary: Integrated Postsecondary Education Data System.” U.S. Department of Education, National Center for Education Statistics. Accessed June 18, 2015 at <nces.ed.gov>

Public service A functional expense category that includes expenses for activities established primarily to provide noninstructional services beneficial to individuals and groups external to the institution. Examples are conferences, institutes, general advisory service, reference bureaus, and similar services provided to particular sectors of the community. This function includes expenses for community services, cooperative extension services, and public broadcasting services. Also includes information technology expenses related to the public service activities if the institution separately budgets and expenses information technology resources (otherwise these expenses are included in academic support). Institutions include actual or allocated costs

[180] “Glossary: Integrated Postsecondary Education Data System.” U.S. Department of Education, National Center for Education Statistics. Accessed June 18, 2015 at <nces.ed.gov>

Academic support A functional expense category that includes expenses of activities and services that support the institution’s primary missions of instruction, research, and public service. It includes the retention, preservation, and display of educational materials (for example, libraries, museums, and galleries); organized activities that provide support services to the academic functions of the institution (such as a demonstration school associated with a college of education or veterinary and dental clinics if their primary purpose is to support the instructional program); media such as audiovisual services; academic administration (including academic deans but not department chairpersons); and formally organized and separately budgeted academic personnel development and course and curriculum development expenses. Also included are information technology expenses related to academic support activities; if an institution does not separately budget and expense information technology resources, the costs associated with the three primary programs will be applied to this function and the remainder to institutional support. Institutions include actual or allocated costs for operation and maintenance of plant, interest, and depreciation.

[181] “Glossary: Integrated Postsecondary Education Data System.” U.S. Department of Education, National Center for Education Statistics. Accessed June 18, 2015 at <nces.ed.gov>

Student services A functional expense category that includes expenses for admissions, registrar activities, and activities whose primary purpose is to contribute to students emotional and physical well-being and to their intellectual, cultural, and social development outside the context of the formal instructional program. Examples include student activities, cultural events, student newspapers, intramural athletics, student organizations, supplemental instruction outside the normal administration, and student records. Intercollegiate athletics and student health services may also be included except when operated as self-supporting auxiliary enterprises. Also may include information technology expenses related to student service activities if the institution separately budgets and expenses information technology resources (otherwise these expenses are included in institutional support.) Institutions include actual or allocated costs for operation and maintenance of plant, interest, and depreciation.

[182] “Glossary: Integrated Postsecondary Education Data System.” U.S. Department of Education, National Center for Education Statistics. Accessed June 18, 2015 at <nces.ed.gov>

Institutional support A functional expense category that includes expenses for the day-to-day operational support of the institution. Includes expenses for general administrative services, central executive-level activities concerned with management and long range planning, legal and fiscal operations, space management, employee personnel and records, logistical services such as purchasing and printing, and public relations and development. Also includes information technology expenses related to institutional support activities. If an institution does not separately budget and expense information technology resources, the IT costs associated with student services and operation and maintenance of plant will also be applied to this function.

[183] “Glossary: Integrated Postsecondary Education Data System.” U.S. Department of Education, National Center for Education Statistics. Accessed June 18, 2015 at <nces.ed.gov>

Hospital services Expenses associated with a hospital operated by the postsecondary institution (but not as a component unit) and reported as a part of the institution. This classification includes nursing expenses, other professional services, general services, administrative services, and fiscal services. Also included are information technology expenses, actual or allocated costs for operation and maintenance of plant, interest and depreciation related to hospital capital assets.

Hospitals (revenues) Revenues generated by a hospital operated by the postsecondary institution. Includes gifts, grants, appropriations, research revenues, endowment income, and revenues of health clinics that are part of the hospital unless such clinics are part of the student health services program. Sales and service revenues are included net of patient contractual allowances. Revenues associated with the medical school are included elsewhere. Also includes all amounts appropriated by governments (federal, state, local) for the operation of hospitals.

[184] “Glossary: Integrated Postsecondary Education Data System.” U.S. Department of Education, National Center for Education Statistics. Accessed June 18, 2015 at <nces.ed.gov>

Auxiliary enterprises expenses Expenses for essentially self-supporting operations of the institution that exist to furnish a service to students, faculty, or staff, and that charge a fee that is directly related to, although not necessarily equal to, the cost of the service. Examples are residence halls, food services, student health services, intercollegiate athletics (only if essentially self-supporting), college unions, college stores, faculty and staff parking, and faculty housing. Institutions include actual or allocated costs for operation and maintenance of plant, interest and depreciation.

Auxiliary enterprises revenues Revenues generated by or collected from the auxiliary enterprise operations of the institution that exist to furnish a service to students, faculty, or staff, and that charge a fee that is directly related to, although not necessarily equal to, the cost of the service. Auxiliary enterprises are managed as essentially self-supporting activities. Examples are residence halls, food services, student health services, intercollegiate athletics, college unions, college stores, and movie theaters.

[185] “Glossary: Integrated Postsecondary Education Data System.” U.S. Department of Education, National Center for Education Statistics. Accessed June 18, 2015 at <nces.ed.gov>

Operation and maintenance of plant A functional expense category that includes expenses for operations established to provide service and maintenance related to campus grounds and facilities used for educational and general purposes. Specific expenses include utilities, fire protection, property insurance, and similar items. This function does not include amounts charged to auxiliary enterprises, hospitals, and independent operations. Also includes information technology expenses related to operation and maintenance of plant activities if the institution separately budgets and expenses information technology resources (otherwise these expenses are included in institutional support). Institutions may, as an option, distribute depreciation expense to this function.

[186] Calculated with data from:

a) Dataset: “Table 334.10. Expenditures of public degree-granting postsecondary institutions, by purpose of expenditure and level of institution: 2006-07 through 2012-13.” U.S. Department Of Education, National Center for Education Statistics, January 2015. <nces.ed.gov>

b) Dataset: “Table 334.30. Total expenditures of private nonprofit degree-granting postsecondary institutions, by purpose and level of institution: 1999-2000 through 2012-13.” U.S. Department Of Education, National Center for Education Statistics, January 2015. <nces.ed.gov>

c) Dataset: “Table 334.50. Total expenditures of private for-profit degree-granting postsecondary institutions, by purpose and level of institution: 1999-2000 through 2012-13.” U.S. Department Of Education, National Center for Education Statistics, January 2015. <nces.ed.gov>

NOTE: An Excel file containing the data and calculations is available upon request.

[187] Dataset: “Table 3.16. Government Current Expenditures by Function.” U.S. Bureau of Economic Analysis. Last revised September 17, 2014. <www.bea.gov>

Line 32: “Education … Higher … 2013 [=] $167.3 [billion dollars]”

NOTE: This figure does not include government funding for research conducted by universities. [Email from the U.S. Bureau of Economic Analysis to Just Facts, June 19, 2015.]

[188] Calculated with data from:

a) Dataset: “Table 334.10. Expenditures of public degree-granting postsecondary institutions, by purpose of expenditure and level of institution: 2006-07 through 2012-13.” U.S. Department Of Education, National Center for Education Statistics, January 2015. <nces.ed.gov>

b) Dataset: “Table 334.30. Total expenditures of private nonprofit degree-granting postsecondary institutions, by purpose and level of institution: 1999-2000 through 2012-13.” U.S. Department Of Education, National Center for Education Statistics, January 2015. <nces.ed.gov>

c) Dataset: “Table 334.50. Total expenditures of private for-profit degree-granting postsecondary institutions, by purpose and level of institution: 1999-2000 through 2012-13.” U.S. Department Of Education, National Center for Education Statistics, January 2015. <nces.ed.gov>

NOTES:

- An Excel file containing the data and calculations is available upon request.

- Functions that contribute directly to the education of students and the general public include instruction, public service, and academic support (see next footnote).

- For private for-profit colleges, the National Center for Education Statistics does not segregate spending on: (a) public service from research and (b) academic support from institutional support and student services. To estimate spending on these individual functions for private for-profit colleges, Just Facts averaged the respective ratios on these functions from public and private non-profit colleges and applied these ratios to private for-profit colleges.
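
NOTE: One way to read the ratio-averaging method described in the note above is sketched below in Python. All dollar amounts are hypothetical placeholders, not NCES figures; the sketch only illustrates the mechanics of averaging the public and private non-profit ratios and applying the result to a combined for-profit line.

# Python sketch (hypothetical figures): splitting a combined for-profit expense line
# NCES reports research and public service as one line for for-profit colleges.
public_ps, public_research = 12.0, 36.0          # hypothetical: public colleges, $ billions
nonprofit_ps, nonprofit_research = 5.0, 25.0     # hypothetical: private non-profit colleges

ratio_public = public_ps / (public_ps + public_research)              # 0.25
ratio_nonprofit = nonprofit_ps / (nonprofit_ps + nonprofit_research)  # ~0.167
avg_ratio = (ratio_public + ratio_nonprofit) / 2                      # ~0.208

forprofit_combined = 2.0                          # hypothetical combined line, $ billions
forprofit_ps_est = forprofit_combined * avg_ratio
forprofit_research_est = forprofit_combined - forprofit_ps_est
print(round(forprofit_ps_est, 2), round(forprofit_research_est, 2))   # 0.42 1.58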

[189] “Glossary: Integrated Postsecondary Education Data System.” U.S. Department of Education, National Center for Education Statistics. Accessed June 18, 2015 at <nces.ed.gov>

Instruction A functional expense category that includes expenses of the colleges, schools, departments, and other instructional divisions of the institution and expenses for departmental research and public service that are not separately budgeted. Includes general academic instruction, occupational and vocational instruction, community education, preparatory and adult basic education, and regular, special, and extension sessions. Also includes expenses for both credit and non-credit activities. Excludes expenses for academic administration where the primary function is administration (e.g., academic deans). Information technology expenses related to instructional activities if the institution separately budgets and expenses information technology resources are included (otherwise these expenses are included in academic support). Institutions include actual or allocated costs for operation and maintenance of plant, interest, and depreciation.

Public service A functional expense category that includes expenses for activities established primarily to provide noninstructional services beneficial to individuals and groups external to the institution. Examples are conferences, institutes, general advisory service, reference bureaus, and similar services provided to particular sectors of the community. This function includes expenses for community services, cooperative extension services, and public broadcasting services. Also includes information technology expenses related to the public service activities if the institution separately budgets and expenses information technology resources (otherwise these expenses are included in academic support). Institutions include actual or allocated costs

Academic support A functional expense category that includes expenses of activities and services that support the institution’s primary missions of instruction, research, and public service. It includes the retention, preservation, and display of educational materials (for example, libraries, museums, and galleries); organized activities that provide support services to the academic functions of the institution (such as a demonstration school associated with a college of education or veterinary and dental clinics if their primary purpose is to support the instructional program); media such as audiovisual services; academic administration (including academic deans but not department chairpersons); and formally organized and separately budgeted academic personnel development and course and curriculum development expenses. Also included are information technology expenses related to academic support activities; if an institution does not separately budget and expense information technology resources, the costs associated with the three primary programs will be applied to this function and the remainder to institutional support. Institutions include actual or allocated costs for operation and maintenance of plant, interest, and depreciation.

[190] Email from the U.S. Bureau of Economic Analysis to Just Facts, June 15, 2015.

“Federal government outlays on loans, including student loans, are typically excluded from BEA’s estimates of federal expenditures. Unlike similar estimates – notably, estimates of spending on student loans in the federal budget – BEA excludes both the loan amounts and the subsidy costs of those loans from our estimates.”

[191] Calculated with the dataset: “Table 330.10. Average undergraduate tuition and fees and room and board rates charged for full-time students in degree-granting postsecondary institutions, by level and control of institution: 1963-64 through 2013-14.” U.S. Department Of Education, National Center for Education Statistics, December 2014. <nces.ed.gov>

NOTE: An Excel file containing the data and calculations is available upon request.

[192] Email from the U.S. Department Of Education, National Center for Education Statistics to Just Facts, June 10, 2015.

“In reference to table 330.10, ‘tuition and fees and room and board rates charged’ refer to the published rates that do not include discounts or student financial aid in its calculation.”

[193] Report: “Trends in the Student Loan Market.” U.S. Treasury Department, Treasury Borrowing Advisory Committee, November 4, 2014. <www.treasury.gov>

Page 6:

History of the Student Lending Program

• Student loans are used to finance post-secondary education, which is typically targeted for undergraduate and postgraduate education but also can include eligible vocational or trade schools.

• The U.S. government began offering Federal financing for Institutions of Higher Education (IHE) in 1965 with Title IV of the Higher Education Act (HEA).

[194] Report: “Trends in the Student Loan Market.” U.S. Treasury Department, Treasury Borrowing Advisory Committee, November 4, 2014. <www.treasury.gov>

Page 36:

Types of U.S. Federal Student Loans

• Direct, Subsidized Loans. Loan is directly administered by the Federal government and offered only to undergraduate students based on financial need. Interest does not accumulate while the borrower remains in school. The interest rate (2014-15) is 4.66% and the maximum loan balance is $23,000.

• Direct, Unsubsidized Loans. Loan is directly administered by the Federal government and offered to both undergraduate and graduate students regardless of need. Interest accumulates while the borrower remains in school. The 2014-15 interest rate for undergraduates is 4.66% and for graduate students is 6.21%. For undergraduate students, the maximum loan balance is $31,000 for dependent students (i.e. supported by parents) and the maximum combined balance of subsidized and unsubsidized Federal loans is $57,500 for independent students. Graduate and professional students have a hard cap of $138,500 balance.

• Direct PLUS Loans. Loan is directly administered by the Federal government and offered to graduate students and the parents of undergraduate students up to the cost of tuition and living expenses, at an interest rate (2014-15) of 7.21%.

• Perkins Loans. Loan is administered by the IHE/university. Interest does not accumulate while the borrower remains in school. The 2014-15 rate is 5.0%. The aggregate limit is $27,500 for undergraduate and $60,000 for graduate students (inclusive of the $27,500 as an undergraduate).

Page 37:

Terms of U.S. Federal Student Loans

• Pay rate – Interest rates are fixed over the life of the loan, but are based upon the UST10y rate and a fixed spread – 205bp for undergraduate loans, 360bp for graduate loans and 460bp for PLUS loans. The rates are capped at 8.25% (undergraduate), 9.50% (graduate) and 10.50% (PLUS).

• Maturity – The maturity of student loans is typically 10y but can extend to 25y.

• Repayment – For the most part, Federal student loans are similar to auto loans, with a fixed monthly payment of principal and interest over a ten year term.
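
NOTE: The rate-setting rule quoted on page 37 above (10-year Treasury yield plus a fixed spread, subject to a cap) can be expressed in a few lines of Python. This is an illustrative sketch, not part of the Treasury report; the 2.61% Treasury yield below is simply inferred from the quoted 2014-15 rates (for example, 4.66% − 2.05% = 2.61%).

# Python sketch: fixed student-loan rate = UST10y + spread, capped (values quoted on pages 36-37 above)
SPREADS_BP = {"undergraduate": 205, "graduate": 360, "plus": 460}   # basis points
CAPS_PCT = {"undergraduate": 8.25, "graduate": 9.50, "plus": 10.50}

def loan_rate(loan_type, ust10y_pct):
    rate = ust10y_pct + SPREADS_BP[loan_type] / 100.0
    return min(rate, CAPS_PCT[loan_type])

for t in ("undergraduate", "graduate", "plus"):
    print(t, round(loan_rate(t, 2.61), 2))   # 4.66, 6.21, 7.21 -- matches the 2014-15 rates quoted above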

[195] Report: “Trends in the Student Loan Market.” U.S. Treasury Department, Treasury Borrowing Advisory Committee, November 4, 2014. <www.treasury.gov>

Page 36:

Types of U.S. Federal Student Loans

• Direct, Subsidized Loans. Loan is directly administered by the Federal government and offered only to undergraduate students based on financial need. Interest does not accumulate while the borrower remains in school. The interest rate (2014-15) is 4.66% and the maximum loan balance is $23,000.

• Direct, Unsubsidized Loans. Loan is directly administered by the Federal government and offered to both undergraduate and graduate students regardless of need. Interest accumulates while the borrower remains in school. The 2014-15 interest rate for undergraduates is 4.66% and for graduate students is 6.21%. For undergraduate students, the maximum loan balance is $31,000 for dependent students (i.e. supported by parents) and the maximum combined balance of subsidized and unsubsidized Federal loans is $57,500 for independent students. Graduate and professional students have a hard cap of $138,500 balance.

• Direct PLUS Loans. Loan is directly administered by the Federal government and offered to graduate students and the parents of undergraduate students up to the cost of tuition and living expenses, at an interest rate (2014-15) of 7.21%.

• Perkins Loans. Loan is administered by the IHE/university. Interest does not accumulate while the borrower remains in school. The 2014-15 rate is 5.0%. The aggregate limit is $27,500 for undergraduate and $60,000 for graduate students (inclusive of the $27,500 as an undergraduate).

Page 37:

Terms of U.S. Federal Student Loans

• Pay rate – Interest rates are fixed over the life of the loan, but are based upon the UST10y rate and a fixed spread – 205bp for undergraduate loans, 360bp for graduate loans and 460bp for PLUS loans. The rates are capped at 8.25% (undergraduate), 9.50% (graduate) and 10.50% (PLUS).

• Maturity – The maturity of student loans is typically 10y but can extend to 25y.

• Repayment – For the most part, Federal student loans are similar to auto loans, with a fixed monthly payment of principal and interest over a ten year term.

[196] “Fiscal Year 2014 Financial Report of the United States Government.” U.S. Department of the Treasury, February 26, 2015. <www.fiscal.treasury.gov>

Page 71: “For those unable to afford credit at the market rate, federal credit programs provide subsidies in the form of direct loans offered at an interest rate lower than the market rate. For those to whom non-federal financial institutions are reluctant to grant credit because of the high risk involved, federal credit programs guarantee the payment of these non-federal loans and absorb the cost of defaults.”

Page 72: “The majority of the loan programs are provided by Education, HUD, USDA, Treasury, Small Business Administration (SBA), VA, Export-Import Bank and United States Agency for International Development (USAID). For significant detailed information regarding the direct and guaranteed loan programs listed in the tables above, please refer to the financial statements of the agencies.”

[197] Report: “Fair-Value Accounting for Federal Credit Programs.” U.S. Congressional Budget Office, March 2012. <www.cbo.gov>

Page 2: “Market risk is the component of financial risk that remains even after investors have diversified their portfolios as much as possible; it arises from shifts in macroeconomic conditions, such as productivity and employment, and from changes in expectations about future macroeconomic conditions. Loans and loan guarantees expose the government to market risk because future repayments of loans tend to be lower when the economy as a whole is performing poorly and resources are more highly valued.”

Page 7: “When the government extends credit, the associated market risk of those obligations is effectively passed along to citizens who, as investors, would view that risk as costly.”

[198] Article: “The New Math of Student Loans.” By AnnaMaria Andriotis. Wall Street Journal, June 12, 2015. <www.wsj.com>

[For] undergraduate students with creditworthy parents and graduate students with high credit scores—student loans from private lenders … could help them save thousands of dollars over the life of a loan. …

[Federal student loans don’t] reward parents who have high credit scores and are financially comfortable, because every applicant who gets approved for a federal loan—no matter what their credit score is—ends up with the same interest rate.

By contrast, private loan rates are determined based largely on the applicant or parent’s credit as well as documentation of their income. …

… Beginning July 1, interest rates will be 4.29% for [federal] Stafford loans for undergraduates, down from 4.66%. The rate on the federal Plus loan will be 6.84%, down from 7.21%. …

SunTrust Banks, for example, currently is charging fixed interest rates between 4% and 10.5% on its loans, which range from seven to 15 years. …

Citizens Bank, a unit of Citizens Financial Group, charges interest rates of as little as 5.75% for undergraduate fixed-rate loans. …

[199] Transcript: “Remarks at Southwest Texas State College Upon Signing the Higher Education Act of 1965.” Lyndon B. Johnson, November 8, 1965. <www.lbjlib.utexas.edu>

In a very few moments, I will put my signature on the Higher Education Act of 1965. The President’s signature upon this legislation passed by this Congress will swing open a new door for the young people of America. For them, and for this entire land of ours, it is the most important door that will ever open--the door to education. …

This bill is only one of more than two dozen education measures enacted by the first session of the 89th Congress. And history will forever record that this session-the first session of the 89th Congress--did more for the wonderful cause of education in America than all the previous 176 regular sessions of Congress did, put together.

[200] Report: “Trends in the Student Loan Market.” U.S. Treasury Department, Treasury Borrowing Advisory Committee, November 4, 2014. <www.treasury.gov>

Page 6:

History of the Student Lending Program

• Student loans are used to finance post-secondary education, which is typically targeted for undergraduate and postgraduate education but also can include eligible vocational or trade schools.

• The U.S. government began offering Federal financing for Institutions of Higher Education (IHE) in 1965 with Title IV of the Higher Education Act (HEA).

[201] “Fiscal Year 2014 Financial Report of the United States Government.” U.S. Department of the Treasury, February 26, 2015. <www.fiscal.treasury.gov>

Page 71: “For those to whom non-federal financial institutions are reluctant to grant credit because of the high risk involved, federal credit programs guarantee the payment of these non-federal loans and absorb the cost of defaults.”

Page 72: “[T]he Federal Family Education Loan (FFEL) Program … was established in fiscal year 1965, and is a guaranteed loan program.”

[202] Report: “Federal Family Education Loan Program’s Financial Statements for Fiscal Years 1993 and 1992.” General Accounting Office, June 1994. <www.gao.gov>

Page 65:

On August 10, 1993, President Clinton signed the Omnibus Budget Reconciliation Act of 1993 (P.L. 103-66). A portion of that Act entitled “The Student Loan Reform Act of 1993” requires the phase-in of federal direct student lending. Direct student lending, as a percentage of new student loan volume will be phased in over five years as follows:

Academic Year     Percent
1994-95           5%
1995-96           40%
1996-97           at least 50%
1997-98           at least 50%
1998-99           at least 60%

The Student Loan Reform Act of 1993 ensures adequate financing for the current guaranty agencies during the transition and provides for alternative mechanisms to assure loan guarantees in the event that any of the guaranty agencies do not continue to operate. The implementation plans for the new direct loan program provide for Education’s cost of transitioning outstanding guaranteed loans, therefore no provision for such cost has been included in the principal statements.

[203] Calculated with data from:

a) Vote 406: “Omnibus Budget Reconciliation Act of 1993.” U.S. House of Representatives, August 5, 1993. <clerk.house.gov>

b) Vote 247: “Omnibus Budget Reconciliation Act of 1993.” U.S. Senate, August 6, 1993. <www.senate.gov>

Combined vote totals from both Houses of Congress:

Party          Voted YES       Voted NO
Republican       0    (0%)    219  (100%)
Democrat       268   (85%)     47   (15%)
Independent      1  (100%)      0    (0%)
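
NOTE: The party percentages in the table above follow directly from the yes/no counts; the short Python sketch below reproduces them (illustrative only, not part of the cited vote records).

# Python sketch: party percentages from the combined vote counts above
votes = {"Republican": (0, 219), "Democrat": (268, 47), "Independent": (1, 0)}
for party, (yes, no) in votes.items():
    total = yes + no
    print(party, f"yes {yes / total:.0%}", f"no {no / total:.0%}")
# Republican: yes 0%, no 100%; Democrat: yes 85%, no 15%; Independent: yes 100%, no 0%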

[204] Report: “Trends in the Student Loan Market.” U.S. Treasury Department, Treasury Borrowing Advisory Committee, November 4, 2014. <www.treasury.gov>

Page 3: “[T]he Student Aid and Fiscal Responsibility Act (SAFRA) of 2010 ceased the origination of federal student loans by private lenders and as of July 1, 2010, all federal student loans are made directly by the Department of Education and funded by the U.S. Treasury Department.”

[205] “Fiscal Year 2012 Financial Report of the United States Government.” U.S. Department of the Treasury, January 17, 2013. <fms.treas.gov>

Page 11: “The Student Aid and Fiscal Responsibility Act (SAFRA), which was enacted as part of the Health Care Education and Reconciliation Act of 2010 (Public Law 111-152), eliminated the authority to guarantee new FFEL [Federal Family Education Loans] after June 30, 2010.”

[206] House Resolution 4872: “Health Care and Education Reconciliation Act.” Signed into law by Barack Obama on March 30, 2010 (became Public Law No: 111-152). <www.gpo.gov>

[207] Calculated with data from:

a) Vote 194: “Health Care and Education Reconciliation Act of 2010.” U.S. House of Representatives, March 25, 2010. <clerk.house.gov>

b) Vote 105: “Health Care and Education Reconciliation Act of 2010.” U.S. Senate, March 25, 2010. <www.senate.gov>

Combined vote totals from both Houses of Congress:

Party          Voted YES       Voted NO
Republican       0    (0%)    215  (100%)
Democrat       274   (89%)     35   (11%)
Independent      2  (100%)      0    (0%)

NOTE: Results do not include those not voting or those who voted “Present.”

[208] Calculated with data from:

a) Dataset: “Quarterly Report on Household Debt and Credit.” Federal Reserve Bank of New York, Research And Statistics Group, Microeconomic Studies, May 2015. <www.newyorkfed.org>

b) “CPI Detailed Report Data for May 2015.” U.S. Department of Labor, Bureau of Labor Statistics, May 2015. <www.bls.gov>

“Table 24. Historical Consumer Price Index for All Urban Consumers (CPI-U): U. S. city average, all items (1982-84=100, unless otherwise noted)”

NOTE: An Excel file containing the data and calculations is available upon request.

[209] Article: “Student Loan Delinquencies Surge.” By Emily Dai. Inside the Vault, Federal Reserve Bank of St. Louis, Spring 2013. Pages 1-3. <www.stlouisfed.org>

Page 1: “Student loan debt increased significantly over the past few years, almost doubling from half a trillion dollars in 2007 to nearly $1 trillion today. After mortgage debt, it is the largest amount of debt held by U.S. consumers. In contrast, the amount of auto loan and credit card debt held by U.S. consumers today is approximately $783 billion and $679 billion, respectively.”

[210] Dataset: “Quarterly Report on Household Debt and Credit.” Federal Reserve Bank of New York, Research And Statistics Group, Microeconomic Studies, May 2015. <www.newyorkfed.org>

Page 3:

Total Debt Balance and its Composition … Trillions $

Type             15:Q1
Mortgage         8.171
HE Revolving     0.510
Auto Loan        0.968
Credit Card      0.684
Student Loan     1.189
Other            0.329
Total           11.851
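
NOTE: As a consistency check, the components in the table above sum to the quoted total; the short Python sketch below reproduces that arithmetic (illustrative only).

# Python sketch: components of household debt sum to the quoted total (trillions of dollars, 2015 Q1)
balances = {"Mortgage": 8.171, "HE Revolving": 0.510, "Auto Loan": 0.968,
            "Credit Card": 0.684, "Student Loan": 1.189, "Other": 0.329}
print(round(sum(balances.values()), 3))   # 11.851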

[211] “Quarterly Report on Household Debt and Credit.” Federal Reserve Bank of New York, Research And Statistics Group, Microeconomic Studies, February 2015. <www.newyorkfed.org>

Page 28:

Loan types. In our analysis we distinguish between the following types of accounts: mortgage accounts, home equity revolving accounts, auto loans, bank card accounts, student loans and other loan accounts. Mortgage accounts include all mortgage installment loans, including first mortgages and home equity installment loans (HEL), both of which are closed-end loans. Home Equity Revolving accounts (aka Home Equity Line of Credit or HELOC), unlike home equity installment loans, are home equity loans with a revolving line of credit where the borrower can choose when and how often to borrow up to an updated credit limit. Auto Loans are loans taken out to purchase a car, including Auto Bank loans provided by banking institutions (banks, credit unions, savings and loan associations), and Auto Finance loans, provided by automobile dealers and automobile financing companies. Bankcard accounts (or credit card accounts) are revolving accounts for banks, bankcard companies, national credit card companies, credit unions and savings & loan associations. Student Loans include loans to finance educational expenses provided by banks, credit unions and other financial institutions as well as federal and state governments. The Other category includes Consumer Finance (sales financing, personal loans) and Retail (clothing, grocery, department stores, home furnishings, gas etc) loans. Our analysis excludes authorized user trades, disputed trades, lost/stolen trades, medical trades, child/family support trades, commercial trades and, as discussed above, inactive trades (accounts not reported on within the last 3 months).

[212] Article: “Student Loan Delinquencies Surge.” By Emily Dai. Inside the Vault, Federal Reserve Bank of St. Louis, Spring 2013. Pages 1-3. <www.stlouisfed.org>

Page 1:

In the third quarter of 2012, the share of delinquent student loan balances exceeded the share of delinquent credit card balances, according to the Federal Reserve Bank of New York’s Consumer Credit Panel and to Equifax.2 This is the first such occurrence since 2003, when reliable data became available.3 In the fourth quarter of 2012, the share of delinquent student loan balances continued to rise.

2. “Delinquent” here refers to balances past due for 90 days or more.

3. The data were first captured by Equifax in 2003 and first reported in 2010 in the Federal Reserve Bank of New York’s Household Debt and Credit Report.

[213] Dataset: “Quarterly Report on Household Debt and Credit.” Federal Reserve Bank of New York, Research And Statistics Group, Microeconomic Studies, May 2015. <www.newyorkfed.org>

[214] “Quarterly Report on Household Debt and Credit.” Federal Reserve Bank of New York, Research And Statistics Group, Microeconomic Studies, February 2015. <www.newyorkfed.org>

Page 28:

Loan types. In our analysis we distinguish between the following types of accounts: mortgage accounts, home equity revolving accounts, auto loans, bank card accounts, student loans and other loan accounts. Mortgage accounts include all mortgage installment loans, including first mortgages and home equity installment loans (HEL), both of which are closed-end loans. Home Equity Revolving accounts (aka Home Equity Line of Credit or HELOC), unlike home equity installment loans, are home equity loans with a revolving line of credit where the borrower can choose when and how often to borrow up to an updated credit limit. Auto Loans are loans taken out to purchase a car, including Auto Bank loans provided by banking institutions (banks, credit unions, savings and loan associations), and Auto Finance loans, provided by automobile dealers and automobile financing companies. Bankcard accounts (or credit card accounts) are revolving accounts for banks, bankcard companies, national credit card companies, credit unions and savings & loan associations. Student Loans include loans to finance educational expenses provided by banks, credit unions and other financial institutions as well as federal and state governments. The Other category includes Consumer Finance (sales financing, personal loans) and Retail (clothing, grocery, department stores, home furnishings, gas etc) loans. Our analysis excludes authorized user trades, disputed trades, lost/stolen trades, medical trades, child/family support trades, commercial trades and, as discussed above, inactive trades (accounts not reported on within the last 3 months).

[215] Report: “Trends in the Student Loan Market.” U.S. Treasury Department, Treasury Borrowing Advisory Committee, November 4, 2014. <www.treasury.gov>

Page 11:

• In attempting to gauge the potential future cost of the program, it is important to consider not only the volume of loans in default, but also the volumes in three other categories that could indicate difficulty repaying: deferment, forbearance, and serious delinquency.

• Default: “Default” in the context of student loans is generally defined as 270 days without payment. …

• Deferment: Payments have been postponed as a result of certain circumstances such as returning to school, military service, or economic hardship.

• Forbearance: Payments have been temporarily suspended or reduced as a result of certain types of financial hardships.

• The ability to defer or forbear on loans distinguishes student lending from other credit. During deferment or forbearance, the principal and interest of the loans capitalize, making balances larger for students and exacerbating repayment potential.
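
To make the capitalization mechanic concrete, the sketch below shows how interest that accrues while payments are postponed can be added to principal at the end of a forbearance, so the borrower subsequently owes interest on a larger balance. It is a minimal illustration; the loan size, interest rate, and single end-of-period capitalization rule are assumptions, not terms from the Treasury report.

```python
def balance_after_forbearance(principal, annual_rate, months):
    """Accrue simple interest on the original principal during forbearance,
    then capitalize (add) the accrued interest into the balance.
    The inputs and the single end-of-period capitalization are illustrative
    assumptions, not terms drawn from the cited report."""
    accrued = principal * (annual_rate / 12) * months
    return principal + accrued

# Hypothetical example: $30,000 balance, 6% interest, 12 months of forbearance.
print(balance_after_forbearance(30_000, 0.06, 12))  # 31800.0 -> $1,800 capitalized
```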

[216] Report: “Trends in the Student Loan Market.” U.S. Treasury Department, Treasury Borrowing Advisory Committee, November 4, 2014. <www.treasury.gov>

Page 12: “Breakdown of U.S. Federal Student Financing by Repayment Status, Type (As of June 2014)”

Page 11: “Serious Delinquency (90+ Days): Classified as in repayment, but given the high volume relative to historical levels, some portion can be considered at risk of default.”

[217] Report: “Higher Education: Education Should Strengthen Oversight of Schools and Accreditors.” U.S. Government Accountability Office, January 22, 2015. <www.gao.gov>

Page 23:

Selected Student Outcome Characteristics:

* Three-Year Cohort Default Rate: the percent of borrowers in default 3 years after entering repayment status. Education views this characteristic as an indicator of academic quality at schools, since students who received a lower quality education may be less likely to have adequate income to repay their loans;

* Forbearance Rate: the percent of borrowers in forbearance (and therefore not repaying their loans on a temporary basis) during the official cohort default period. Education views this characteristic as an indicator of academic quality at schools, since students who received a lower quality education may be less likely to have adequate income to repay their loans;

Page 52:

Table 6: Variables Used as Risk Indicators to Determine Likelihood of Accreditor Sanctions:

Risk Indicator: 3-Year Loan Default Rate (2009); Risk Scale: Academic; Percentage Schools with Missing Data: 3.1; Percentage Schools with Indicator NA: 19.1; Mean: 13.7; Median: 12.1; Range Between 5th, 95th Quantiles: [0.4,31.5]; Skewness: 1.5; Kurtosis: 9.2.

Risk Indicator: 3-Year Loan Default Rate (2010); Risk Scale: Academic; Percentage Schools with Missing Data: 4.5; Percentage Schools with Indicator NA: 16.5; Mean: 14.6; Median: 13.3; Range Between 5th, 95th Quantiles: [1.4,31.6]; Skewness: 1.4; Kurtosis: 8.8.

Risk Indicator: 5-Year Loan Forbearance Rate (2009 - 2010 Average); Risk Scale: Academic; Percentage Schools with Missing Data: 0.1; Percentage Schools with Indicator NA: 22.3; Mean: 43.6; Median: 44.2; Range Between 5th, 95th Quantiles: [24.2,61.8]; Skewness: -0.2; Kurtosis: 3.2.

CALCULATION: (13.7% default rate for the 2009 cohort + 14.6% default rate for the 2010 cohort) / 2 = 14.1%

[218] Report: “Trends in the Student Loan Market.” U.S. Treasury Department, Treasury Borrowing Advisory Committee, November 4, 2014. <www.treasury.gov>

Page 4:

A key concern is that students are taking on student loans because historically an education has been correlated with economic mobility; however, today an average of 40% of students at four-year institutions (and 68% of students in for-profit institutions) do not graduate within six years,(1) which means they most likely do not benefit from the income upside from a higher degree yet have the burden of student debt. …

1. National Center for Education Statistics. Based on graduation rates of Bachelor’s Degree-Seeking Students at 4-Year Postsecondary Institutions (cohort entry year: 2006).

Page 21:

Failure-to-graduate remains the most deadly of traps for higher education. As shown in the following Graphs 8, 9 and 10, the marginal benefit of higher education is clear in terms of lifetime earnings and better employment stability. Failure-to-graduate combined with leverage is a poor mix: the debt burden remains but very little of the economic benefits accrue. Whatever the reasons that the student failed-to-graduate, he or she is left with all of the downside and limited upside.

[219] Curriculum Vitae: “Deborah J. Lucas.” MIT Sloan School of Management, February 2014. <mitsloan.mit.edu>

Director, MIT Center for Finance and Policy, 2012-present

Sloan Distinguished Professor of Finance, Sloan School of Management, 2011-present

Assistant Director, Financial Analysis Division, Congressional Budget Office 2010-2011

Associate Director of Financial Studies, Congressional Budget Office 2009-2010

Professor of Finance, Sloan School of Management, 2009-2011 (on leave)

Donald C. Clark HSBC Professor of Consumer Finance, Department of Finance, Kellogg School of Management, Northwestern University, 1996 - 2009.

Chief Economist, Congressional Budget Office, 2000 – 2001.

Member, Social Security Technical Advisory Panel, 1999 - 2000, and 2006 – 2007.

Chairman, Department of Finance, Kellogg School of Management, 1996 – 1998.

John L. and Helen Kellogg Distinguished Associate Professor, Department of Finance, Kellogg School of Management, Northwestern University. 1992 – 1996.

NBER Research Associate, 1998 - present.

NBER Faculty Research Fellow, 1992 – 1998.

Senior Staff Economist, Council of Economic Advisers, Washington, D.C., 1992 – 1993.

Assistant Professor, Department of Finance, J.L. Kellogg School of Management, Northwestern University, 1985 - 1992.

Visiting Assistant Professor, Department of Finance, Sloan School of Management, Massachusetts Institute of Technology, 1990 - 1991.

[220] Book: Public Economics in the United States: How the Federal Government Analyzes and Influences the Economy. Edited by Steven Payson. ABC-CLIO, 2014. Chapter 15: “Federal Credit Programs.” By Deborah Lucas (Director, MIT Center for Finance and Policy). Pages 375-398.

Page 394:

The government delivers subsidies in a variety of forms, for example, using cash grants, or in-kind assistance such as free vaccinations. Government credit that is offered at a below-market price to beneficiaries similarly provides a subsidy. …

Government credit programs may have adverse consequences that must be weighed against their expected benefits. One concern is that credit subsidies will distort the allocation of capital in the economy and crowd out productive investments by households and firms. For example, subsidizing mortgage guarantees increases the demand for housing, causing more savings to be invested in residential construction. That leaves fewer resources available for other investment activities, and puts upward pressure on the interest rates facing all borrowers.

Easier access to credit markets is not always advantageous to program participants. Unsophisticated borrowers, such as some college students and first-time homebuyers, may not be fully aware of the costs and risks associated with accumulating high debt loads. Consumer protection and disclosure laws usually do not extend to the government, and there is a possibility that it will inadvertently offer poorly designed products that can harm consumers. …

A well-understood consequence of government credit provision is that it tends to create incentives for greater risk taking, particularly when a borrower becomes financially distressed. The reason is that a debtor with a guaranteed debt benefits from the upside if a gamble pays off, whereas the government shares in the losses if the gamble fails. (The effect is less pronounced for loans obtained privately because financial institutions charge interest rates that increase with risk, which discourages excessive risk taking.)
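
A stylized payoff calculation illustrates the asymmetry described above: with a guaranteed debt, the borrower keeps the gains if a gamble pays off while the guarantor absorbs the losses if it fails. The numbers below are hypothetical and show only the direction of the incentive.

```python
# Hypothetical even-odds gamble taken by a distressed borrower whose debt is
# guaranteed by the government. Figures are illustrative, not from the source.
stake = 20_000   # dollars won or lost by the gamble
p_win = 0.5      # probability the gamble pays off

borrower_expected = p_win * stake + (1 - p_win) * 0       # downside shifted away
guarantor_expected = p_win * 0 + (1 - p_win) * (-stake)   # guarantor absorbs losses
total_expected = borrower_expected + guarantor_expected

print(borrower_expected)   # 10000.0 -> borrower keeps the expected upside
print(guarantor_expected)  # -10000.0 -> government bears the expected downside
print(total_expected)      # 0.0 -> the gamble creates no value overall
```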

[221] Webpage: “Student Loan Bankruptcy Exception.” FinAid. Accessed June 24, 2015 at <www.finaid.org>

The US Bankruptcy Code at 11 USC 523(a)(8) provides an exception to bankruptcy discharge for education loans. This page provides a history of the legislative language in this section of the US Bankruptcy Code.

Student loans were dischargeable in bankruptcy prior to 1976. With the introduction of the US Bankruptcy Code (11 USC 101 et seq) in 1978, the ability to discharge education loans was limited. Subsequent changes in the law have further narrowed the dischargeability of education debt. …

The following timeline illustrates the date of major changes in the treatment of student loans under the US Bankruptcy Code and related changes to other legislation: …

[222] Report: “Trends in the Student Loan Market.” U.S. Treasury Department, Treasury Borrowing Advisory Committee, November 4, 2014. <www.treasury.gov>

Page 9:

Unlike Other Credit, Can’t Extinguish Student Loans in Bankruptcy

Default Consequences:

• Tax Refund Offsets: IRS can offset the borrower’s income tax refund until the defaulted loan is paid in full. A number of states also have laws that authorize state guaranty agencies to take state income tax refunds.

• Federal Benefits Offsets: The government can offset certain Social Security benefits to collect government student loans. Just as with other types of student loan collection, there is no time limit on Social Security offsets, according to a 2005 Supreme Court Case.

• Wage Garnishments: The government can also garnish wages as a way to recover money owed on a defaulted student loan. The United States Department of Education or a Student Loan Guarantor can garnish 15% of disposable pay(1) per pay period without a court order.

• Effect on Credit History: Adversely affects credit for many years. If borrower defaults, loan will be listed as a current debt that is in default. The default will also be listed in the historical section of borrower’s credit report, specifying the length of the default.

• License Revocations: A number of states allow professional and vocational boards to refuse to certify, certify with restrictions, suspend or revoke a member’s professional or vocational license and, in some cases, impose a fine, when a member defaults on student loans.

[223] Article: “Court Rules Social Security Can Be Seized To Pay Student Loans.” By Melissa McNamara. Associated Press, December 7, 2005. <www.cbsnews.com>

“The Supreme Court ruled unanimously Wednesday that the government can seize a person’s Social Security benefits to pay old student loans.”

[224] Article: “White House Floats Bankruptcy Process for Some Student Debt.” By Josh Mitchell. Wall Street Journal, March 10, 2015. <www.wsj.com>

“Fewer than 1,000 people try to get rid of their student loans every year using bankruptcy in a process that is both expensive and uncertain: It involves filing a lawsuit in federal court, and lawyers typically charge several thousand dollars upfront for that work. A Wall Street Journal analysis found 713 such lawsuits were filed last year.”

[225] Press release: “Fact Sheet: A Student Aid Bill of Rights: Taking Action to Ensure Strong Consumer Protections for Student Loan Borrowers.” White House, Office of the Press Secretary, March 10, 2015. <www.whitehouse.gov>

In addition, new requirements may be appropriate for private and federally guaranteed student loans so that all of the more than 40 million Americans with student loans have additional basic rights and protections. The President is directing his Cabinet and White House advisers, working with the Consumer Financial Protection Bureau, to study whether consumer protections recently applied to mortgages and credit cards, such as notice and grace periods after loans are transferred among lenders and a requirement that lenders confirm balances to allow borrowers to pay off the loan, should also be afforded to student loan borrowers and improve the quality of servicing for all types of student loans. The agencies will develop recommendations for regulatory and legislative changes for all student loan borrowers, including possible changes to the treatment of loans in bankruptcy proceedings and when they were borrowed under fraudulent circumstances.

[226] Report: “Federal Student Loan Forgiveness and Loan Repayment Programs.” By Alexandra Hegji, David P. Smole, and Elayne J. Heisler. Congressional Research Service, July 22, 2014. <www.fas.org>

Summary:

Over 50 federal student loan forgiveness and repayment programs are currently authorized under federal law. Although each program is designed to operate somewhat differently, they are all intended to provide debt relief to borrowers who perform specified types of service, enter into and remain employed in certain professions, serve in certain locations, or repay their loans according to an income-dependent repayment plan for an extended period of time.

Pages 3-4:

Distinction among Loan Forgiveness and Loan Repayment Programs

In employment-focused loan forgiveness and loan repayment programs, a borrower typically must work or serve in a certain function, profession, or geographic location for a specified period of time to qualify for benefits. In repayment plan-based loan forgiveness programs, a borrower typically must repay according to an income-dependent repayment plan for a specified period of time to qualify for benefits. At the end of the specified term, some or all of the individual’s qualifying student loan debt is forgiven or paid on his or her behalf. The individual is thus relieved of responsibility for paying that portion of his or her student loan debt. One of the most important distinctions among these types of programs is whether the availability of benefits is incorporated into the loan terms and conditions and is thus considered an entitlement to qualified borrowers or whether benefits are made available to qualified borrowers at the discretion of the entity administering the program and whether the benefits are subject to the availability of funds. For the purposes of this report, the former types of programs are referred to as loan forgiveness while the latter are referred to as loan repayment.

In general, loan forgiveness benefits are broadly available to borrowers of qualified loans. The availability of these benefits is expressed to borrowers in their loan documents, such as the master promissory note and the borrower’s rights and responsibilities statement.9 A borrower who satisfies the loan forgiveness program’s eligibility criteria, as set forth in the loan terms and conditions, is entitled to the loan forgiveness benefits. Benefits that are entitlements to qualified borrowers are generally funded through mandatory appropriations and accounted for as part of federal student loan subsidy costs, which are discussed in detail later in the section titled “Cost of Loan Forgiveness and Loan Repayment Programs.” There are two broad categories of loan forgiveness benefits: loan forgiveness for public service employment and loan forgiveness following income-dependent repayment.

Loan repayment programs also provide debt relief to borrowers for service in a specific function, profession, or location. However, in contrast to employment-focused loan forgiveness programs, the entity that administers a loan repayment program typically either directly repays some or all of the qualified borrower’s student loan debt on his or her behalf or provides funding to a separate entity for purposes of implementing a loan repayment program and making such payments. Loan repayment benefits are generally offered through programs that are separate or distinct from the program through which a federal student loan is made. In many instances, these programs are designed to address broad employment needs or shortages (e.g., within a specific occupation or geographic location), while other such programs are intended to help individual federal agencies recruit and retain qualified employees, often serving as an additional form of compensation to targeted employees, who may be harder to recruit or retain. Both types of loan repayment benefits are generally available to a limited number of qualified borrowers. Typically, loan repayment benefits are discretionary and their availability is subject to the appropriation of funds.

Pages 11-12:

Availability of Loan Forgiveness for Public Service Employment

As described above, loan forgiveness for public service employment provides debt relief to qualified borrowers employed in certain occupations, for specific employers, or in public service. These benefits are considered entitlements and are written into the terms and conditions of widely available federal student loans (e.g., Direct Loan Subsidized and Unsubsidized Loans and Perkins Loans). They are potentially available to an open-ended number of qualified borrowers.

Table 1 provides a summary of the various loan forgiveness for public service employment programs offered. …

Table 1 illustrates that although loan forgiveness benefits are entitlements that are potentially available to a wide array of borrowers, to qualify for benefits borrowers must still meet specific eligibility criteria, including completing a specific type of service or entering into a particular occupation or profession.

All three programs are widely available to individuals serving as teachers, while Federal Perkins Loan Cancellation is available to individuals who also serve in other specific public service occupations, such as law enforcement personnel and public defenders, and Direct Loan Public Service Loan Forgiveness is available to an even broader array of individuals who are employed full-time in public service, which includes employment in federal, state, local, or tribal government agencies, organizations and certain nonprofit organizations. However, unlike the other programs, its availability is also dependent on borrowers’ economic circumstances during repayment.

Additionally, borrowers under these programs must serve for a minimum period of time. For these loan forgiveness programs, service commitments generally last between one year (for partial benefits) and ten years.

Availability of Loan Forgiveness Following Income-Dependent Repayment

Loan forgiveness following income-dependent repayment provides debt relief to borrowers who repay their federal student loans as a proportion of their income for an extended period of time but who have not repaid their entire student loan debt. These benefits are considered entitlements and are written into the terms and conditions of widely available federal student loans (e.g., Direct Subsidized Loans, Direct Unsubsidized Loans, and Perkins Loans). They are potentially available to an open-ended number of qualified borrowers. These programs are potentially available to a large number of borrowers; however, these programs are distinct from those that target public service employment.

Table 2 provides a summary of the various loan forgiveness programs that provide debt relief to individuals following income-dependent repayment. The table also provides details on the operational status of the program.

Although it is unclear how many individual borrowers may benefit from these programs, as forgiveness benefits have not yet been realized under any of them, the table is organized according to the scale of benefits that might be realized by borrowers at the culmination of income-dependent repayment. The Income-Contingent Repayment (ICR) Plan A (Pay As You Earn) offers the most generous benefits currently available to borrowers—debt relief after 20 years of repayment based on 10% of discretionary income. The Income-Based Repayment (IBR) Plan for New Borrowers on or after July 1, 2014, will offer essentially the same level of benefits to individuals who are new borrowers on or after July 1, 2014. The IBR Plan for pre-July 1, 2014, borrowers offers debt relief after 25 years of repayment based on 15% of discretionary income and has been available to borrowers since 2009. Debt relief following 25 years of repayment according to ICR Plan B has been available to borrowers since 1994.

[227] Press release: “Fact Sheet: Making Student Loans More Affordable.” White House, Office of the Press Secretary, June 9, 2014. <www.whitehouse.gov>

Today, the President will direct the Secretary of Education to ensure that student loans remain affordable for all who borrowed federal direct loans as students by allowing them [to] cap their payments at 10 percent of their monthly incomes. The Department will begin the process to amend its regulations this fall with a goal of making the new plan available to borrowers by December 2015.

With legislation passed by Congress and signed by the President in 2010 and regulations adopted by the Administration in 2012, most students taking out loans today can already cap their loan payments at 10 percent of their incomes. Monthly payments will be set on a sliding scale based upon income. Any remaining balance is forgiven after 20 years of payments, or 10 years for those in public service jobs. However, this Pay As You Earn (PAYE) option is not available to students with older loans (those who borrowed before October 2007 or who have not borrowed since October 2011), although they can access similar, less generous options. No existing repayment options will be affected, and the new repayment proposal will also aim to include new features to target the plan to struggling borrowers.

This executive action is expected to help up to 5 million borrowers who may be struggling with student loans today. For students that need to borrow to finance college, PAYE provides an important assurance that student loan debt will remain manageable. Because the PAYE plan is based in part on a borrower’s income after leaving school, it shares with students the risk of taking on debt to invest in higher education.

… Because the PAYE [Pay As You Earn] plan is based in part on a borrower’s income after leaving school, it shares with students the risk of taking on debt to invest in higher education.
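
As a rough illustration of how a 10 percent income cap translates into a monthly bill, the sketch below computes a PAYE-style payment from a borrower's income. The definition of discretionary income used here (adjusted gross income minus 150 percent of the federal poverty guideline) and the dollar inputs are assumptions for illustration; they are not taken from the fact sheet.

```python
def paye_style_monthly_payment(agi, poverty_guideline, cap_rate=0.10):
    """Illustrative PAYE-style payment: cap_rate (10%) of discretionary income,
    spread over 12 months. The discretionary-income definition (AGI minus 150%
    of the poverty guideline) and the inputs below are assumptions, not figures
    from the White House fact sheet."""
    discretionary = max(0.0, agi - 1.5 * poverty_guideline)
    return cap_rate * discretionary / 12

# Hypothetical borrower: $40,000 adjusted gross income, $11,770 poverty guideline.
print(round(paye_style_monthly_payment(40_000, 11_770), 2))  # ~186.21 per month
```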

[228] Article: “Government to Forgive Student Loans at Corinthian Colleges.” By Tamar Lewin. New York Times, June 8, 2015. <www.nytimes.com>

“Secretary of Education Arne Duncan announced Monday that the Education Department would forgive the federal loans of tens of thousands of students who attended Corinthian Colleges, a for-profit college company that closed and filed for bankruptcy last month, amid widespread charges of fraud.”

[229] Article: “For-Profit Colleges File for Bankruptcy.” By Tamar Lewin. New York Times, May 4, 2015. <www.nytimes.com>

Corinthian was once one of the nation’s largest for-profit college companies, enrolling more than 100,000 students at its 100 Everest, Heald and WyoTech campuses. But for the last few years, the company has faced charges of predatory recruiting and false placement and graduation rates. It went into its death spiral last year when the Department of Education suspended its access to the federal student aid it depended on, and then brokered the sale of most of its campuses.

[230] Press release: “Fact Sheet: Protecting Students from Abusive Career Colleges.” U.S. Department of Education, June 8, 2015. <www.ed.gov>

Today, the Education Department is announcing new steps in this work, particularly to address the concerns of students who attended schools owned by Corinthian Colleges Inc.

How debt relief will work for Corinthian students

The Department has worked to rapidly develop a streamlined process for getting debt relief to Corinthian students. The Department’s aim is to make the process of forgiving loans fair, clear and efficient—and to ensure that students who are eligible to participate know about this opportunity.

Some Corinthian schools closed down, while others were sold but remain open under different ownership. The announcements today are for:

• Corinthian students whose schools have closed down.

• Corinthian students who believe they were victims of fraud, regardless of whether their school closed. …

Helping Corinthian students whose schools have closed

In general, when a college closes, students are eligible to discharge their federal student loans if they were attending when the school closed or who withdrew from the school within 120 days of the closing date. Given the unique circumstances for former Corinthian students, the Department is expanding eligibility for students to apply for a closed school loan discharge, extending the window of time back to June 20, 2014, to capture students who attended the now-closed campuses after Corinthian entered into an agreement with the Department to terminate Corinthian’s ownership of its schools. …

Helping students who believe they were victims of fraud, regardless of whether their school closed

Provisions in the law called “defense to repayment” or “borrower’s defense” allow borrowers to seek loan forgiveness if they believe they were defrauded by their college under state law. This provision has rarely been used in the past. Now, the Department is taking unprecedented action to create a streamlined process that is fair to students who may have been victims of fraud and that holds colleges accountable to taxpayers. …

For example, after analyzing the Department’s findings in its investigation of Heald College and relevant California law, the Department has determined that evidence of misrepresentation exists for students enrolled in a large majority of programs offered at Heald College campuses between 2010 and 2015. Specifically, the Department has determined that students who relied on misrepresentations found in published job placement rates for many Heald programs qualify to have their federal direct student loans discharged. Students can have their loans forgiven and receive refunds for amounts paid based on a simple attestation. More information about this process—including the attestation form—is available on studentaid.gov/Corinthian. Additional details will be posted on the website in the coming weeks. …

• Building a better system for debt relief for the future: The Department will develop new regulations to clarify and streamline loan forgiveness under the defense to repayment provision, while maintaining or enhancing current consumer protection standards and strengthening those provisions that hold colleges accountable for actions that result in loan discharges. That process will begin later this year and will not slow down the loan discharge process for current applicants.

[231] Report: “Higher Education: Education Should Strengthen Oversight of Schools and Accreditors.” U.S. Government Accountability Office, January 22, 2015. <www.gao.gov>

Cover page: “To access federal student aid—which totaled more than $136 billion in fiscal year 2013—schools must be accredited to ensure they offer a quality education.”

Page 4: “Accreditors … play a critical role in protecting the federal investment in higher education as part of the ‘triad’ that oversees schools participating in federal student aid programs authorized under Title IV of the Higher Education Act.”

Page 5:

Accrediting Agencies: Apply and enforce standards that help ensure that the education offered by a postsecondary school is of sufficient quality to achieve the objectives for which it is offered. …

The purpose of accreditation … is to help ensure that member schools meet quality standards established by accrediting agencies. While accreditation first arose in the U.S. as a means of ensuring academic quality by nongovernmental peer evaluation, today the process also serves as one of the bases for determining a school’s eligibility to participate in federal student aid programs. …

Accreditation … is a peer review process that serves several purposes in addition to being a gatekeeper for federal funds….

Pages 6-7:

In general, two different types of accreditors--regional and national--offer accreditation to schools that allows the schools to access federal student aid funds.11 12 Regional accreditors accredit mostly nonprofit and public schools, while national accreditors generally accredit for-profit schools. At the time of our review, regional accreditors had 3,134 member schools in total, while national accreditors had 3,719.13 Seven regional accreditors accredit schools within a particular region and have historically accredited public and private nonprofit schools that award degrees. In addition, eight national accreditors operate nationwide and have historically accredited vocational or technical schools that do not award degrees. Differences between regional and national accreditors still exist, as seen in figure 2, but some for-profit schools have obtained regional accreditation in recent years and many for-profit schools currently award two- and four-year degrees.

[232] Report: “Higher Education: Education Should Strengthen Oversight of Schools and Accreditors.” U.S. Government Accountability Office, January 22, 2015. <www.gao.gov>

Page 4: “Accreditors--generally nongovernmental, nonprofit organizations--play a critical role in protecting the federal investment in higher education….”

Page 5:

[U.S. Department of] Education: Recognize accreditors determined to be reliable authorities as to the quality of education offered by schools; certify schools as eligible to participate in federal student aid programs; and ensure that participating schools comply with the laws, regulations, and policies governing federal student aid. …

Accreditation agencies and processes predate the Higher Education Act [of 1965], and accreditation is a peer review process that serves several purposes in addition to being a gatekeeper for federal funds, including facilitating the transferability of courses and credits across member schools. According to representatives of schools and accrediting agencies, accreditation also encourages schools to maintain a focus on self-improvement.

While Education is required to determine whether accrediting agencies have standards in certain areas before recognizing them, the accrediting agencies are responsible for evaluating member schools to determine if they meet the accreditors’ standards. This accreditation process generally occurs at least every 10 years, depending on the accreditor and the school. The process is typically conducted by volunteer peer evaluators, generally from other member schools, selected by the accreditor, with final accreditation decisions made by a board that includes representatives from member schools and the public. While specific steps vary by accrediting agency, schools generally go through a similar accreditation process (see figure 1).

[233] Webpage: “The Executive Branch.” White House. Accessed February 1, 2013 at <www.whitehouse.gov>

Under Article II of the Constitution, the President is responsible for the execution and enforcement of the laws created by Congress. Fifteen executive departments — each led by an appointed member of the President’s Cabinet — carry out the day-to-day administration of the federal government. They are joined in this by other executive agencies such as the CIA and Environmental Protection Agency, the heads of which are not part of the Cabinet, but who are under the full authority of the President. The President also appoints the heads of more than 50 independent federal commissions, such as the Federal Reserve Board or the Securities and Exchange Commission, as well as federal judges, ambassadors, and other federal offices. The Executive Office of the President (EOP) consists of the immediate staff to the President, along with entities such as the Office of Management and Budget and the Office of the United States Trade Representative. …

The Cabinet is an advisory body made up of the heads of the 15 executive departments. Appointed by the President and confirmed by the Senate, the members of the Cabinet are often the President’s closest confidants. In addition to running major federal agencies, they play an important role in the Presidential line of succession — after the Vice President, Speaker of the House, and Senate President pro tempore, the line of succession continues with the Cabinet offices in the order in which the departments were created. All the members of the Cabinet take the title Secretary, excepting the head of the Justice Department, who is styled Attorney General. …

Department of Education


The mission of the Department of Education is to promote student achievement and preparation for competition in a global economy by fostering educational excellence and ensuring equal access to educational opportunity.

The Department administers federal financial aid for education, collects data on America’s schools to guide improvements in education quality, and works to complement the efforts of state and local governments, parents, and students.

The U.S. Secretary of Education oversees the Department’s 4,200 employees and $68.6 billion budget.

[234] Report: “Higher Education: Education Should Strengthen Oversight of Schools and Accreditors.” U.S. Government Accountability Office, January 22, 2015. <www.gao.gov>

Page 9:

Accreditors may issue sanctions or take other interim actions (such as requiring annual reports on finances). The Higher Education Act requires accreditors to report certain sanctions, including terminations and probations, to Education within 30 days, and to provide Education a summary of the reasons leading them to terminate a school’s accreditation.19 Regional accreditors recently agreed on common sanction definitions, while national accrediting agencies do not have agreed-upon sanction definitions (see sidebar).

19 Accreditors must also report such sanctions, and provide summaries to the appropriate state licensing or authorizing agency. 20 U.S.C. § 1099b(a)(7) and (8). Specifically, accreditors must notify Education and the appropriate state licensing or authorizing agency of any final decision to place a school on probation; deny, withdraw, suspend, revoke, or terminate a school’s accreditation; or take other adverse action, as defined by the accrediting agency. 34 C.F.R. § 602.26(b). Accreditors must provide written notice to the public of such sanctions within 24 hours of its notice to the school. 34 C.F.R. § 602.26(c).

[235] Report: “Higher Education: Education Should Strengthen Oversight of Schools and Accreditors.” U.S. Government Accountability Office, January 22, 2015. <www.gao.gov>

Page 6: “In general, two different types of accreditors—regional and national—offer accreditation to schools that allows the schools to access federal student aid funds.11 12 Regional accreditors accredit mostly nonprofit and public schools, while national accreditors generally accredit for-profit schools.”

Page 8:

Areas in Which Accreditors Are Required to Have Standards:

1. Success with respect to student achievement (Standards may be established by the school and differ according to its mission)
2. Curricula
3. Faculty
4. Facilities, equipment, and supplies
5. Fiscal and administrative capacity
6. Student support services
7. Recruiting and admissions practices
8. Measures of program length and objectives
9. Student complaints
10. Compliance with federal student aid program responsibilities.

Pages 14-15:

In addition, the proportion of member schools that accreditors sanctioned varied. For example, two accreditors each sanctioned fewer than 2 percent of their member schools during our timeframe, compared to 41 percent for another accreditor. A representative from one accrediting agency explained that a key challenge for accreditors is grappling with competing expectations of accreditation. The representative noted that there is a general view by policy makers and those who influence policy that accreditors do not terminate accreditation enough. However, if an accreditor does terminate a particular school’s accreditation, she said there may be significant negative reaction from the public in the affected region, and a view that the accreditor is being too punitive.

Page 18:

Reasons for Accreditor Sanctions:

Academic quality: issues with student achievement in relation to the mission and curricula, or other student outcomes;

Administrative capability: issues such as those related to facilities, supplies, and administrative capability;

Financial capability: issues with financial capability and compliance with federal student aid responsibilities;

Integrity: fraud or misrepresentation;

Governance: issues with division of responsibility, such as between the Board and a college president;

Institutional Effectiveness: issues related to long-term plans for assessing learning and academic achievement;

Page 22:

We found that, on average, accreditors were no more likely to issue terminations or probations to schools with weaker student outcomes compared to schools with stronger student outcomes from October 2009 through March 2014, as seen in table 2 below. This held true for one combined indicator incorporating all of the student outcome characteristics we reviewed, as well as for most of the individual characteristics we examined. (The sidebar describes the student outcome characteristics we examined.38) Regional accreditors, however, were more likely to issue terminations or probations to schools with weaker outcomes on the combined indicator. (See appendix I for additional details on this analysis and appendix III for additional information on accreditor sanctions associated with student outcomes.)

Page 23:

Selected Student Outcome Characteristics:

* Three-Year Cohort Default Rate: the percent of borrowers in default 3 years after entering repayment status. Education views this characteristic as an indicator of academic quality at schools, since students who received a lower quality education may be less likely to have adequate income to repay their loans;

* Forbearance Rate: the percent of borrowers in forbearance (and therefore not repaying their loans on a temporary basis) during the official cohort default period. Education views this characteristic as an indicator of academic quality at schools, since students who received a lower quality education may be less likely to have adequate income to repay their loans;

* Graduation Rate: the percent of first-time full-time degree/certificate-seeking undergraduate students who complete a program within 150 percent of the program length. A low graduation rate may indicate a lack of academic quality;

* Dropout Rate: the percent of students who left school during a particular year, but did not graduate. A high dropout rate may indicate a lack of academic quality;

* Retention Rate: the percent of first-time degree/certificate-seeking students who enrolled in one fall and either successfully completed their program or re-enrolled in the next fall. A low retention rate may indicate a lack of academic quality;

* Increases in Federal Student Aid: annual growth in federal student aid volume, which may indicate in extreme cases that growth may be too rapid to maintain academic and administrative services needed to adequately support students;

* Number of Program Review Findings: the number of findings at schools selected by Education for in-depth review due to the presence of certain risk factors, and the number of issues found in those reviews.

Pages 23-24:

Although accreditors are required by law to have standards in academic and financial areas, among others, they are not required to use the student outcome characteristics that we selected to assess school academic quality, or to sanction members with weaker outcomes. Some accreditors do examine school student-level outcomes as benchmarks to determine whether their member schools are providing quality education, but would not necessarily sanction or revoke the accreditation of a school for not meeting these benchmarks.

Pages 24-25:

Table 3: Likelihood of Termination or Probation for Schools with Weaker vs. Stronger Individual Student Outcome Characteristics, by Type of Accreditor, October 2009 through March 2014:

Was there a significant difference in accreditors’ responses to weaker and stronger student outcomes(a) at schools?

Overall: Default Rate: Yes; Graduation Rate: No; Dropout Rate: No; Retention Rate: No; Forbearance Rate: No.

Regional accreditors: Default Rate: Yes; Graduation Rate: Yes; Dropout Rate: Yes; Retention Rate: Yes; Forbearance Rate: No.

National accreditors: Default Rate: No; Graduation Rate: No; Dropout Rate: No; Retention Rate: No; Forbearance Rate: No.

Source: GAO analysis of school-level student outcome characteristics collected by Education and data from the accreditation database. GAO-15-59.

Notes: We used statistical techniques that allowed us to examine accreditors’ likelihood of sanctioning schools with weaker student outcome characteristics, compared to schools with stronger outcomes, for each individual outcome. Schools with weaker student outcomes were considered to be those in the bottom vs. the top for each characteristic (those in the 1st vs. 99th percentile and 5th vs. 95th percentile). “Yes” indicates that the difference between the 1st and 99th percentiles and/or 5th and 95th percentiles was statistically significant at the 95 percent confidence level. All comparisons were significant for the 1st and 99th percentiles as well as for the 5th and 95th percentiles, with the exception of default rate for regional accreditors, which was only significant when comparing the 5th and 95th percentiles.

(a) Default rate indicates the percent of borrowers who entered repayment in fiscal 2009 or 2010 and were in default as of the end of the second following fiscal year; graduation rates reported to IPEDS in 2011 and 2012 are for first-time full-time degree/certificate-seeking undergraduate students that completed their degree within 150 percent of the expected time; dropout rate indicates the total number of withdrawals reported by each school during a particular year divided by the total number of graduates plus withdrawals reported to the National Student Loan Data System for that year for award years 2008-2009 through 2012-2013; retention rate indicates the percent of first-time degree/certificate-seeking students who enrolled in the previous fall and either successfully completed their program or re-enrolled in the next fall as reported to IPEDS in the fall of 2010 and 2011; and forbearance rate indicates the percent of borrowers who entered repayment status in fiscal year 2009 and 2010 and were in forbearance as of the end of the following fiscal year.

Because the graduation rate collected by Education is limited to first-time full-time degree/certificate-seeking undergraduate students, we also estimated accreditors’ likelihood of sanctioning schools with higher dropout rates.40 Similar to the results of our graduation rate analysis, we found that national accreditors were not more likely to issue terminations or probations to schools with higher dropout rates than those with lower dropout rates. In contrast, regional accreditors were more likely to issue terminations or probations to schools with higher dropout rates (see table 3 above).
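
GAO's notes describe comparing the likelihood of sanction for schools at the weak versus strong ends of each outcome distribution. The sketch below shows one way such a comparison could be set up on synthetic data, using a logistic regression of sanction status on an outcome measure and contrasting predicted sanction probabilities at the 5th and 95th percentiles; it illustrates the general approach only and is not GAO's actual model, data, or significance test.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic illustration only: neither the data nor the model is GAO's.
rng = np.random.default_rng(0)
n = 3_000
default_rate = rng.beta(2, 10, size=n) * 100          # hypothetical 3-year default rates (%)
latent = -4.0 + 0.05 * default_rate                   # assumed weak positive relationship
sanctioned = (rng.random(n) < 1 / (1 + np.exp(-latent))).astype(int)

model = sm.Logit(sanctioned, sm.add_constant(default_rate)).fit(disp=0)

# Predicted probability of sanction at the 5th vs. 95th percentile of default rates.
p5, p95 = np.percentile(default_rate, [5, 95])
pred = model.predict(sm.add_constant(np.array([p5, p95])))
print(f"Pr(sanction) at 5th percentile default rate:  {pred[0]:.3f}")
print(f"Pr(sanction) at 95th percentile default rate: {pred[1]:.3f}")
```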

Pages 33-34:

For 36 of the 93 schools receiving federal student aid funds that were placed on probation by their accreditors in fiscal year 2012, we found no indication of follow-up activities by Education between the beginning of fiscal year 2012 and December 2013.56 57 Not all accreditor sanctions require follow-up by Education, such as a sanction issued for failure to obtain student feedback. However, oversight actions by Education may be warranted if accreditor sanctions indicate potential federal student aid violations or other weaknesses affecting a school’s ability to appropriately administer federal student aid programs. As discussed above, our review of 10 schools with fiscal year 2012 accreditor sanctions found three cases in which analysts had no record of accreditor sanctions that could indicate a need for heightened federal student aid oversight. Because Education did not capture its decisions or the rationale for them in these cases, it is not possible to know if analysts did not review the cases at all, or if they reviewed them and determined that no action should be taken.

Page 35:

Unclear guidance from Education may also make it difficult for Education staff who oversee schools to respond consistently to these sanction notifications and contribute to lapses in oversight of schools, since the guidance does not lay out the recommended approach to specific types of accreditor sanctions.60 Moreover, although several officials who oversee schools told us they believed official guidance required them to restrict access to federal student aid funds for schools with show cause orders, the guidance does not specifically refer to show cause orders. In addition, the fact that Education may not have reviewed accreditor information about up to one-third of the 93 schools that were receiving federal student aid funds and that were placed on probation in fiscal year 2012, as discussed above, may also reflect the lack of clear guidance by sanction type.

Pages 35-36:

Moreover, in part because Education’s guidance does not lay out the recommended approach to specific types of accreditor sanctions, officials who oversee schools do not consistently view accreditor sanction notifications as a valuable oversight tool. For example, one official noted that her team would never respond to accreditor probations because they occur too frequently to track and would disrupt other work.62 However, our review found that just under 100 schools of the more than 6,000 participating in federal student aid programs were placed on probation by their accreditor in fiscal year 2012. Another official said reviewing accreditor sanctions was not very useful in overseeing schools, as accreditors would take additional action to prompt a response by Education if a school’s situation became more serious. However, other officials who oversee schools stated that they found show cause order notifications helpful.63 Consequently, Education’s response to sanctions is inconsistent. Since accreditors may take other, informal steps prior to issuing a sanction, as discussed earlier in the report, accreditor sanctions can in fact be a serious indication of problems at a school. More specifically, all accreditor sanctions--including probations--can be an important source of information on schools. Consistent with federal internal control standards that call for ongoing, continual monitoring, reviewing accreditor sanctions in a timely manner can help analysts who oversee schools detect school compliance issues as they occur and prevent more serious problems from developing in the future.64

[236] Report: “Higher Education: Education Should Strengthen Oversight of Schools and Accreditors.” U.S. Government Accountability Office, January 22, 2015. <www.gao.gov>

Page 40:

However, our analysis found that accreditors were no more likely to sanction schools with weaker student outcomes than schools with stronger student outcomes. These findings raise questions about whether existing accreditor standards are sufficient to ensure the quality of schools, whether Education is effectively determining if these standards ensure educational quality, and whether federal student aid funds are appropriately safeguarded.

[237] Press release: “Fact Sheet: Protecting Students from Abusive Career Colleges.” U.S. Department of Education, June 8, 2015. <www.ed.gov>

Over the past six years, the Education Department has taken unprecedented steps to hold career colleges accountable for giving students what they deserve: a high-quality, affordable education that prepares them for their careers. The Department established tougher regulations targeting misleading claims by colleges and incentives that drove sales people to enroll students through dubious promises. The Department has cracked down on bad actors through investigations and enforcement actions. The Department also issued “gainful employment” regulations, which will help ensure that students at career colleges don’t end up with debt they cannot repay. The Department will continue to hold institutions accountable in order to improve the value of their programs, protect students from abusive colleges, and safeguard the interests of taxpayers.

[238] Report: “The Budget and Economic Outlook: Fiscal Years 2013 to 2023.” U.S. Congressional Budget Office, February 2013. <www.cbo.gov>

Page 25:

However, several factors—collectively labeled other means of financing and not directly included in budget totals—also affect the government’s need to borrow from the public. Among them are reductions (or increases) in the government’s cash balance and in the cash flows associated with federal credit programs (such as those related to student loans and mortgage guarantees) because only the subsidy costs of those programs (calculated on a present-value basis) are reflected in the budget deficit.

CBO projects that Treasury borrowing will be $104 billion more than the projected budget deficit in fiscal year 2013, mainly to finance student loans. Each year from 2014 to 2023, borrowing by the Treasury is expected to exceed the amount of the deficit, mostly because of the need to provide financing for student loans and other credit programs. CBO projects that the government will need to borrow $76 billion more per year, on average, during that period than the budget deficits would suggest.

[239] Report: “Analytical Perspectives: Budget of the U.S. Government, Fiscal Year 2012.” White House Office of Management and Budget. <www.whitehouse.gov>

Page 139:

To illustrate the budgetary and non-budgetary components of a credit program, consider a portfolio of new direct loans made to a cohort of college students. To encourage higher education, the Government offers loans at a lower cost than private lenders. Students agree to repay the loans according to the terms of their promissory notes. The loan terms may include lower interest rates or longer repayment periods than would be available from private lenders. Some of the students are likely to become delinquent or default on their loans, leading to Government losses to the extent the Government is unable to recover the full amount owed by the students. … In other words, the subsidy cost is the difference in present value between the amount disbursed by the Government and the estimated value of the loan assets the Government receives in return. Because the loan assets have value, the remainder of the transaction (beyond the amount recorded as a subsidy) is simply an exchange of financial assets of equal value and does not result in a cost to the Government.

[240] Report: “Fair-Value Accounting for Federal Credit Programs.” U.S. Congressional Budget Office, March 2012. <www.cbo.gov>

Page 1:

According to the rules for budgetary accounting prescribed in the Federal Credit Reform Act of 1990 (FCRA, incorporated as title V of the Congressional Budget Act of 1974), the estimated lifetime cost of a new loan or loan guarantee is recorded in the budget in the year in which the loan is disbursed.2 That lifetime cost is generally described as the subsidy provided by the loan or loan guarantee. It is measured by discounting all of the expected future cash flows associated with the loan or loan guarantee—including the amounts disbursed, principal repaid, interest received, fees charged, and net losses that accrue from defaults—to a present value at the date the loan is disbursed. A present value is a single number that expresses a flow of current and future income, or payments, in terms of a lump sum received, or paid, today; the present value depends on the rate of interest, known as the discount rate, that is used to translate future cash flows into current dollars.3
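
Stated as a formula (the notation below is mine, not CBO's), the lifetime cost recorded under FCRA at disbursement is the amount lent minus the present value of the expected future cash flows:

```latex
% Notation is illustrative, not CBO's own.
\[
\mathrm{PV} = \sum_{t=1}^{T} \frac{E[\mathit{CF}_t]}{(1+r)^{t}},
\qquad
\text{subsidy cost} = D - \mathrm{PV}
\]
% D    = amount disbursed at origination
% CF_t = expected net cash flow in year t (principal, interest, and fees,
%        net of default losses)
% r    = discount rate (rates on Treasury securities of similar maturity under FCRA)
```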

Page 3: “CBO has estimated that the average subsidy for direct student loans made between 2010 and 2020 would be a negative 9 percent under FCRA accounting…. (A negative subsidy indicates that, for budgetary purposes, the transactions are recorded as generating net income for the government.)”

[241] Report: “Fair-Value Accounting for Federal Credit Programs.” U.S. Congressional Budget Office, March 2012. <www.cbo.gov>

Pages 1-2:

FCRA [Federal Credit Reform Act]-based cost estimates, however, do not provide a comprehensive measure of what federal credit programs actually cost the government and, by extension, taxpayers. Under FCRA’s rules, the present value of expected future cash flows is calculated by discounting them using the rates on U.S. Treasury securities with similar terms to maturity. Because that procedure does not fully account for the cost of the risk the government takes on when issuing loans or loan guarantees, it makes the reported cost of federal direct loans and loan guarantees in the federal budget lower than the cost that private institutions would assign to similar credit assistance based on market prices. Specifically, private institutions would generally calculate the present value of expected future cash flows by discounting those flows using the rates of return on private loans (or securities) with similar risks and maturities. Because the rates of return on private loans exceed Treasury rates, the discounted value of expected loan repayments is smaller under this alternative approach, which implies a larger cost of issuing a loan. (Similar reasoning implies that the private cost of a loan guarantee would be higher than its cost as estimated under FCRA.)4

FCRA and market-based cost estimates alike take into account expected losses from defaults by borrowers. However, because FCRA estimates use Treasury interest rates instead of market-based rates for discounting, FCRA estimates do not incorporate the cost of the market risk associated with the loans. Market risk is the component of financial risk that remains even after investors have diversified their portfolios as much as possible; it arises from shifts in macroeconomic conditions, such as productivity and employment, and from changes in expectations about future macroeconomic conditions. Loans and loan guarantees expose the government to market risk because future repayments of loans tend to be lower when the economy as a whole is performing poorly and resources are more highly valued.

Some observers argue that using market-based rates for discounting loan repayments to the federal government would be inappropriate because the government can fund its loans by issuing Treasury debt and thus does not seem to pay a price for market risk. However, Treasury rates are lower than those market-based rates primarily because Treasury debt holders are protected against default risk. If payments from borrowers fall short of what is owed to the federal government, the shortfall must be made up eventually either by raising taxes or by cutting other spending. (Issuing additional Treasury debt can postpone but not avert the need to raise taxes or cut spending.) Therefore, a more comprehensive approach to measuring the cost of federal credit programs would recognize market risk as a cost to the government and would calculate present values using market-based discount rates. Under such an approach, the federal budget would reflect the market values of loans and loan guarantees.
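The passage above can be made concrete with a short sketch: discounting the same expected repayments at a higher market rate yields a smaller present value of those repayments and therefore a larger estimated cost. The loan amount, repayments, 3 percent Treasury-like rate, and 7 percent market-like rate below are invented for illustration and are not CBO estimates.

def pv(cash_flows, rate):
    # Present value of one cash flow per year at the given discount rate.
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cash_flows, start=1))

disbursed = 1000.0
repayments = [220.0] * 5                            # same expected repayments under both methods
fcra_cost = disbursed - pv(repayments, 0.03)        # discounted at a Treasury-like rate
fair_value_cost = disbursed - pv(repayments, 0.07)  # discounted at a market-like rate
print(round(fcra_cost, 2), round(fair_value_cost, 2))  # about -7.5 vs about 98.0
# The FCRA-style cost is negative (recorded as net income); the fair-value cost is positive.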

Page 11:

Federal student loans expose the government to losses from defaults, and they involve significant administrative expenses for origination, servicing, and collection on defaults; at the same time, the government collects fees and interest from borrowers. As with other types of credit, student loans are exposed to market risk, meaning that default rates tend to be higher, and recoveries smaller, when the economy is weak and the losses are most costly.

[242] Report: “Fair-Value Accounting for Federal Credit Programs.” U.S. Congressional Budget Office, March 2012. <www.cbo.gov>

Page 2:

What is termed the fair-value approach to budgeting for federal credit programs would measure those programs’ costs at market prices or at some approximation of market prices when directly comparable market prices are unavailable. A fair-value approach generally entails applying the discount rates on expected future cash flows that private financial institutions would apply.5 In the view of the Congressional Budget Office (CBO), adopting a fair-value approach would provide a more comprehensive way to measure the costs of federal credit programs and would permit more level comparisons between those costs and the costs of other forms of federal assistance. …

Page 3:

In some cases, fair-value estimates of budgetary costs as a percentage of loan amounts are considerably higher than FCRA [Federal Credit Reform Act] estimates: CBO has estimated that the average subsidy for direct student loans made between 2010 and 2020 would be a negative 9 percent under FCRA accounting but a positive 12 percent on a fair-value basis. (A negative subsidy indicates that, for budgetary purposes, the transactions are recorded as generating net income for the government.) Subsequent changes in CBO’s interest rate projections would affect both estimates of the amounts of those subsidies, but the large gap between them would remain.

Page 6:

Because FCRA accounting requires the use of Treasury rates for discounting, it implicitly treats the market risk associated with federal credit programs as having no cost to the government. As a result, the subsidy provided by the government is understated under FCRA accounting. Moreover, the higher the market risk that is associated with a credit obligation, the greater is that understatement.

Page 7:

When the government extends credit, the associated market risk of those obligations is effectively passed along to citizens who, as investors, would view that risk as costly.

If the federal government is able to spread certain risks more widely than the private sector can, the government may be a relatively efficient provider of certain types of insurance. That is, a private provider of such insurance might charge higher fees if it is unable to transfer the risk to a wide group of investors. However, even if the federal government can spread risks widely, it cannot eliminate the component of risk that is associated with fluctuations in the aggregate economy—market risk—and which investors require compensation to bear.

The federal government’s ability to borrow at Treasury rates also does not reduce the cost to taxpayers of the market risk associated with federal credit programs. Treasury rates are relatively low because the securities are backed by the government’s ability to raise taxes. When the government finances a risky loan or loan guarantee by selling a Treasury security, it is effectively shifting risk to members of the public. If such a loan is repaid as expected, the interest and principal payments cover the government’s obligation to the holder of the Treasury security, but if the borrower defaults, the obligation to the security holder must be paid for either by raising taxes or by cutting other spending to be able to repay the Treasury debt. (Issuing additional Treasury debt can postpone but not avert the need to raise taxes or cut spending.) Thus, the risk is effectively borne by taxpayers (or by beneficiaries of government programs); like investors, taxpayers and government beneficiaries generally value resources more highly when the economy is performing poorly.

[243] Report: “Billions of Dollars in Potentially Erroneous Education Credits Continue to Be Claimed for Ineligible Students and Institutions.” Treasury Inspector General for Tax Administration, March 27, 2015. <www.treasury.gov>

Page 1:

Education tax credits help taxpayers offset the costs of higher education and have become an increasingly important component of Federal higher education policy. The amount of education credits individuals claim each year has increased from more than $3 billion for Tax Year 1998 to almost $19 billion for Tax Year 2012. …

The Taxpayer Relief Act of 19973 created two permanent education tax credits, the Hope Credit and the Lifetime Learning Credit. The American Recovery and Reinvestment Act of 20094 temporarily replaced the Hope Credit with a refundable tax credit5 known as the American Opportunity Tax Credit (AOTC). The AOTC was initially set to expire at the end of Calendar Year 2010 but has since been extended through Calendar Year 2017.

Page 20: “To accomplish our objective, we … identified 12,214,137 taxpayers2 on the IRS Individual Return Transaction File3 who claimed education credits for 13,351,478 students on Tax Year4 2012 returns. We verified the accuracy and reliability of the data obtained by comparing 30 tax returns to return information found on the Integrated Data Retrieval System.5 The data were determined to be sufficiently reliable for the purposes of the audit.”

[244] Report: “The Alternative Minimum Tax for Individuals: A Growing Burden.” By Kurt Schuler. U.S. Congress, Joint Economic Committee, May 2001. <taxpolicycenter.org>

Page 2: “A tax credit is a provision that allows a reduction in tax liability by a specific dollar amount, regardless of income. For example, a tax credit of $500 allows both taxpayers with income of $40,000 and those with income of $80,000 to reduce their taxes by $500, if they qualify for the credit.”

[245] Report: “Overview of the Federal Tax System.” By Molly F. Sherlock and Donald J. Marples. Congressional Research Service, November 21, 2014. <www.fas.org>

Page 7: “If a tax credit is refundable, and the credit amount exceeds tax liability, a taxpayer receives a payment from the government.”

[246] Report: “Options for Reducing the Deficit: 2015 to 2024.” Congressional Budget Office, November 20, 2014. <www.cbo.gov>

Page 38: “Low- and moderate-income people are eligible for certain refundable tax credits under the individual income tax if they meet specified criteria. If the amount of a refundable tax credit exceeds a taxpayer’s tax liability before that credit is applied, the government pays the excess to that person.”
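To make the distinction drawn in footnotes 244 through 246 concrete, here is a minimal sketch of how a nonrefundable credit differs from a refundable one; the $900 liability and $1,000 credit are hypothetical figures, not amounts from the cited reports.

def apply_credit(tax_liability, credit, refundable):
    # A nonrefundable credit can only reduce the liability to zero;
    # a refundable credit pays out any excess as a refund.
    tax_after_credit = max(tax_liability - credit, 0.0)
    refund = max(credit - tax_liability, 0.0) if refundable else 0.0
    return tax_after_credit, refund

print(apply_credit(900.0, 1000.0, refundable=False))   # (0.0, 0.0)
print(apply_credit(900.0, 1000.0, refundable=True))    # (0.0, 100.0)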

[247] Report: “Existing Compliance Processes Will Not Reduce the Billions of Dollars in Improper Earned Income Tax Credit and Additional Child Tax Credit Payments.” Treasury Inspector General for Tax Administration, September 29, 2014. <www.treasury.gov>

Page 15: “The Internal Revenue Code requires the IRS to process tax returns and pay any related tax refunds within 45 calendar days of receipt of the tax return or the tax return due date, whichever is later. Because of this requirement, the IRS cannot conduct extensive eligibility checks similar to those that occur with other Federal programs that typically certify eligibility prior to the issuance of payments or benefits.”

[248] Report: “Individuals Who Are Not Authorized to Work in the United States Were Paid $4.2 Billion in Refundable Credits.” Treasury Inspector General for Tax Administration, July 7, 2011. <www.treasury.gov>

Page 2:

Two of the largest refundable tax credits are the EITC [Earned Income Tax Credit] and the ACTC [Additional Child Tax Credit]. …

The ACTC is the refundable portion of the Child Tax Credit (CTC). The CTC can reduce an individual’s taxes owed by as much as $1,000 for each qualifying child. The ACTC is provided in addition to the CTC to individuals whose taxes owed were less than the amount of CTC they were entitled to claim. The ACTC is always the refundable portion of the CTC, which means an individual claiming the ACTC receives a refund even if no income tax was withheld or paid. As with all refundable credits, the risk of fraud for these types of claims is significant.

[249] Report: “Billions of Dollars in Potentially Erroneous Education Credits Continue to Be Claimed for Ineligible Students and Institutions.” Treasury Inspector General for Tax Administration, March 27, 2015. <www.treasury.gov>

Page 2:

Prior TIGTA [Treasury Inspector General for Tax Administration] report raised concerns with IRS efforts to identify and prevent erroneous education credit claims

In September 2011,6 we reported that the IRS does not have effective processes to identify taxpayers who claim erroneous education credits; 2.1 million taxpayers received a total of $3.2 billion in education credits ($1.6 billion in refundable credits and $1.6 billion in nonrefundable credits) that appeared to be erroneous. …

This review was performed with information obtained from the Wage and Investment Division office in Atlanta, Georgia, and the Small Business/Self-Employed Division office in Lanham, Maryland, during the period November 2013 through November 2014. We conducted this performance audit in accordance with generally accepted government auditing standards. …

The IRS does not have effective processes to identify erroneous claims for education credits. Although the IRS has taken steps to address some of our recommendations to improve the identification and prevention of erroneous education credit claims, many of the deficiencies we previously identified still exist. Based on our analysis of education credits claimed and received on Tax Year 2012 tax returns, we estimate more than 3.6 million taxpayers (claiming more than 3.8 million students) received more than $5.6 billion ($2.5 billion in refundable credits and $3.1 billion in nonrefundable credits) in potentially erroneous education credits.

Page 7:

Our analysis of Tax Year 2012 tax returns identified the following education credit claims that appear to be erroneous based on IRS records:

• More than 2 million taxpayers (claiming more than 2.2 million students) who received education credits totaling more than $3.2 billion with no Form 1098-T filed with the IRS by a postsecondary educational institution for the student claimed.16

• More than 1.6 million taxpayers (claiming nearly 1.7 million students) who received education credits totaling approximately $2.5 billion for which Department of Education data show the educational institution listed on the Form 8863 was not certified to receive Federal student aid funding, i.e. not an eligible educational institution. To qualify for an education credit, students must attend a postsecondary educational institution that is certified by the Department of Education to receive Federal student aid funding.

• 427,345 taxpayers (claiming 431,622 students) who received AOTCs totaling approximately $662 million for students who, based on Forms 1098-T, did not attend school at least half-time as required.17 Students must attend an eligible institution at least half-time to qualify for the AOTC.

Further analysis of the more than 3.6 million taxpayers we identified showed that 765,943 (21 percent) claimed both a student for which the IRS received no Form 1098-T and listed an ineligible institution on their Form 8863. Figure 6 shows the results of our analysis of taxpayers who received education credits for students with no Form 1098-T or who attended an ineligible institution.

Page 12:

Erroneous American Opportunity Tax Credits Are Being Allowed for Students Claimed for More Than Four Years

Analysis of tax return filings between Tax Years 2007 and 2013 identified 1.1 million students for which the AOTC was claimed for more than four years. Our review of a statistically valid sample of 139 of the 1.1 million students identified that 130 (94 percent) students were erroneously claimed for the AOTC.23 Based on the results of our sample, we estimate that more than 1 million (94 percent) of the more than 1.1 million students we identified were used to receive potentially erroneous AOTCs totaling nearly $1.7 billion.24 Specifically, each of these students were claimed in excess of the four-year limit between Tax Years 2007 and 2013. For Tax Year 2012 alone, we estimate that 419,827 students who had already been claimed in four previous tax years were used to receive potentially erroneous AOTCs totaling more than $650 million.

Page 15:

Potentially Erroneous Education Credits Are Being Received for Students Who Are Incarcerated or of Unlikely Ages to Be Eligible

The IRS has yet to establish effective processes to identify taxpayers who claim potentially erroneous education credits for students who are of an unlikely age to pursue postsecondary education or who are incarcerated. Our review identified:

• 39,763 taxpayers who received more than $61.5 million in potentially erroneous education credits as of December 31, 2013, for 43,800 students who were under age 14 or over age 65. For each of these students, the IRS did not receive a Form 1098-T for the student being claimed.

• 2,148 taxpayers who received potentially erroneous education credits totaling approximately $3.9 million for students who were incarcerated for all of Calendar Year 2012. For each of these students, the IRS did not receive a Form 1098-T for the student being claimed.

Education credit requirements do not require a student to be a specific age to qualify for an education credit, nor do they specify that a student must not be incarcerated. However, both conditions call into question the validity of a taxpayer’s education credit claim. For example, students under the age of 14 or over the age of 65 are not likely to be attending a postsecondary educational institution. In addition, individuals who are incarcerated for a full calendar year are unlikely to meet the requirement that they be a taxpayer’s dependent or incur qualifying educational expenses at an eligible educational institution.

Page 20:

Appendix I Detailed Objective, Scope, and Methodology

Our overall objective was to assess the IRS’s efforts to improve the detection and prevention of questionable education credit claims. We conducted follow-up testing to evaluate the effectiveness of the IRS’s actions to address recommendations made in five prior audit reports.1 To accomplish our objective, we:

B. Identified 12,214,137 taxpayers2 on the IRS Individual Return Transaction File3 who claimed education credits for 13,351,478 students on Tax Year4 2012 returns. We verified the accuracy and reliability of the data obtained by comparing 30 tax returns to return information found on the Integrated Data Retrieval System.5 The data were determined to be sufficiently reliable for the purposes of the audit.

C. Identified 1,167,119 students for which the AOTC was claimed for more than four tax years between Tax Year 2007 and Tax Year 2013.

Page 25: “Figure 1: Computation of the Average Refundable and Nonrefundable Education Credit Received Per Tax Return … Average Credit Per Tax Return [=] $1,548.90”
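The figure above can be roughly cross-checked against other numbers quoted in the report. The sketch below assumes a total of about $18.9 billion in education credits for Tax Year 2012 (a reading of the “almost $19 billion” figure on page 1) and divides by the 12,214,137 returns identified in Appendix I; it is a consistency check only, not the report’s own computation.

total_credits = 18.9e9        # assumed reading of "almost $19 billion" claimed for Tax Year 2012
returns = 12_214_137          # taxpayers claiming education credits (Appendix I)
print(round(total_credits / returns, 2))   # about 1547, in line with the reported $1,548.90 average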

[250] U.S. Code Title 38, Part III, Chapter 34, Subchapter I, Section 3452: “Veterans’ Benefits, Definitions.” Accessed August 10, 2015 at <www.law.cornell.edu>

(f) The term “institution of higher learning” means a college, university, or similar institution, including a technical or business school, offering postsecondary level academic instruction that leads to an associate or higher degree if the school is empowered by the appropriate State education authority under State law to grant an associate or higher degree. When there is no State law to authorize the granting of a degree, the school may be recognized as an institution of higher learning if it is accredited for degree programs by a recognized accrediting agency. Such term shall also include a hospital offering educational programs at the postsecondary level without regard to whether the hospital grants a postsecondary degree. Such term shall also include an educational institution which is not located in a State, which offers a course leading to a standard college degree, or the equivalent, and which is recognized as such by the secretary of education (or comparable official) of the country or other jurisdiction in which the institution is located.

[251] Webpage: “Academic Degree and Certificate Definitions.” Arkansas Department of Higher Education, Research and Planning Division. Accessed July 17, 2015 at <www.adhe.edu>

Associate degree (two years or more): a degree granted upon completion of a program that requires at least two, but fewer than four, academic years of postsecondary education. It includes a level of general education necessary for growth as a lifelong learner and is comprised of 60-72 semester credit hours. There are four types of associate degrees: …

Baccalaureate (bachelor’s) degree: a degree granted upon completion of a program that requires four to five years of full-time college work and carries the title of bachelor. …

Master’s degree: a degree which requires at least one, but no more than two, full-time equivalent years of study beyond the bachelor’s degree.

Doctoral degree: a degree awarded upon completion of an educational program at the graduate level which terminates in a doctor’s degree. …

First professional degree: a degree awarded upon completion of a program which meets all of these criteria: a) completion of academic requirements to begin practice in the profession; b) at least two years of college work before entering the program; and c) at least six academic years of college work to complete the degree program, including the prior required college work. First professional degrees are awarded in these fields:

• Chiropractic (DC)

• Dentistry (DDS or DMD)

• Law (LLB or JD)

• Medicine (MD)

• Optometry (OD)

• Osteopathic Medicine (DO)

• Pharmacy (Pharm.D.)

• Podiatry (Pod D or DP)

• Theology (M Div or MHL)

• Veterinary Medicine (DVM)

[252] Brief: “Time to Degree of U.S. Research Doctorate Recipients.” By Thomas B. Hoffer and Vincent Welch. National Science Foundation, March 2006. <www.nsf.gov>

This InfoBrief draws on data from the Survey of Earned Doctorates (SED) to document average time-to-degree differences among research doctorate recipients from U.S. universities. … [T]hree measures of time to degree are examined here:

• total elapsed time from completion of the baccalaureate to the doctorate (total time to degree)

• time in graduate school less reported periods of nonenrollment (registered time to degree)

• age at doctorate …

For the 2003 doctorate recipients, the median total time from baccalaureate to doctorate was 10.1 years, while the median registered time was 7.5 years and the median age at doctorate was 33.3 years.

Pages 2-3:

Table 3 shows time-to-degree differences for 2003 by more detailed science fields of study. Chemistry has the lowest [median] times to degree on all three measures. For the registered time-to-degree variable, mathematics (6.8 years), engineering (6.9 years), biological sciences (6.9 years), and physics and astronomy (7.0 years) were the next closest fields to chemistry (6.0 years). The longest registered time-to-degree total was found for anthropology (9.6 years).

[253] Webpage: “Path to Graduate and Professional Education.” Grand Valley State University. Accessed August 10, 2015 at <www.gvsu.edu>

“A doctoral degree typically involves both coursework and a major research project. Usually 5 to 7 years of full-time study is needed to complete a Ph.D. or other research doctorate. The first 2 to 3 years usually involve classes, seminars, and directed reading to give you comprehensive knowledge of an academic field. This period of study is followed by a written or oral examination that tests your knowledge.”

[254] Webpage: “The Difference Between a PhD and Professional Doctorate.” Capella University, Jan 28, 2015. <www.capella.edu>

Some people say that a PhD prepares you to teach, while a professional doctorate is geared more toward a professional career. But the answer to the question is more complex. …

The one primary difference between PhD and professional doctorate programs is the intent and deliverable of the independent research phase. PhD students are expected to create, expand, and contribute to knowledge, research, and theory in their field of study in the form of a dissertation. Professional doctorate students are expected to expand and apply existing knowledge and research to their professional field in one of a variety of forms, such as a dissertation, action research, professional project or portfolio.

[255] Book: Academically Adrift: Limited Learning on College Campuses. By Richard Arum and Josipa Roksa. University of Chicago Press, 2011.

Pages 146-147:

During graduate training, future faculty members receive little if any formal instruction on teaching. Doctoral training focuses primarily, and at times exclusively, on research. Although recent decades have seen a proliferation of interest in improving the preparation of graduate students, a recent survey of doctoral students indicated that only 50 percent either had an opportunity to take a teaching assistant’s training course lasting at least one term, or reported that they had an opportunity to learn about teaching in their respective disciplines through workshops and seminars.38 Not surprisingly, one of the main concerns of students in doctoral programs is a lack of systematic opportunities to help them learn how to teach.39

Graduate students are not only entering classrooms without much preparation, but more problematically, they are learning in their graduate programs to deprioritize and perhaps even devalue teaching. This aspect of graduate training, which neither prepares students to teach nor always instills in them a respect for the importance of teaching, is problematic not only on principled grounds but also from a functional standpoint: “Many, if not most [PhDs], will not be tenure-track faculty members,” and only a few will have jobs at research universities.41

[256] Webpage: “Difference between academic and professional doctorate degrees.” University of California Berkeley, Office of Planning and Analysis. Accessed August 10, 2015 at <opa.berkeley.edu>

“Although the work for the professional doctor’s degree may extend the boundaries of knowledge in the field, it is directed primarily towards distinguished practical performance.”

[257] Webpage: “The Difference Between a PhD and Professional Doctorate.” Capella University, Jan 28, 2015. <www.capella.edu>

Some people say that a PhD prepares you to teach, while a professional doctorate is geared more toward a professional career. But the answer to the question is more complex. …

The one primary difference between PhD and professional doctorate programs is the intent and deliverable of the independent research phase. PhD students are expected to create, expand, and contribute to knowledge, research, and theory in their field of study in the form of a dissertation. Professional doctorate students are expected to expand and apply existing knowledge and research to their professional field in one of a variety of forms, such as a dissertation, action research, professional project or portfolio.

[258] Webpage: “Academic Degree and Certificate Definitions.” Arkansas Department of Higher Education, Research and Planning Division. Accessed July 17, 2015 at <www.adhe.edu>

Associate degree (two years or more): a degree granted upon completion of a program that requires at least two, but fewer than four, academic years of postsecondary education. It includes a level of general education necessary for growth as a lifelong learner and is comprised of 60-72 semester credit hours. There are four types of associate degrees: …

Baccalaureate (bachelor’s) degree: a degree granted upon completion of a program that requires four to five years of full-time college work and carries the title of bachelor. …

Master’s degree: a degree which requires at least one, but no more than two, full-time equivalent years of study beyond the bachelor’s degree.

Doctoral degree: a degree awarded upon completion of an educational program at the graduate level which terminates in a doctor’s degree. …

First professional degree: a degree awarded upon completion of a program which meets all of these criteria: a) completion of academic requirements to begin practice in the profession; b) at least two years of college work before entering the program; and c) at least six academic years of college work to complete the degree program, including the prior required college work. First professional degrees are awarded in these fields:

• Chiropractic (DC)

• Dentistry (DDS or DMD)

• Law (LLB or JD)

• Medicine (MD)

• Optometry (OD)

• Osteopathic Medicine (DO)

• Pharmacy (Pharm.D.)

• Podiatry (Pod D or DP)

• Theology (M Div or MHL)

• Veterinary Medicine (DVM)

[259] Webpage: “Back to school statistics.” U.S. Department Of Education, National Center for Education Statistics. Accessed August 15, 2015 at <nces.ed.gov>

In fall 2015, some 20.2 million students are expected to attend American colleges and universities, constituting an increase of about 4.9 million since fall 2000.

Females are expected to account for the majority of college students: about 11.5 million females will attend in fall 2015, compared with 8.7 million males. Also, more students are expected to attend full time than part time (an estimated 12.6 million, compared with about 7.6 million).

About 7.0 million students will attend 2-year institutions and 13.2 million will attend 4-year institutions in fall 2015.

[260] Dataset: “Table 302.10. Recent high school completers and their enrollment in 2-year and 4-year colleges, by sex: 1960 through 2013.” U.S. Department Of Education, National Center for Education Statistics, July 2014. <nces.ed.gov>

[261] Dataset: “Table 302.20. Percentage of recent high school completers enrolled in 2- and 4-year colleges, by race/ethnicity: 1960 through 2013.” U.S. Department Of Education, National Center for Education Statistics, July 2014. <nces.ed.gov>

[262] Dataset: “Table 326.20. Graduation rate from first institution attended within 150 percent of normal time for first-time, full-time degree/certificate-seeking students at 2-year postsecondary institutions, by race/ethnicity, sex, and control of institution: Selected cohort entry years, 2000 through 2010.” U.S. Department Of Education, National Center for Education Statistics, November 2014. <nces.ed.gov>

[263] Dataset: “Table 326.10. Graduation rate from first institution attended for first-time, full-time bachelor’s degree- seeking students at 4-year postsecondary institutions, by race/ethnicity, time to completion, sex, control of institution, and acceptance rate: Selected cohort entry years, 1996 through 2007.” U.S. Department Of Education, National Center for Education Statistics, November 2014. <nces.ed.gov>

[264] Dataset: “Table 326.10. Graduation rate from first institution attended for first-time, full-time bachelor’s degree- seeking students at 4-year postsecondary institutions, by race/ethnicity, time to completion, sex, control of institution, and acceptance rate: Selected cohort entry years, 1996 through 2007.” U.S. Department Of Education, National Center for Education Statistics, November 2014. <nces.ed.gov>

[265] Calculated with the dataset: “PINC-03. Educational Attainment--People 25 Years Old and Over, by Total Money Earnings in 2013, Work Experience in 2013, Age, Race, Hispanic Origin, and Sex; Current Population Survey 2014 Annual Social and Economic Supplement.” U.S. Census Bureau, October 2, 2014. <www.census.gov>

NOTE: An Excel file containing the data and calculations is available upon request.

[266] Report: “Income and Poverty in the United States: 2013.” By Carmen DeNavas-Walt and Bernadette D. Proctor. U.S. Census Bureau, September 2014. <www.census.gov>

Page 4: “The income and poverty estimates shown in this report are based solely on money income before taxes and do not include the value of noncash benefits, such as those provided by the Supplemental Nutrition Assistance Program (SNAP), Medicare, Medicaid, public housing, or employer-provided fringe benefits.”

[267] Paper: “The Falling Time Cost of College: Evidence from Half a Century of Time Use Data.” By Philip Babcock and Mindy Marks. The Review of Economics and Statistics, May 2011. Pages 468-478. <www.mitpressjournals.org>

Page 468:

Using multiple data sets from different time periods, we document declines in academic time investment by full-time college students in the United States between 1961 and 2003. Full-time students allocated 40 hours per week toward class and studying in 1961, whereas by 2003, they were investing about 27 hours per week. Declines were extremely broad based and are not easily accounted for by framing effects, work or major choices, or compositional changes in students or schools. We conclude that there have been substantial changes over time in the quantity or manner of human capital production on college campuses.

[268] Webpage: “American Time Use Survey, Charts by Topic: Students.” U.S. Bureau of Labor Statistics, September 30, 2014. <www.bls.gov>

Time use on an average weekday for full-time university and college students …

Leisure and sports [=] 4.0

Educational activities [=] 3.3 …

Data include individuals, ages 15 to 49, who were enrolled full time at a university or college. Data include non-holiday weekdays and are averages for 2009-13. …

Average hours per weekday spent by high school students in various activities

Educational; Employed [=] 5.8; Not employed [=] 6.6 …

Socializing, relaxing, and leisure; Employed [=] 2.9; Not employed [=] 3.6

Sports, exercise, and recreation; Employed [=] 0.7; Not employed [=] 0.8

Data include persons ages 15 to 19 who were enrolled in high school. Data include non-holiday weekdays during the months of Jan.- May and Sept. - Dec., and are averages for 2009-13.

CALCULATIONS:

High school students who are employed: 2.9 hours socializing, relaxing, and leisure + 0.7 hours sports, exercise, and recreation = 3.6 hours on leisure activities and sports

High school students who are not employed: 3.6 hours socializing, relaxing, and leisure + 0.8 hours sports, exercise, and recreation = 4.4 hours on leisure activities and sports

[269] Book: Academically Adrift: Limited Learning on College Campuses. By Richard Arum and Josipa Roksa. University of Chicago Press, 2011.

Pages 32-33:

Our research was made possible by a collaborative partnership with the Council for Aid to Education … and twenty-four four-year colleges and universities that granted us access to students who were scheduled to take the Collegiate Learning Assessment (CLA) in their first semester (Fall 2005) and at the end of their sophomore year (Spring 2007). … The research in this book is based on longitudinal data of 2,322 students enrolled across a diverse range of campuses. … The schools are dispersed nationally across all four regions of the country. We refer to this multifaceted data as the Determinants of College Learning (DCL) dataset. … On most measures, students in the DCL dataset appeared reasonably representative of traditional-age undergraduates in four-year institutions, and the colleges and universities they attended resembled four-year institutions nationwide. The DCL students’ racial, ethnic, and family backgrounds as well as their English-language backgrounds and high school grades also tracked well with national statistics.

Pages 110-111:

Students in our sample reported spending on average fifteen hours per week attending classes and labs. The rest of the time was divided between studying and myriad other activities. Studying is far from the focus of students’ “free time” (i.e., time outside of class): only twelve hours a week are spent studying. Combining the hours spent studying with the hours spent in classes and labs, students in our sample spent less than one-fifth (16 percent) of their reported time each week on academic pursuits. …

In addition to attending classes and studying, students are spending time working, volunteering, and participating in college clubs, fraternities, and sororities. If we presume that students are sleeping eight hours a night … that leaves 85 hours a week for other activities…. What is this additional time spent on? It seems to be spent mostly on socializing and recreation. A recent study of University of California undergraduates reported that while students spent thirteen hours a week studying, they also spent twelve hours socializing with friends, eleven hours using computers for fun, six hours watching television, six hours exercising, five hours on hobbies, and three hours on other forms of entertainment. Students were thus spending on average 43 hours per week outside the classroom on these activities—that is, over three times more hours than the time they spent studying.

CALCULATIONS:

15 hours attending classes and labs + 12 to 13 hours studying = 27-28 hours

27-28 hours on educational activities / 168 hours per week = 16-17%

43 hours on leisure activities and sports / 168 hours per week = 26%
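For transparency, the same arithmetic can be expressed as a short script; the 168-hour week and the hour totals come directly from the passages quoted above.

hours_per_week = 24 * 7                     # 168 hours in a week
academic_hours = 15 + 12                    # classes and labs + studying (lower bound)
leisure_hours = 43                          # socializing, computers, TV, exercise, hobbies, entertainment
print(round(academic_hours / hours_per_week * 100))   # about 16 percent of the week
print(round(leisure_hours / hours_per_week * 100))    # about 26 percent of the week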

[270] Paper: “Where A Is Ordinary: The Evolution of American College and University Grading, 1940-2009.” By Stuart Rojstaczer and Christopher Healy. Teachers College Record, July 2012. <www.tcrecord.org>

Page 1: “A’s represent 43% of all letter grades, an increase of 28 percentage points since 1960 and 12 percentage points since 1988.”

Page 3:

The characteristics of the 135 institutions for which we have contemporary data are summarized in Table 1. In addition, we have historical data on grading practices from the 1930s onward for 173 institutions (93 of which also have contemporary data). Time series were constructed beginning in 1960 by averaging data from all institutions on an annual basis. For the 1930s, 1940s, and 1950s, data are sparse, so we averaged over 1936 to 1945 (data from 37 schools) and 1946 to 1955 (data from 13 schools) to estimate average grades in 1940 and 1950, respectively. For the early part of the 1960s, there are 11–13 schools represented by our annual averages. By the early part of the 1970s, the data become more plentiful, and 29–30 schools are averaged. Data quantity increases dramatically by the early 2000s with 82–83 schools included in our data set. Because our time series do not include the same schools every year, we smooth our annual estimates with a moving centered three-year average.

Page 4: “Table 1. Characteristics of Schools With Contemporary Data Including Grading Averages … Totals … %A [=] 43.0 … %B [=] 33.8 … %C [=] 14.9 … %D [=] 4.1 … %F [=] 4.2”

Page 10:

Our sample has a student population of 1.5 million, far greater than any other previous detailed study on national grading patterns for four-year colleges and universities. It should be noted, however, that although we randomly found and sought data, in comparison with national student populations, our sample underrepresents private schools (which grade higher than national averages) and overrepresents Southern schools (which grade lower than national averages) … The average SAT score of our sampled student body weighted by student population (math plus verbal) is about 40 points higher than that seen nationally for 2008 in a survey of 2,343 four-year institutions….

The combined effect of undersampling private schools, oversampling Southern schools, and (probably) the slightly higher average SAT scores of our sampled students relative to national averages suggests that our weighted average of 42% A’s is a slightly conservative one.

[271] Article: “Going Naked.” By Richard H. Hersh. Peer Review, Spring 2007. <www.aacu.org>

[T]he Collegiate Learning Assessment project (CLA) began as an approach to assessing core outcomes espoused by all of higher education--critical thinking, analytical reasoning, problem solving, and writing. (Fig. 1 provides a small sample of questions used in developing our scoring rubrics.) These outcomes cannot be taught sufficiently in any one course or major but rather are the collective and cumulative result of what takes place or does not take place over the four to six years of undergraduate education in and out of the classroom.

The CLA is an institutional measure of value-added rather than an assessment of an individual student or course. It has now been used by more than two-hundred institutions and over 80,000 students in cross-sectional and longitudinal studies to signal where an institution stands with regard to its own standards and to other similar institutions….

[272] Book: Beyond the Bubble Test: How Performance Assessments Support 21st Century Learning. Edited by Linda Darling-Hammond, Frank Adamson. Jossey-Bass (an imprint of John Wiley & Sons), 2014.

The CLA was developed to measure undergraduates’ learning—in particular, their ability to think critically, reason analytically, solve problems, and communicate clearly. …

Both the CLA and its high school counterpart, the CWRA, differ substantially from most other standardized tests, which are based on an empiricist philosophy and a psychometric/behavioral tradition. …

The conceptual underpinnings of the CLA and CWRA are embodied in what has been called a criterion sampling approach to measurement. This approach assumes that the whole is greater than the sum of its parts and that complex tasks require an integration of abilities that cannot be captured if divided into and measured as individual components. The criterion sampling notion is straightforward: if you want to know what a person knows and can do, sample tasks from the domain in which she is to act, observe performance, and infer competence and learning. For example, if you want to know whether a person not only knows the laws that govern driving a car but can also actually drive a car, do not just give her a multiple-choice test. Also administer a driving test with a sample of tasks from the general driving domain (starting the car, pulling into traffic, turning right and left in traffic, backing up, parking). On the basis of this sample of performance, it is possible to draw more general, valid inferences about driving performance.

The CLA/CWRA follows the criterion-sampling approach by defining a domain of real-world tasks that are holistic and drawn from life situations.

[273] Book: Academically Adrift: Limited Learning on College Campuses. By Richard Arum and Josipa Roksa. University of Chicago Press, 2011.

Pages 32-33: “[T]he Council for Aid to Education … brought together leading national psychometricians at the end of the twentieth century to develop a state-of-the-art assessment instrument to measure undergraduate learning … the Collegiate Learning Assessment….”

Pages 35-36:

The Council for Aid to Education has also published a detailed scoring rubric on the criteria that it defines as critical thinking, analytical reasoning, and problem solving—including how well the student assesses the quality and relevance of evidence, analyzes and synthesizes data and information, draws conclusions from his or her analysis, and considers alternative perspectives. In addition, the scoring rubric with respect to written communication requires that the presentation is clear and concise, the structure of the argument is well-developed and effective, the work is persuasive, the written mechanics are proper and correct, and reader interest is maintained.71

71. Council for Aid to Education, Collegiate Learning Assessment Common Scoring Rubric (New York: Council for Aid to Education, 2008).

[274] Book: Academically Adrift: Limited Learning on College Campuses. By Richard Arum and Josipa Roksa. University of Chicago Press, 2011.

Pages 32-33:

Our research was made possible by a collaborative partnership with the Council for Aid to Education … and twenty-four four-year colleges and universities that granted us access to students who were scheduled to take the Collegiate Learning Assessment (CLA) in their first semester (Fall 2005) and at the end of their sophomore year (Spring 2007). … The schools are dispersed nationally across all four regions of the country. We refer to this multifaceted data as the Determinants of College Learning (DCL) dataset. … On most measures, students in the DCL dataset appeared reasonably representative of traditional-age undergraduates in four-year institutions, and the colleges and universities they attended resembled four-year institutions nationwide. The DCL students’ racial, ethnic, and family backgrounds as well as their English-language backgrounds and high school grades also tracked well with national statistics.

Page 159:

The overall retention rate from freshman to sophomore year across the twenty-four institutions included in the DCL dataset was slightly under 50 percent, although this varied notably across institutions and groups of students. If bias is introduced into our analyses by processes of selective attrition, however, it is likely in a direction that leads us to overestimate the overall rate of academic growth that is occurring in these institutions—that is, the dearth of learning we have identified would likely be even more pronounced if we had been able to track down and continue assessing the students who dropped out of the study and / or the institutions they originally attended.

[275] Book: Aspiring Adults Adrift: Tentative Transitions of College Graduates. By Richard Arum and Josipa Roksa. University of Chicago Press, 2014.

Page 37:

Over the full four years of college, students gained an average of 0.47 standard deviations on the CLA.41 Thus, after four years of college, an average-scoring student in the fall of his or her freshman year would score at a level only eighteen percentile points higher in the spring of his or her senior year. Stated differently, freshmen who entered higher education at the 50th percentile would reach a level equivalent to the 68th percentile of the incoming freshman class by the end of their senior year. Since standard deviations are not the most intuitive way of understanding learning gains, it is useful to consider that if the CLA were rescaled to a one-hundred-point scale, approximately one-third of students would not improve more than one point over four years of college. …

41. A recent replication using data from the Wabash National Study of Liberal Arts Education, relying on a different sample and a multiple-choice measure of critical thinking (the Collegiate Assessment of Academic Proficiency, or CAAP), produced a virtually identical estimate; students in the Wabash Study gained 0.44 standard deviations on the CAAP measure of critical thinking over four years of college. See Ernest T. Pascarella et al., “How Robust Are the Findings of Academically Adrift?” Change: The Magazine of Higher Learning, May/ June 2011: 20– 24.
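The percentile figures in the passage above follow from a standard normal-distribution conversion of the 0.47 standard-deviation gain. The sketch below reproduces that conversion under the assumption that CLA scores are roughly normally distributed, an assumption made here for illustration rather than stated in the book.

from statistics import NormalDist

gain_in_sd = 0.47                          # average CLA gain over four years of college
percentile = NormalDist().cdf(gain_in_sd)  # share of the freshman distribution below that gain
print(round(percentile * 100))             # about 68: a median freshman would rank near the 68th percentile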

Page 42:

The results indicate that students attending high-selectivity institutions improve on the CLA substantially more than those attending low-selectivity institutions, even when models are adjusted for students’ background and academic characteristics. This association between institutional selectivity and CLA performance is consistent with findings for persistence and graduation in other research. A range of factors, from greater expenditures to unique peer environments at high-selectivity schools, may help to account for these patterns.

Page 44:

These patterns indicate that the issues we have identified, namely weak academic engagement and limited learning, are widespread. They are not concentrated at a few institutions, or even at a specific type of institution. While students in more selective institutions gain more on the CLA, their gains are still modest, and while they spend more time studying alone, their average is still only slightly over ten hours per week.

Pages 137-139:

Analyses presented in this book build on the Determinants of College Learning (DCL) dataset…. The CAE initiated the Collegiate Learning Assessment (CLA) Longitudinal Project in the fall of 2005, administering a short survey and the CLA instrument to a sample of freshmen at four-year institutions. The same students were contacted for the sophomore-year follow-up in the spring of 2007 and the senior-year follow-up in the spring of 2009. …

The senior-year sample included 1,666 respondents with valid CLA scores. … Characteristics of the senior-year sample thus correspond reasonably well with the characteristics of students from a nationally representative sample. …

While the CLA as a whole is considered by some as state of the art, the performance task component of the test is the best developed and most sophisticated part of the assessment instrument; it is the component that the Organisation for Economic Cooperation and Development adopted for its cross-national assessment of higher education students’ generic skill strand in the Assessment of Higher Education Learning Outcomes (AHELO) project.

We use students’ scores on the performance task of the CLA as an indicator of their critical thinking, complex reasoning, and writing skills. In addition to being the most developed, this performance task was the most uniformly administered component across time and institutions.

[276] Report: “The Literacy of America’s College Students.” By Justin D. Baer, Andrea L. Cook, and Stéphane Baldi. American Institutes for Research, January 2006. <www.air.org>

Page 4:

The NSACS, sponsored by The Pew Charitable Trusts, collected data from a sample of 1,827 graduating students at 80 randomly selected 2-year and 4-year colleges and universities (68 public and 12 private) from across the United States. The NSACS specifically targeted college and university students nearing the end of their degree program, thus providing a broader and more comprehensive picture of students’ fundamental literacy abilities than ever before.

The NSACS used the same assessment instrument as the 2003 National Assessment of Adult Literacy (NAAL), a nationally representative survey of the English-language literacy abilities of U.S. adults 16 and older residing in households or prisons. The NAAL was developed and administered by the U.S. Department of Education’s National Center for Education Statistics (NCES). Literacy levels were categorized as Below Basic, Basic, Intermediate, or Proficient on the basis of the abilities of participants.

Because literacy is not a single skill used in the same manner for all types of printed and written information, the NSACS measured literacy along three dimensions: prose literacy, document literacy, and quantitative literacy. These three literacy domains were designed to capture an ordered set of information-processing skills and strategies that adults use to accomplish a wide range of literacy tasks and make it possible to profile the various types and levels of literacy among different subgroups in society.

[277] Report: “The Literacy of America’s College Students.” By Justin D. Baer, Andrea L. Cook, and Stéphane Baldi. American Institutes for Research, January 2006. <www.air.org>

Page 4: “Prose Literacy: The knowledge and skills needed to perform prose tasks, that is, to search, comprehend, and use information from continuous texts. Prose examples include editorials, news stories, brochures, and instructional materials.”

Page 21: “Table 2.2. Percentage of U.S. adults in college and the nation in each prose literacy level, by selected characteristics”

[278] Report: “The Literacy of America’s College Students.” By Justin D. Baer, Andrea L. Cook, and Stéphane Baldi. American Institutes for Research, January 2006. <www.air.org>

Page 4: “Document Literacy: The knowledge and skills needed to perform document tasks, that is, to search, comprehend, and use information from noncontinuous texts in various formats. Document examples include job applications, payroll forms, transportation schedules, maps, tables, and drug or food labels.”

Page 22: “Table 2.3. Percentage of U.S. adults in college and the nation in each document literacy level, by selected characteristics”

[279] Report: “The Literacy of America’s College Students.” By Justin D. Baer, Andrea L. Cook, and Stéphane Baldi. American Institutes for Research, January 2006. <www.air.org>

Page 4: “Quantitative Literacy: The knowledge and skills required to perform quantitative literacy tasks, that is, to identify and perform computations, either alone or sequentially, using numbers embedded in printed materials. Quantitative examples include balancing a checkbook, figuring out a tip, completing an order form, or determining the amount of interest on a loan from an advertisement.”

Page 23: “Table 2.4. Percentage of U.S. adults in college and the nation in each quantitative literacy level, by selected characteristics”

[280] Report: “The Literacy of America’s College Students.” By Justin D. Baer, Andrea L. Cook, and Stéphane Baldi. American Institutes for Research, January 2006. <www.air.org>

Page 5: “The literacy of students in 4-year public institutions was comparable to the literacy of students in 4-year private institutions.”

Page 30: “Prose literacy was higher for students in selective 4-year colleges, though differences between selective and nonselective 4-year colleges for document and quantitative literacy could not be determined because of the sample size.”

Page 34:

College students come from a variety of economic backgrounds, with some students supporting themselves and others relying on their families to pay for tuition and other necessities.1 Despite variations in income, most differences in the literacy of students across income groups were not significant (Table 4.1).

1 Students were asked whether they were financially independent or whether they were financially dependent on their parents. Depending on their answer, they were asked to report either their parents’ household income or their personal income. The financial information was combined to create a single measure of personal or parents’ household income.

[281] Report: “The Literacy of America’s College Students.” By Justin D. Baer, Andrea L. Cook, and Stéphane Baldi. American Institutes for Research, January 2006. <www.air.org>

Page 35: “Table 4.1. Average prose, document, and quantitative literacy scores for U.S. adults in 2- and 4-year colleges, by income.”

[282] Report: “How Should Colleges Assess And Improve Student Learning? Employers’ Views On The Accountability Challenge.” By Peter D. Hart Research Associates for the Association of American Colleges and Universities, January 9, 2008. <www.aacu.org>

Page 1: “From November 8 to December 12, 2007, Peter D. Hart Research Associates, Inc., interviewed 301 employers whose companies have at least 25 employees and report that 25% or more of their new hires hold at least a bachelor’s degree from a four-year college. … The margin of error for this survey is ±5.7 percentage points.”

Page 3:

Employers believe that college graduates are reasonably well prepared in a variety of areas, but in no area do employers give them exceptionally strong marks. When asked to evaluate recent college graduates’ preparedness in 12 areas, employers give them the highest marks for teamwork, ethical judgment, and intercultural skills, and the lowest scores for global knowledge, self-direction, and writing. …

In none of the 12 areas tested does a majority of employers give college graduates a high rating (or “8,” “9,” or “10”) for their level of preparedness. …

Employers Evaluate College Graduates’ Preparedness In Key Areas

[283] Calculated with the dataset: “PINC-03. Educational Attainment--People 25 Years Old and Over, by Total Money Earnings in 2013, Work Experience in 2013, Age, Race, Hispanic Origin, and Sex; Current Population Survey 2014 Annual Social and Economic Supplement.” U.S. Census Bureau, October 2, 2014. <www.census.gov>

NOTE: An Excel file containing the data and calculations is available upon request.

[284] Report: “Income and Poverty in the United States: 2013.” By Carmen DeNavas-Walt and Bernadette D. Proctor. U.S. Census Bureau, September 2014. <www.census.gov>

Page 4: “The income and poverty estimates shown in this report are based solely on money income before taxes and do not include the value of noncash benefits, such as those provided by the Supplemental Nutrition Assistance Program (SNAP), Medicare, Medicaid, public housing, or employer-provided fringe benefits.”

[285] Calculated with the dataset: “PINC-03. Educational Attainment--People 25 Years Old and Over, by Total Money Earnings in 2013, Work Experience in 2013, Age, Race, Hispanic Origin, and Sex; Current Population Survey 2014 Annual Social and Economic Supplement.” U.S. Census Bureau, October 2, 2014. <www.census.gov>

NOTE: An Excel file containing the data and calculations is available upon request.

[286] Webpage: “Academic Degree and Certificate Definitions.” Arkansas Department of Higher Education, Research and Planning Division. Accessed July 17, 2015 at <www.adhe.edu>

Associate degree (two years or more): a degree granted upon completion of a program that requires at least two, but fewer than four, academic years of postsecondary education. It includes a level of general education necessary for growth as a lifelong learner and is comprised of 60-72 semester credit hours. There are four types of associate degrees: …

Baccalaureate (bachelor’s) degree: a degree granted upon completion of a program that requires four to five years of full-time college work and carries the title of bachelor. …

Master’s degree: a degree which requires at least one, but no more than two, full-time equivalent years of study beyond the bachelor’s degree.

Doctoral degree: a degree awarded upon completion of an educational program at the graduate level which terminates in a doctor’s degree. …

First professional degree: a degree awarded upon completion of a program which meets all of these criteria: a) completion of academic requirements to begin practice in the profession; b) at least two years of college work before entering the program; and c) at least six academic years of college work to complete the degree program, including the prior required college work. First professional degrees are awarded in these fields:

• Chiropractic (DC)

• Dentistry (DDS or DMD)

• Law (LLB or JD)

• Medicine (MD)

• Optometry (OD)

• Osteopathic Medicine (DO)

• Pharmacy (Pharm.D.)

• Podiatry (Pod D or DP)

• Theology (M Div or MHL)

• Veterinary Medicine (DVM)

[287] Calculated with the dataset: “PINC-03. Educational Attainment--People 25 Years Old and Over, by Total Money Earnings in 2013, Work Experience in 2013, Age, Race, Hispanic Origin, and Sex; Current Population Survey 2014 Annual Social and Economic Supplement.” U.S. Census Bureau, October 2, 2014. <www.census.gov>

NOTE: An Excel file containing the data and calculations is available upon request.

[288] Webpage: “Academic Degree and Certificate Definitions.” Arkansas Department of Higher Education, Research and Planning Division. Accessed July 17, 2015 at <www.adhe.edu>

Associate degree (two years or more): a degree granted upon completion of a program that requires at least two, but fewer than four, academic years of postsecondary education. It includes a level of general education necessary for growth as a lifelong learner and is comprised of 60-72 semester credit hours. There are four types of associate degrees: …

Baccalaureate (bachelor’s) degree: a degree granted upon completion of a program that requires four to five years of full-time college work and carries the title of bachelor. …

Master’s degree: a degree which requires at least one, but no more than two, full-time equivalent years of study beyond the bachelor’s degree.

Doctoral degree: a degree awarded upon completion of an educational program at the graduate level which terminates in a doctor’s degree. …

First professional degree: a degree awarded upon completion of a program which meets all of these criteria: a) completion of academic requirements to begin practice in the profession; b) at least two years of college work before entering the program; and c) at least six academic years of college work to complete the degree program, including the prior required college work. First professional degrees are awarded in these fields:

• Chiropractic (DC)

• Dentistry (DDS or DMD)

• Law (LLB or JD)

• Medicine (MD)

• Optometry (OD)

• Osteopathic Medicine (DO)

• Pharmacy (Pharm.D.)

• Podiatry (Pod D or DP)

• Theology (M Div or MHL)

• Veterinary Medicine (DVM)

[289] Dataset: “PINC-03. Educational Attainment--People 25 Years Old and Over, by Total Money Earnings in 2013, Work Experience in 2013, Age, Race, Hispanic Origin, and Sex; Current Population Survey 2014 Annual Social and Economic Supplement.” U.S. Census Bureau, October 2, 2014. <www.census.gov>

[290] Dataset: “PINC-03. Educational Attainment--People 25 Years Old and Over, by Total Money Earnings in 2013, Work Experience in 2013, Age, Race, Hispanic Origin, and Sex; Current Population Survey 2014 Annual Social and Economic Supplement.” U.S. Census Bureau, October 2, 2014. <www.census.gov>

[291] Webpage: “Academic Degree and Certificate Definitions.” Arkansas Department of Higher Education, Research and Planning Division. Accessed July 17, 2015 at <www.adhe.edu>

Associate degree (two years or more): a degree granted upon completion of a program that requires at least two, but fewer than four, academic years of postsecondary education. It includes a level of general education necessary for growth as a lifelong learner and is comprised of 60-72 semester credit hours. There are four types of associate degrees: …

Baccalaureate (bachelor’s) degree: a degree granted upon completion of a program that requires four to five years of full-time college work and carries the title of bachelor. …

Master’s degree: a degree which requires at least one, but no more than two, full-time equivalent years of study beyond the bachelor’s degree.

Doctoral degree: a degree awarded upon completion of an educational program at the graduate level which terminates in a doctor’s degree. …

First professional degree: a degree awarded upon completion of a program which meets all of these criteria: a) completion of academic requirements to begin practice in the profession; b) at least two years of college work before entering the program; and c) at least six academic years of college work to complete the degree program, including the prior required college work. First professional degrees are awarded in these fields:

• Chiropractic (DC)

• Dentistry (DDS or DMD)

• Law (LLB or JD)

• Medicine (MD)

• Optometry (OD)

• Osteopathic Medicine (DO)

• Pharmacy (Pharm.D.)

• Podiatry (Pod D or DP)

• Theology (M Div or MHL)

• Veterinary Medicine (DVM)

[292] The next 3 footnotes document that:

  • private-sector economic output is equal to personal consumption expenditures (PCE) + gross private domestic investment (GPDI) + net exports of goods and services.
  • PCE is the “primary measure of consumer spending on goods and services” by private individuals and nonprofit organizations.
  • GPDI is a measure of private spending on “structures, equipment, and intellectual property products.”

Since education is not a service that is typically imported or exported, a valid approximation of private spending on education can be arrived at by summing PCE and GPDI. The fourth footnote below details the data used in this calculation.
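As a rough illustration of this approximation, the following minimal Python sketch simply sums the two components. The dollar values shown are hypothetical placeholders, not actual BEA figures; the real inputs come from the datasets cited in footnote [296] below.

# Minimal sketch of the approximation described above.
# NOTE: the values below are hypothetical placeholders, not BEA data;
# the actual figures come from the BEA tables cited in footnote [296].
pce_education = 0.0    # personal consumption expenditures on education (billions of dollars)
gpdi_education = 0.0   # gross private domestic investment in education (billions of dollars)

# Education is rarely imported or exported, so net exports are treated as negligible:
private_education_spending = pce_education + gpdi_education
print(f"Approximate private spending on education: ${private_education_spending:,.1f} billion")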

[293] Report: “Fiscal Year 2013 Analytical Perspectives, Budget Of The U.S. Government.” White House Office of Management and Budget, February 12, 2012. <www.gpo.gov>

Page 471:

The main purpose of the NIPAs [national income and product accounts published by the U.S. Bureau of Economic Analysis] is to measure the Nation’s total production of goods and services, known as gross domestic product (GDP), and the incomes generated in its production. GDP excludes intermediate production to avoid double counting. Government consumption expenditures along with government gross investment — State and local as well as Federal — are included in GDP as part of final output, together with personal consumption expenditures, gross private domestic investment, and net exports of goods and services (exports minus imports).

[294] Report: “Concepts and Methods of the U.S. National Income and Product Accounts (Chapters 1–11 and 13).” U.S. Bureau of Economic Analysis, November 2014. <www.bea.gov>

Page 5-1:

Personal consumption expenditures (PCE) is the primary measure of consumer spending on goods and services in the U.S. economy.1 It accounts for about two-thirds of domestic final spending, and thus it is the primary engine that drives future economic growth. PCE shows how much of the income earned by households is being spent on current consumption as opposed to how much is being saved for future consumption.

PCE also provides a comprehensive measure of types of goods and services that are purchased by households. Thus, for example, it shows the portion of spending that is accounted for by discretionary items, such as motor vehicles, or the adjustments that consumers make to changes in prices, such as a sharp run-up in gasoline prices.2

 

Page 5-2:

PCE measures the goods and services purchased by “persons”—that is, by households and by nonprofit institutions serving households (NPISHs)—who are resident in the United States. Persons resident in the United States are those who are physically located in the United States and who have resided, or expect to reside, in this country for 1 year or more. PCE also includes purchases by U.S. government civilian and military personnel stationed abroad, regardless of the duration of their assignments, and by U.S. residents who are traveling or working abroad for 1 year or less.

Page 5-64:

Nonprofit institutions serving households

In the NIPAs, nonprofit institutions serving households (NPISHs), which have tax-exempt status, are treated as part of the personal sector of the economy. Because NPISHs produce services that are not generally sold at market prices, the value of these services is measured as the costs incurred in producing them.

In PCE, the value of a household purchase of a service that is provided by a NPISH consists of the price paid by the household or on behalf of the household for that service plus the value added by the NPISH that is not included in the price. For example, the value of the educational services provided to a student by a university consists of the tuition fee paid by the household to the university and of the additional services that are funded by sources other than tuition fees (such as by the returns to an endowment fund).

[295] Report: “Measuring the Economy: A Primer on GDP and the National Income and Product Accounts.” U.S. Bureau of Economic Analysis, October 2014. <www.bea.gov>

Page 8: “Gross private domestic investment consists of purchases of fixed assets (structures, equipment, and intellectual property products) by private businesses that contribute to production and have a useful life of more than one year, of purchases of homes by households, and of private business investment in inventories.”

[296] Calculated with data from:

a) Dataset: “Table 2.3.5U. Personal Consumption Expenditures by Major Type of Product and by Major Function.” U.S. Bureau of Economic Analysis. Last revised June 1, 2015. <www.bea.gov>

b) Dataset: “Table 1.1.5. Gross Domestic Product.” U.S. Bureau of Economic Analysis. Last revised January 30, 2015. <www.bea.gov>

NOTE: An Excel file containing the data and calculations is available upon request.

[297] Report: “GAO Update on the Number of Prekindergarten Care and Education Programs.” U.S. Government Accountability Office, June 2, 2005. <www.gao.gov>

This letter responds to your request concerning our April 2000 report, Early Education and Care: Overlap Indicates Need to Assess Crosscutting Programs (GAO/HEHS-00-78). Given the historical concern regarding the potential for program overlap among federal early childhood education and care programs, you asked that we update the list of programs providing or supporting education or care for children under the age of 5. The 2000 list included 69 programs, which were administered by 9 different agencies.[Footnote 1]

To respond to your request, we replicated the keyword search from our 2000 report using the Catalog of Federal Domestic Assistance (CFDA). After assessing the reliability of the CFDA, we determined it was suitable for our purposes. We obtained explanations regarding programs that were deleted from the CFDA. Our search yielded 254 programs, and we reviewed their descriptions to determine if they met three criteria: (1) directly funded or supported education and/or child care, (2) provided these services to children under age 5, and (3) delivered services in an educational or child care setting. Based on this review, we selected over 70 programs as potentially meeting these criteria and provided agencies with the opportunity to comment on this assessment.

Generally, we found that the landscape of federal programs offered remained largely the same as in 2000. We identified 69 programs as meeting our criteria and found that 10 agencies administer these programs.[Footnote 2] While the total number of programs remained the same, there were, however, some changes in the makeup of the list. Specifically, 16 programs were removed[Footnote 3] from the list and 16 were added.[Footnote 4] The Department of Education, the agency responsible for the most programs on the list, had the biggest change, dropping 11 programs from the original list and adding 5 programs.

For 13 programs in our 2000 report, agencies questioned in their comments whether the program should be included based on our criteria. For purposes of our report, we have interpreted our criteria broadly. Based on our review of the relevant legal and program documents and discussions with agency officials, we found that our criteria warrants including all but 2 of these 13 programs as providing or supporting care and education programs for children under 5 in an educational or child care setting.

There are several caveats associated with this current work. This analysis does not provide information on the types of services provided by these programs, their budget outlays, or the number of children they serve. In addition, it is important to note that providing child care or education is the sole focus for some programs while for others it is only an allowable service. For these programs, some agencies noted that the utilization of these services was minimal or nonexistent. Finally, we did not examine the role that tax expenditures such as the Child and Dependent Care Tax Credit or Employer Provided Child Care Tax Credit play in supporting child care or early education.

[298] Report: “Early Child Care and Education: HHS and Education Are Taking Steps to Improve Workforce Data and Enhance Worker Quality.” U.S. Government Accountability Office, February 2012. <www.gao.gov>

Page 1:

The federal government helps to improve access to high-quality ECCE [early child care and education] programs by subsidizing program costs. The two largest federal efforts are the Head Start program, funded at approximately $7.2 billion, and the Child Care and Development Fund (CCDF), funded at approximately $5.0 billion in fiscal year 2010. These funding sources, as well as billions of dollars in other ECCE federal funding, are overseen by the Departments of Health and Human Services (HHS) and Education (Education) and the relevant state agencies to which these monies are allocated.

[299] Report: “Head Start Program Facts, Fiscal Year 2014.” U.S. Department of Health & Human Services, Office of Head Start, March 10, 2015, Revised 4/17/15. <eclkc.ohs.acf.hhs.gov>

In fiscal year (FY) 2013:

• Head Start programs served 932,164 children and their families

• Early Head Start programs served 150,100 children and 6,391 pregnant women and their families

• Migrant and Seasonal Head Start (MSHS), which serves children from birth to age 5, served an additional 31,907 children

CALCULATION: 932,164 + 150,100 = 1,082,264 children

[300] Report: “Head Start Program Facts, Fiscal Year 2014.” U.S. Department of Health & Human Services, Office of Head Start, March 10, 2015, Revised 4/17/15. <eclkc.ohs.acf.hhs.gov>

Page 1:

Throughout this fact sheet, unless otherwise specified, the term “Head Start” refers to the Head Start program as a whole, including: Head Start services to preschool children; Early Head Start (EHS) services to infants, toddlers, and pregnant women; services to families by American Indian and Alaskan Native (AIAN) programs; and services to families by Migrant and Seasonal Head Start (MSHS) programs.

The term “funded enrollment” refers to the number of children and pregnant women that are supported by federal Head Start funds in a program at any one time during the program year; these are sometimes referred to as enrollment slots. Funded enrollment numbers include enrollment slots funded by state or other funds when used by grantees as required nonfederal match. States may provide additional funding to local Head Start programs, which is not included in federal Head Start reporting.

The term “cumulative enrollment” refers to the actual number of children and pregnant women that Head Start programs serve throughout the entire program year, inclusive of enrollees who left during the program year and the enrollees who filled those empty places. Due to turnover, more children and families may receive Head Start services cumulatively throughout the program year, all of whom are reported in the Program Information Report (PIR), than indicated by the funded enrollment numbers.

Pages 10-11: “Head Start Enrollment and Appropriations History … 2014 … Federal Funding [=] $8,598,095,000 … Funded Enrollment [=] 927,275”

CALCULATION: $8,598,095,000 / 927,275 = $9,272 federal funding per enrollee
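The division above can be reproduced directly from the two figures quoted from the fact sheet; a minimal Python sketch:

# Federal Head Start funding per funded enrollment slot, FY 2014,
# using the figures quoted above from the Office of Head Start fact sheet.
federal_funding = 8_598_095_000   # dollars
funded_enrollment = 927_275       # children and pregnant women (funded slots)

funding_per_enrollee = federal_funding / funded_enrollment
print(round(funding_per_enrollee))   # prints 9272, i.e., about $9,272 per enrollee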

[301] Report: “Head Start: Undercover Testing Finds Fraud and Abuse at Selected Head Start Centers.” U.S. Government Accountability Office, May 18, 2010. <www.gao.gov>

Summary:

The Head Start program, overseen by the Department of Health and Human Services and administered by the Office of Head Start, provides child development services primarily to low-income families and their children. Federal law allows up to 10 percent of enrolled families to have incomes above 130 percent of the poverty line--GAO refers to them as “over-income.” Families with incomes below 130 percent of the poverty line, or who meet certain other criteria, are referred to as “under-income”. Nearly 1 million children a year participate in Head Start, and the American Recovery and Reinvestment Act provided an additional $2.1 billion in funding.

GAO received hotline tips alleging fraud and abuse by grantees. In response, GAO investigated the validity of the allegations, conducted undercover tests to determine if other centers were committing fraud, and documented instances where potentially eligible children were put on Head Start wait lists. The investigation of allegations is ongoing.

To perform this work, GAO interviewed grantees and a number of informants and reviewed documentation. GAO used fictitious identities and bogus documents for proactive testing of Head Start centers. GAO also interviewed families on wait lists. Results of undercover tests and family interviews cannot be projected to the entire Head Start program. In a corrective action briefing, agency officials agreed to address identified weaknesses.

GAO received allegations of fraud and abuse involving two Head Start nonprofit grantees in the Midwest and Texas. Allegations include manipulating recorded income to make over-income applicants appear under-income, encouraging families to report that they were homeless when they were not, enrolling more than 10 percent of over-income children, and counting children as enrolled in more than one center at a time. GAO confirmed that one grantee operated several centers with more than 10 percent over-income students, and the other grantee manipulated enrollment data to over-report the number of children enrolled. GAO is still investigating the other allegations reported.

Realizing that these fraud schemes could be perpetrated at other Head Start programs, GAO attempted to register fictitious children as part of 15 undercover test scenarios at centers in six states and the District of Columbia. In 8 instances staff at these centers fraudulently misrepresented information, including disregarding part of the families’ income to register over-income children into under-income slots. The undercover tests revealed that 7 Head Start employees lied about applicants’ employment status or misrepresented their earnings. This leaves Head Start at risk that over-income children may be enrolled while legitimate under-income children are put on wait lists. At no point during our registrations was information submitted by GAO’s fictitious parents verified, leaving the program at risk that dishonest persons could falsify earnings statements and other documents in order to qualify. In 7 instances centers did not manipulate information.

Page 10:

Case: 12; State: California; Undercover scenario: Income exceeded poverty guidelines.

• The income for the family of three (mother, father, and child) was $12,000 more than allowed for the family to be considered income-eligible.

• A Head Start associate denied this application because the family was over-income.

• The Head Start associate explained that families often lie about being separated or divorced in order to reduce their income and that Head Start is not strict about checking whether that is true. …

We also identified a key vulnerability during our investigation that could allow over-income children to be enrolled in other Head Start centers: income documentation for enrollees is not required to be maintained by grantees. According to HHS guidance, Head Start center employees must sign a statement attesting that the applicant child is eligible and identifying which income documents they examined, such as W-2s or pay stubs; however, they do not have to maintain copies of them. We discovered that the lack of documentation made it virtually impossible to determine whether only under-income children were enrolled in spots reserved for under-income children.

[302] Calculated with the dataset: “Table 202.10. Enrollment of 3-, 4-, and 5-year-old children in preprimary programs, by age of child, level of program, control of program, and attendance status: Selected years, 1970 through 2013.” U.S. Department of Education, National Center for Education Statistics, August 2014. <nces.ed.gov>

“Preprimary programs include kindergarten and preschool (or nursery school) programs. ‘Preschool,’ which was referred to as ‘nursery school’ in previous versions of this table, is defined as a group or class that is organized to provide educational experiences for children during the year or years preceding kindergarten.”

NOTE: An Excel file containing the data and calculations is available upon request.

[303] Calculated with the dataset: “Table 202.10. Enrollment of 3-, 4-, and 5-year-old children in preprimary programs, by age of child, level of program, control of program, and attendance status: Selected years, 1970 through 2013.” U.S. Department of Education, National Center for Education Statistics, August 2014. <nces.ed.gov>

“Preprimary programs include kindergarten and preschool (or nursery school) programs. ‘Preschool,’ which was referred to as ‘nursery school’ in previous versions of this table, is defined as a group or class that is organized to provide educational experiences for children during the year or years preceding kindergarten.”

NOTE: An Excel file containing the data and calculations is available upon request.

[304] Webpage: “Fact Sheet: President Obama’s Plan for Early Education for all Americans.” White House, Office of the Press Secretary, February 13, 2013. <www.whitehouse.gov>

In his State of the Union address, President Obama called on Congress to expand access to high-quality preschool to every child in America. As part of that effort, the President will propose a series of new investments that will establish a continuum of high-quality early learning for a child – beginning at birth and continuing to age 5. …

High-quality early childhood education provides the foundation for all children’s success in school and helps to reduce achievement gaps. Despite the individual and economic benefits of early education, our nation has lagged in its commitment to ensuring the provision of high quality public preschool in our children’s earliest years. …

Preschool for All

• The President’s proposal will improve quality and expand access to preschool, through a cost sharing partnership with all 50 states, to extend federal funds to expand high-quality public preschool to reach all low- and moderate-income four-year olds from families at or below 200% of poverty. … The proposal would include an incentive for states to broaden participation in their public preschool program for additional middle-class families, which states may choose to reach and serve in a variety of ways, such as a sliding-scale arrangement. …

• The proposal also encourages states to expand the availability of full-day kindergarten. …

• The President will also launch a new Early Head Start-Child Care Partnership program, to support states and communities that expand the availability of Early Head Start and child care providers that can meet the highest standards of quality for infants and toddlers, serving children from birth through age 3. Funds will be awarded through Early Head Start on a competitive basis to enhance and support early learning settings; provide new, full-day, comprehensive services that meet the needs of working families; and prepare children for the transition into preschool.

• The President is proposing to expand the Administration’s evidence-based home visiting initiative, through which states are implementing voluntary programs that provide nurses, social workers, and other professionals to meet with at-risk families in their homes and connect them to assistance that impacts a child’s health, development, and ability to learn.

[305] Webpage: “Summary: S.1380 - Strong Start for America’s Children Act of 2015.” U.S. Congress. Accessed August 8, 2015 at <www.congress.gov>

This bill directs the Department of Education (ED) to allot matching grants to states and, through them, subgrants to local educational agencies, childhood education program providers, or consortia of those entities to implement high-quality prekindergarten programs for children from low-income families.

Grants are allotted to states based on each state’s proportion of children who are age four and who are from families with incomes at or below 200% of the poverty level.

“High-quality prekindergarten programs” are those that serve children three or four years of age and meet criteria concerning: class size; learning environments; teacher qualifications, salaries, and professional development; program monitoring; and accessibility to comprehensive health and support services.

States may apply to use up to 15% of their grant for subgrants to high-quality early childhood education and care programs for infants and toddlers whose family income is at or below 200% of the poverty level.

ED and the Department of Health and Human Services (HHS) shall develop a process to: (1) provide Head Start program services to children younger than age four in states or regions that already provide four-year-olds whose family income is at or below 200% of the poverty level with sustained access to high-quality prekindergarten programs, or (2) convert programs to serve infants and toddlers.

ED shall award competitive matching grants to states to increase their capacity to offer high-quality prekindergarten programs. States must provide assurances that they will use their grant to become eligible, within three years of receiving the grant, for this Act’s grants for high-quality prekindergarten programs. …

[306] Webpage: “Cosponsors: S.1380 - Strong Start for America’s Children Act of 2015.” U.S. Congress. Accessed August 8, 2015 at <www.congress.gov>

Sponsor: Murray, Patty [D-WA] (Introduced 05/19/2015)

Cosponsors (24):

Casey, Robert P., Jr. [D-PA]

Hirono, Mazie K. [D-HI]

Franken, Al [D-MN]

Markey, Edward J. [D-MA]

Schatz, Brian [D-HI]

Udall, Tom [D-NM]

Kaine, Tim [D-VA]

Mikulski, Barbara A. [D-MD]

Murphy, Christopher S. [D-CT]

Durbin, Richard [D-IL]

Coons, Christopher A. [D-DE]

Heinrich, Martin [D-NM]

Whitehouse, Sheldon [D-RI]

Baldwin, Tammy [D-WI]

Cantwell, Maria [D-WA]

Gillibrand, Kirsten E. [D-NY]

Wyden, Ron [D-OR]

Booker, Cory A. [D-NJ]

Warren, Elizabeth [D-MA]

Sanders, Bernard [I-VT]

Klobuchar, Amy [D-MN]

Cardin, Benjamin L. [D-MD]

Tester, Jon [D-MT]

Reed, Jack [D-RI]

[307] Article: “Exceedingly Social, But Doesn’t Like Parties.” By Michael Powell. Washington Post, November 5, 2006. <www.washingtonpost.com>

Quoting Sanders: “I’m a democratic socialist.”

[308] Article: “Bernie Sanders: Obamacare is a ‘good Republican program’.” By Bryan Koenig. CNN, September 24th, 2013. <politicalticker.blogs.cnn.com>

“Sanders, an Independent who caucuses with Senate Democrats, reiterated his support of a universal single-payer Medicare for all, inspired by health care programs in Europe.”

[309] Webpage: “Party Division in the Senate, 1789-Present.” U.S. Senate Historical Office. Accessed August 8, 2015 at <www.senate.gov>

Note: Statistics listed below reflect party division immediately following the election, unless otherwise noted. The actual number of senators representing a particular party often changes during a congress, due to the death or resignation of a senator, or as a consequence of a member changing parties.

114th Congress (2015-2017)

Majority Party: Republican (54 seats)

Minority Party: Democrat (44 seats)

Other Parties: 2 Independents (both caucus with the Democrats)

Total Seats: 100

[310] Webpage: “Major Actions: S.1380 - Strong Start for America’s Children Act of 2015.” U.S. Congress. Accessed August 8, 2015 at <www.congress.gov>

“05/19/2015: Introduced in Senate”

[311] Report: “Early Child Care and Education: HHS and Education Are Taking Steps to Improve Workforce Data and Enhance Worker Quality.” U.S. Government Accountability Office, February 2012. <www.gao.gov>

Page 1:

The federal government helps to improve access to high-quality ECCE [early child care and education] programs by subsidizing program costs. The two largest federal efforts are the Head Start program, funded at approximately $7.2 billion, and the Child Care and Development Fund (CCDF), funded at approximately $5.0 billion in fiscal year 2010. These funding sources, as well as billions of dollars in other ECCE federal funding, are overseen by the Departments of Health and Human Services (HHS) and Education (Education) and the relevant state agencies to which these monies are allocated.

[312] Report: “Documentation to the NCES Common Core of Data Public Elementary/Secondary School Universe Survey: School Year 2010–11, Version Provisional 2a.” U.S. Department of Education, National Center for Education Statistics, September 2012. <nces.ed.gov>

Page C-7: “Head Start Program: A federally funded program that provides comprehensive educational, social, health, and nutritional services to low-income preschool children and their families, and children from ages 3 to school entry age (i.e., the age of compulsory school attendance).”

[313] Webpage: “About Us.” U.S. Department of Health & Human Services, Office of Head Start. Accessed June 27, 2015 at <eclkc.ohs.acf.hhs.gov>

Head Start promotes the school readiness of young children from low-income families through agencies in their local community. …

Head Start and Early Head Start programs support the mental, social, and emotional development of children from birth to age 5. In addition to education services, programs provide children and their families with health, nutrition, social, and other services. Head Start services are responsive to each child and family’s ethnic, cultural, and linguistic heritage.

Head Start encourages the role of parents as their child’s first and most important teachers. Programs build relationships with families that support positive parent-child relationships, family well-being, and connections to peers and community. Head Start began as a program for preschoolers. Three- and 4-year-olds made up over 80 percent of the children served by Head Start last year.

Early Head Start serves pregnant women, infants, and toddlers. Early Head Start programs are available to the family until the child turns 3 years old and is ready to transition into Head Start or another pre-K program. Early Head Start helps families care for their infants and toddlers through early, continuous, intensive, and comprehensive services.

[314] Report: “Head Start Impact Study, Final Report.” By Michael Puma and others. U.S. Department of Health and Human Services, Administration for Children and Families, January 2010. <www.acf.hhs.gov>

Page xiii:

Since its beginning in 1965 as a part of the War on Poverty, Head Start’s goal has been to boost the school readiness of low-income children. Based on a “whole child” model, the program provides comprehensive services that include preschool education; medical, dental, and mental health care; nutrition services; and efforts to help parents foster their child’s development. Head Start services are designed to be responsive to each child’s and family’s ethnic, cultural, and linguistic heritage.

Page 1-2:

The Head Start program, created in 1965 as part of the War on Poverty, is intended to boost the school readiness of low-income children. Head Start has grown from its early days of originally offering six-week summer sessions for 4-year-olds, to providing typically nine-month and sometimes year-long programs serving children from three to five years of age. The program is dedicated to promoting school readiness and providing comprehensive child development services to low-income children, their families, and communities, with an underlying premise that low-income children and families need extra support to prepare them for the transition to school.

[315] Report: “Head Start: Undercover Testing Finds Fraud and Abuse at Selected Head Start Centers.” U.S. Government Accountability Office, May 18, 2010. <www.gao.gov>

Page 3: “Head Start operates both full- and part-day programs--most only during the school year.”

[316] Report: “Head Start Impact Study, Final Report.” By Michael Puma and others. U.S. Department of Health and Human Services, Administration for Children and Families, January 2010. <www.acf.hhs.gov>

Page xii:

The Head Start Impact Study was conducted with a nationally representative sample of 84 grantee/delegate agencies and included nearly 5,000 newly entering, eligible 3- and 4-year-old children who were randomly assigned to either: (1) a Head Start group that had access to Head Start program services or (2) a control group that did not have access to Head Start, but could enroll in other early childhood programs or non-Head Start services selected by their parents. Data collection began in fall 2002 and continued through 2006, following children from program application through the spring of their 1st grade year.

Page xx: “For those attending Head Start, the average number of hours spent per week was between 24 and 28 hours, with some variation by cohort and year.”

[317] Report: “Third Grade Follow-up to the Head Start Impact Study, Final Report.” By Michael Puma and others. U.S. Department of Health and Human Services, Administration for Children and Families, October 2012. <www.acf.hhs.gov>

Pages xiii-xix:

The Head Start Impact Study (HSIS) was conducted with a nationally representative sample of 84 grantee/delegate agencies and included nearly 5,000 newly entering, eligible 3- and 4-year-old children who were randomly assigned to either: (1) a Head Start group that had access to Head Start program services or (2) a control group that did not have access to Head Start, but could enroll in other early childhood programs or non-Head Start services selected by their parents. Data collection began in fall 2002 and continued through 2008, following children from program application through the spring of their 3rd grade year. …

This study is unique in its design and differs from prior evaluations of early childhood programs:

Randomized Control. The Congressional mandate for this study had a clearly stated goal of producing causal findings, i.e., the purpose was to determine if access to Head Start caused better developmental and parenting outcomes for participating children and families. To do this, the study randomly assigned Head Start applicants either to a Head Start group that was allowed to enroll, or to a “control” group that could not. This procedure ensured comparability between the two groups at program entry, so that later differences can be causally attributed to Head Start.

Representative Sample of Programs and Children. Most random assignment studies are conducted in small demonstration programs or in a small number of operating sites, usually those that volunteer to be included in the research. In contrast, the Head Start Impact Study is based on a nationally representative sample of Head Start programs and children, with a few exceptions for programs serving particular populations. This makes the study results generalizable to the vast majority of programs nationwide at the time the study was fielded in 2002, not just the selected study sample. Unlike most studies, it examines the average impact of programs that represent the full range of intensity and quality and adherence to the established Head Start program standards (i.e., the best, the worst, and those in the middle of a fully implemented program).

Examination of a Comprehensive Set of Outcomes Over Time. The study quantifies the overall impact of Head Start separately for 3- and 4-year-old children in four key program domains-cognitive development, social-emotional development, health status and services, and parenting practices–following them through early elementary school. These impacts are measured by examining the difference in outcomes between children assigned to the Head Start group and those assigned to the control group.

Other study features that must be considered in interpreting the study findings include:

Control Group Children Did Not All Stay at Home. Children who were placed in the control or comparison group were allowed to enroll in other non-parental care or non-Head Start child care or programs selected by their parents. They could remain at home in parent care, or enroll in a child care or preschool program. Consequently, the impact of Head Start was determined by a comparison to a mixture of alternative care settings rather than against a situation in which children were artificially prevented from obtaining child care or early education programs outside of their home. Approximately 60 percent of the control group children participated in child care or early education programs during the first year of the study, with 13.8 percent of the 4-year-olds in the control group and 17.8 percent of the 3-year-olds in the control group finding their way into Head Start during this year. Preventing families from seeking out alternative care or programs for their children is both infeasible and unethical. The design used here answers the policy question, how well does Head Start do when compared against the other types of services or care that low-income children could receive in fall 2002.

Impacts Represent the Effects of One Year of Head Start. For children in the 4-year-old cohort, the study provides the impact of Head Start for a single year, i.e., the year before they are eligible to enter kindergarten. The impacts for the 3-year-old cohort reflect the benefits of being provided an earlier year of Head Start (as compared to the control group, which received access to Head Start at age 4.) At the end of one year of Head Start participation, the 3-year-old cohort—but not the 4-year-old cohort—had another year to go before they started kindergarten. It was not feasible or desirable for this study to prevent 3-year-olds from participating in Head Start for two years. Thus, the study could not directly assess the receipt of one year versus two years of Head Start. Rather, it addresses the receipt of an earlier year— whether having Head Start available at age three is helpful to children brought to the program at that age, or whether those children would be just as well off, if the program did not enroll them until age four. This is not only important to individual families; it also answers an important policy question. To answer this question, the best approach is to preclude program entry at age three while allowing it at age four and contrast outcomes after that point with statistically equivalent children never excluded from the program. By design, the study did not attempt to control children’s experiences after their first Head Start year.

The Head Start Impact Study is a comprehensive, carefully designed study of a large-scale early childhood program that has existed for more than 40 years. It is designed to address the overall average impact of the Head Start program as it existed in 2002. The findings cannot be directly compared to more narrowly focused studies of other early childhood programs. The Advisory Committee on Head Start Research and Evaluation, which developed the blueprint for this study, recommended that “the research and findings should be used in combination with the rest of the Head Start research effort to improve the effectiveness of Head Start programs for children and families” (Advisory Committee on Head Start Research and Evaluation, 1999, p. 44). The Third Grade Follow-up to the Head Start Impact Study builds upon the existing randomized control design in the HSIS in order to determine the longer term impact of the Head Start program on the well-being of children and families through the end of 3rd grade.

Key Findings

Looking across the full study period, from the beginning of Head Start through 3rd grade, the evidence is clear that access to Head Start improved children’s preschool outcomes across developmental domains, but had few impacts on children in kindergarten through 3rd grade. Providing access to Head Start was found to have a positive impact on the types and quality of preschool programs that children attended, with the study finding statistically significant differences between the Head Start group and the control group on every measure of children’s preschool experiences in the first year of the study. In contrast, there was little evidence of systematic differences in children’s elementary school experiences through 3rd grade, between children provided access to Head Start and their counterparts in the control group.

In terms of children’s well-being, there is also clear evidence that access to Head Start had an impact on children’s language and literacy development while children were in Head Start. These effects, albeit modest in magnitude, were found for both age cohorts during their first year of admission to the Head Start program. However, these early effects rapidly dissipated in elementary school, with only a single impact remaining at the end of 3rd grade for children in each age cohort.

With regard to children’s social-emotional development, the results differed by age cohort and by the person describing the child’s behavior. For children in the 4-year-old cohort, there were no observed impacts through the end of kindergarten but favorable impacts reported by parents and unfavorable impacts reported by teachers emerged at the end of 1st and 3rd grades. One unfavorable impact on the children’s self-report emerged at the end of 3rd grade. In contrast to the 4-year-old cohort, for the 3-year-old cohort there were favorable impacts on parent-reported social emotional outcomes in the early years of the study that continued into early elementary school. However, there were no impacts on teacher-reported measures of social-emotional development for the 3-year-old cohort at any data collection point or on the children’s self-reports in 3rd grade.

In the health domain, early favorable impacts were noted for both age cohorts, but by the end of 3rd grade, there were no remaining impacts for either age cohort. Finally, with regard to parenting practices, the impacts were concentrated in the younger cohort. For the 4-year-old cohort, there was one favorable impact across the years while there were several favorable impacts on parenting approaches and parent-child activities and interactions (all reported by parents) across the years for the 3-year-old cohort.

In summary, there were initial positive impacts from having access to Head Start, but by the end of 3rd grade there were very few impacts found for either cohort in any of the four domains of cognitive, social-emotional, health and parenting practices. The few impacts that were found did not show a clear pattern of favorable or unfavorable impacts for children.

In addition to looking at Head Start’s average impact across the diverse set of children and families who participated in the program, the study also examined how impacts varied among different types of participants. There is evidence that for some outcomes, Head Start had a differential impact for some subgroups of children over others. At the end of 3rd grade for the 3-year-old cohort, the most striking sustained subgroup findings were found in the cognitive domain for children from high risk households as well as for children of parents who reported no depressive symptoms. Among the 4-year-olds, sustained benefits were experienced by children of parents who reported mild depressive symptoms, severe depressive symptoms, and Black children.

Overview of Study Methods

To reliably answer the research questions outlined by Congress, a nationally representative sample of Head Start programs and newly entering 3- and 4-year-old children was selected, and children were randomly assigned either to a Head Start group that had access to Head Start services in the initial year of the study or to a control group that could receive any other non-Head Start services available in the community, chosen by their parents. In fact, approximately 60 percent of control group parents enrolled their children in some other type of preschool program in the first year. In addition, all children in the 3-year-old cohort could receive Head Start services in the second year. Under this randomized design, a simple comparison of outcomes for the two groups yields an unbiased estimate of the impact of access to Head Start in the initial year on children’s school readiness. This research design ensured that the Head Start and control groups did not differ in any systematic or unmeasured way except through their access to Head Start services. It is important to note that, because the control group in the 3-year-old cohort was given access to Head Start in the second year, the findings for this age group reflect the added benefit of providing access to Head Start at age 3 vs. at age 4, not the total benefit of having access to Head Start for two years.

In addition to random assignment, this study is set apart from most program evaluations because it includes a nationally representative sample of programs, making results generalizable to the Head Start program as a whole, not just to the selected samples of programs and children. However, the study does not represent Head Start programs serving special populations, such as tribal Head Start programs, programs serving migrant and seasonal farm workers and their families, or Early Head Start. Further, the study does not represent the 15 percent of Head Start programs in which the pool of applicants for Head Start slots was too small to allow for an adequate control group.

Selected Head Start grantees and centers had to have a sufficient number of applicants for the 2002-2003 program year to allow for the creation of a control group without requiring Head Start slots to go unfilled. As a consequence, the study was conducted in communities that had more children eligible for Head Start than could be served with the existing number of funded slots.

At each of the selected Head Start centers, program staff provided information about the study to parents at the time enrollment applications were distributed. Parents were told that enrollment procedures would be different for the 2002-2003 Head Start year and that some decisions regarding enrollment would be made using a lottery-like process. Local agency staff implemented their typical process of reviewing enrollment applications and screening children for admission to Head Start based on criteria approved by their respective Policy Councils. No changes were made to these locally established ranking criteria.

Information was collected on all children determined to be eligible for enrollment in fall 2002, and an average sample of 27 children per center was selected from this pool: 16 who were assigned to the Head Start group and 11 who were assigned to the control group. Random assignment was done separately for two study samples—newly entering 3-year-olds (to be studied through two years of potential Head Start participation, kindergarten, 1st grade, and 3rd grade) and newly entering 4-year-olds (to be studied through one year of Head Start participation, kindergarten, 1st grade, and 3rd grade).

The total sample, spread over 23 different states, consisted of 84 randomly selected Head Start grantees/delegate agencies, 383 randomly selected Head Start centers, and a total of 4,667 newly entering children, including 2,559 in the 3-year-old group and 2,108 in the 4-year-old group.

Data collection began in the fall of 2002 and continued through the spring of 2008, following children from entry into Head Start through the end of 3rd grade. Comparable data were collected for both Head Start and control group children, including interviews with parents, direct child assessments, surveys of Head Start, other early childhood, and elementary school teachers, interviews with center directors and other care providers at the preschool level, direct observations of the quality of various preschool care settings, and teacher or care provider assessments of children. For the Third Grade Follow-up, principal surveys and teacher ratings by the principal were added to the data collection.

Response rates were consistently quite high, approximately 80 percent for parents and children throughout the study. Teacher response rates were higher at the preschool level (about 80 percent) and gradually decreased as the children moved into elementary school. Principal data were collected only during 3rd grade, and the response rate was about the same as for 3rd grade teachers.

Although every effort was made to ensure compliance with random assignment, some children accepted into Head Start did not participate in the program (about 15 percent for the 3-year-old cohort and 20 percent for the 4-year-old cohort), and some children assigned to the non-Head Start group nevertheless entered the program in the first year (about 17 percent for 3-year-olds and 14 percent for 4-year-olds), typically at centers that were not in the study sample. These families are referred to as “no shows” and “crossovers.” Statistical procedures for dealing with these events are discussed in the report. Thus, the findings in this report provide estimates of both the impact of access to Head Start using the sample of all randomly assigned children (referred to as Intention to Treat, or ITT) and the impact of actual Head Start participation (adjusting for the no shows and crossovers, referred to as Impacts on the Treated or IOT).
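For readers unfamiliar with the distinction drawn in the passage above, the sketch below shows one common way such an adjustment is made: a Bloom-style rescaling of the ITT estimate by the difference in actual Head Start participation between the assigned and control groups. This is offered only as an illustration of the general technique, not as the exact statistical procedure used in the study; the ITT value is a hypothetical placeholder, while the participation rates are taken from the no-show and crossover figures quoted above for the 3-year-old cohort.

# Illustrative Bloom-style adjustment for no-shows and crossovers.
# 3-year-old cohort rates quoted above: about 15% no-shows in the Head Start
# group and about 17% crossovers in the control group.
itt_estimate = 0.10           # hypothetical placeholder impact of *access* to Head Start
treat_participation = 0.85    # share of the assigned group that actually attended (1 - no-show rate)
control_crossover = 0.17      # share of the control group that attended Head Start anyway

# Impact on the treated (IOT) rescales the ITT estimate by the participation-rate difference.
iot_estimate = itt_estimate / (treat_participation - control_crossover)
print(f"ITT = {itt_estimate:.3f}, IOT = {iot_estimate:.3f}")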

Page xx:

Not surprisingly, the study children attended schools with much higher levels of poverty than schools nationwide (as indicated by proportions of students eligible for free- and reduced-price lunch—66-67 percent) and were in schools with higher proportions of minority students (approximately 60 percent of students). With only a few exceptions, teacher and classroom characteristics did not differ significantly between children in the Head Start group and those in the control group.

Page xxi:

Impacts on Children’s Cognitive Development

The cognitive domain consisted of: (1) direct assessments of language and literacy skills, pre-writing skills (in Head Start years only), and math skills; (2) teacher reports of children’s school performance; and (3) parent reports of child literacy skills and grade promotion.

There is clear evidence that Head Start had a statistically significant impact on children’s language and literacy development while children were in Head Start. These effects, albeit modest in magnitude, were found for both age cohorts during their first year of admission to the Head Start program. However, these early effects dissipated in elementary school, with only a single impact remaining at the end of 3rd grade for children in each age cohort: a favorable impact for the 4-year-old cohort (ECLS-K Reading) and an unfavorable impact for the 3-year-old cohort (grade promotion).

Impacts aside, these children remain disadvantaged compared to their same-age peers; the scores of both the Head Start and the control group children remained lower than the norm for the population. At the end of 3rd grade, HSIS children (both Head Start and control group children) in the 4-year-old cohort, on average, scored about eight points (approximately one-half of a standard deviation) lower than a national sample of third graders on the ECLS-K Reading Assessment, and the promotion rate for the 3-year-old cohort was two to three percent lower than the predicted national promotion rate for children at the end of 3rd grade.

For mathematics, impacts were found only on a single outcome measure (Woodcock Johnson III Applied Problems) and only for the 3-year-old cohort at the end of their Head Start year.

The findings from the cognitive domain are summarized by age cohort below. Exhibits 2a and 2b present all statistically significant cognitive impacts and their effect sizes from the Intent to Treat (ITT) analysis.

Page xxv:

Impacts on Children’s Social-Emotional Development

The social-emotional domain consisted of parent-reported measures during the Head Start years, reports by both parents and teachers in all elementary school years, with child self-reports added at the end of 3rd grade. Measures of children’s behavior, social skills and approaches to learning, parent-child relationships, teacher child relationships, school adjustment, peer relationships and school experiences were assessed.

With regard to children’s social-emotional development, the results differed by age cohort and by the source of the information on the child’s behavior. For children in the 4-year-old cohort, there were no observed impacts through the end of kindergarten and then favorable impacts reported by parents and unfavorable impacts reported by teachers at the end of 1st and 3rd grades and children at the end of 3rd grade.

In contrast, the early favorable social emotional impacts reported by parents for the 3-year-old cohort continued into early elementary school. There were favorable impacts at all data collection points through the end of 3rd grade on parent-reported measures of children’s social-emotional development. However, there were no impacts on teacher-reported measures of social-emotional development for the 3-year-old cohort at any data collection point or on the children’s self-reports in 3rd grade.

The findings from the social-emotional domain are summarized by age cohort below. Exhibits 3a and 3b provide all statistically significant social-emotional impacts and their effect sizes from the ITT analysis.

Page xxix:

Impact on Health Status and Access to Health Services

The health domain consisted of two categories: (1) children’s receipt of health care services and (2) their current health status. Early favorable impacts in the health domain were noted for both age cohorts but by the end of 3rd grade, there were no remaining impacts for either age cohort.

The findings from the health domain are summarized by age cohort below, while Exhibits 4a and 4b present all statistically significant health impacts and their effect sizes from the ITT analysis.

Page xxxi:

Impact on Parenting Practices

This domain consisted of six categories of outcomes: (1) disciplinary practices, (2) educational supports, (3) safety practices, (4) parenting styles, (5) parent participation in and communication with school and (6) parent and child time together. With regard to parenting practices, the impacts were concentrated in the younger cohort, which showed favorable parent-reported impacts across all years of the study. For the 4-year-old cohort, in contrast, there were few impacts.

The findings from the parenting practices domain are summarized by age cohort below, and Exhibits 5a and 5b provide the statistically significant parenting practices impacts and their effect sizes from the ITT analysis.

Pages 1-2: “In general, during the period of this study, to be eligible for Head Start, a child had to be living in a family whose income was below the Federal poverty line. Programs were permitted, however, to fill ten percent of their enrollment with children from families that are over this income level.”

Page 11: “To be randomly assigned, the child’s eligibility for admission to the program had to have been determined by the local Head Start agency. Thus all children in the study were determined to be eligible for Head Start, regardless of whether they were assigned to the Head Start or control group.”

Pages 25-27:

Child and Family Outcome Measures

Outcome measures were developed in four domains—child cognitive development, child social-emotional development, health, and parenting practices. The selection of these domains was guided by several factors. First, it was important to measure the school readiness skills that are the focus of the Head Start program. The Head Start performance measures and conceptual framework (U.S. Department of Health and Human Services, 2001) indicate that children enrolled in Head Start should demonstrate improved emergent literacy, numeracy, and language skills. The framework also stresses that children should demonstrate positive attitudes toward learning and improved social and emotional well-being, as well as improved physical health and development.

Second, domains were selected to reflect the program’s whole child model, i.e., school readiness is considered to be multi-faceted and comprising five dimensions of early learning: (1) physical well-being and motor development, (2) social and emotional development, (3) approaches toward learning, (4) language usage, and (5) cognition and general knowledge (Kagan, Moore, & Bredekamp, 1995). The whole-child model also was recommended by the Goal One Technical Planning Group of the National Education Goals Panel (Goal One Technical Planning Group, 1991, 1993).

Third, in 2002, the National Institute of Child Health and Human Development (NICHD), the Administration for Children and Families (ACF), and the Office of the Assistant Secretary for Planning and Evaluation (ASPE) within the U.S. Department of Health and Human Services (HHS) convened a panel of experts to discuss the state of measurement and assessment on early childhood education and school readiness in the cognitive and social emotional domains. Language, early literacy, and mathematics were the primary cognitive domains identified by the experts as important to early childhood development. The experts identified social-emotional competency and regulation of attention, behavior, and emotion as critical measures in the social-emotional domain.

Based on these factors and advice from the experts consulting with the Head Start Impact Study team and the Advisory Committee on Head Start Research and Evaluation, measures were selected to assess the cognitive, social-emotional, and health outcomes of children. Considering the major emphasis Head Start places on parent education and involvement, and its importance for promoting children’s development, a fourth domain, parenting practices, was also included. Exhibits 2.6 and 2.7 provide the measures used in pre-K through 3rd grade and the year in which they were administered. The 3rd grade measures are summarized in more detail within this chapter, organized by the four domains. A summary of the measures used in pre-K through 1st grade is provided in the Head Start Impact Study Final Report.

NOTE: Pages 27-30 list the 41 cognitive, social-emotional, health, and parenting measures evaluated in the study.

[318] Paper: “A Reanalysis of the High/Scope Perry Preschool Program.” By James Heckman and others. University of Chicago, January 22, 2010. <www.webmeets.com>

Page 2: “The High/Scope Perry Preschool program, conducted in the 1960s, was an early childhood intervention that provided preschool to low-IQ, disadvantaged African-American children living in Ypsilanti, Michigan, a town near Detroit. … The beneficial long-term effects reported for the Perry program constitute a cornerstone of the argument for early intervention efforts throughout the world.”

Page 3: “The sample size is small: 123 children allocated over five entry cohorts.”

[319] Working paper: “The Rate of Return to the High/Scope Perry Preschool Program.” By James J. Heckman and others. National Bureau of Economic Research, November 2009. <www.nber.org>

Page 2: “The economic case for expanding preschool education for disadvantaged children is largely based on evidence from the High/Scope Perry Preschool Program, an early intervention in the lives of disadvantaged children in the early 1960s.”

[320] Webpage: “About Us.” HighScope Educational Research Foundation. Accessed July 31, 2015 at <www.highscope.org>

“HighScope was established in 1970 by the late David P. Weikart, PhD (1931-2003), who started the organization to continue research and program activities — including the Perry Preschool Project — he originally initiated as an administrator with the Ypsilanti Public Schools.”

[321] “Web Appendix for The Rate of Return to the High/Scope Perry Preschool Program.” By James J. Heckman and others. Elsevier, November 23, 2009. <jenni.uchicago.edu>

Page 11: “Table C.1: Overall Costs 1962–63 … 1963–64 … 1964–65 … 1965–66 … 1966–67”

[322] Paper: “A Reanalysis of the High/Scope Perry Preschool Program.” By James Heckman and others. University of Chicago, January 22, 2010. <www.webmeets.com>

Page 4:

The eligibility rules for participation were that the participants (1) be African-American; (2) have a low IQ (between 70 and 85) at study entry,6 and (3) be disadvantaged as measured by parental employment level, parental education, and housing density (people/room). The Perry study targeted families who were more disadvantaged than other African-American families in the U.S. but were representative of a large segment of the disadvantaged African-American population.

Among children in the Perry Elementary School neighborhood, Perry program families were particularly disadvantaged. Table 1 shows that compared to other families with children in the Perry School catchment area, Perry program families were younger, had lower levels of parental education, and had fewer working mothers. Further, Perry program families had fewer educational resources, larger families, and greater participation in welfare, compared to the families with children in another neighborhood elementary school in Ypsilanti (the Erickson School).

6Measured by the Stanford-Binet IQ test (1960s norming), which has approximate mean of 111 and standard deviation of 16 at study entry (ages 3-4).

[323] Working paper: “The Rate of Return to the High/Scope Perry Preschool Program.” By James J. Heckman and others. National Bureau of Economic Research, November 2009. <www.nber.org>

Page 8: “Drawn from the community served by the Perry Elementary School, participants were located through a survey of families associated with that school, as well as through neighborhood group referrals, and door-to-door canvassing. Disadvantaged children living in adverse circumstances were identified using IQ scores and a family socioeconomic status (SES) index.”

[324] Paper: “A Reanalysis of the High/Scope Perry Preschool Program.” By James Heckman and others. University of Chicago, January 22, 2010. <www.webmeets.com>

Page 1: “The study was evaluated by the method of random assignment.”

[325] Paper: “Multiple Inference and Gender Differences in the Effects of Early Intervention: A Reevaluation of the Abecedarian, Perry Preschool, and Early Training Projects.” Michael L. Anderson. Journal of the American Statistical Association, December 2008. Pages 1481-1495. <are.berkeley.edu>

Page 1482: “The Perry data set contains 123 individuals, 58 in the treatment group and 65 in the control group.”

[326] Paper: “A Reanalysis of the High/Scope Perry Preschool Program.” By James Heckman and others. University of Chicago, January 22, 2010. <www.webmeets.com>

Pages 4-5:

The Perry Preschool curriculum was based on the Piagetian concept of active learning, which is centered around play that is based on problem-solving and guided by open-ended questions. Children are encouraged to plan, carry out, and then reflect on their own activities. The topics in the curriculum are not based on specific facts or topics, but rather on key experiences related to the development of planning, expression, and understanding. The key experiences are then organized into ten topical categories, such as “creative representation”, “classification” (recognizing similarities and differences), “number”, and “time.”4 These educational principles are reflected in the types of open-ended questions asked by teachers: for example, “What happened? How did you make that? Can you show me? Can you help another child?” (Schweinhart et al., 1993, p.33)

As the curriculum was developed over the course of the program, its details and application varied from year to year. While the first year involved “thoughtful experimentation” on the part of the teachers, experience with the program and series of seminars during subsequent years led to the development and systematic application of teaching principles with “an essentially Piagetian theory-base.” During the later years of the program, all activities took place within a structured daily routine intended to help children “to develop a sense of responsibility and to enjoy opportunities for independence,” (Schweinhart et al., 1993, pp. 32–33).

[327] Paper: “A Reanalysis of the High/Scope Perry Preschool Program.” By James Heckman and others. University of Chicago, January 22, 2010. <www.webmeets.com>

Page 4: “Beginning at age 3, and lasting two years, treatment consisted of a 2.5-hour educational preschool on weekdays during the school year, supplemented by weekly home visits by teachers.”

[328] “Web Appendix for The Rate of Return to the High/Scope Perry Preschool Program.” By James J. Heckman and others. Elsevier, November 23, 2009. <jenni.uchicago.edu>

Page 4: “During each wave of the experiment, the preschool class consisted of 20–25 children, whose ages ranged from 3 to 4. This is true even of the first and last waves, as the first wave admitted 4-year-olds, who only received one year of treatment, and the last wave was taught alongside a group of 3-year-olds, who are not included in our data.”

[329] “Web Appendix for The Rate of Return to the High/Scope Perry Preschool Program.” By James J. Heckman and others. Elsevier, November 23, 2009. <jenni.uchicago.edu>

Page 4:

Classes were 2-1/2 hours every weekday during the regular school year (mid-October through May). …

Home Visits. Home visits lasting 1-1/2 hours were conducted weekly by the preschool teachers. The purpose of these visits was to “involve the mother in the educational process,” and “implement the curriculum in the home,” (Schweinhart et al., 1993, p.32). By way of encouraging the mothers’ participation, teachers also helped with any other problems arising in the home during the visit. Occasionally, these visits would consist of field trips to stimulating environments such as a zoo.

[330] Paper: “A Reanalysis of the High/Scope Perry Preschool Program.” By James Heckman and others. University of Chicago, January 22, 2010. <www.webmeets.com>

Page 4: “Program intensity was low compared to many subsequent early childhood development programs.4 Beginning at age 3, and lasting two years, treatment consisted of a 2.5-hour educational preschool on weekdays during the school year, supplemented by weekly home visits by teachers.”

[331] Webpage: “Calculate duration between two dates – results.” Accessed August 6, 2015 at <www.timeanddate.com>

From and including: Friday, October 15, 1965

To, but not including Tuesday, May 31, 1966

Result: 228 days

CALCULATIONS:

228 (days per year) / 7 (days per week) = 33 weeks per year

33 (weeks) × 14 (hours per week) = 462 hours per year

462 (hours per year) × 2 (years) = 924 hours
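NOTE: The arithmetic above can be reproduced with the minimal Python sketch below. It assumes the 14 weekly contact hours implied by footnotes 327 and 329 (2.5 classroom hours on each of 5 weekdays plus a 1.5-hour home visit) and rounds the 228-day school year to whole weeks.

school_days = 228                        # mid-October through May
weeks_per_year = round(school_days / 7)  # about 33 weeks
hours_per_week = 2.5 * 5 + 1.5           # classroom time plus weekly home visit = 14
hours_per_year = weeks_per_year * hours_per_week      # 462
total_hours = hours_per_year * 2                      # 924 over two years
print(weeks_per_year, round(hours_per_year), round(total_hours))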

[332] “Web Appendix for The Rate of Return to the High/Scope Perry Preschool Program.” By James J. Heckman and others. Elsevier, November 23, 2009. <jenni.uchicago.edu>

Page 4:

During each wave of the experiment, the preschool class consisted of 20–25 children, whose ages ranged from 3 to 4. This is true even of the first and last waves, as the first wave admitted 4-year-olds, who only received one year of treatment, and the last wave was taught alongside a group of 3-year-olds, who are not included in our data. …

The preschool teaching staff of four produced a child-teacher ratio ranging from 5 to 6.25 over the course of the program. Teaching positions were filled by public-school teachers who were “certified in elementary, early childhood, and special education,” (Schweinhart et al., 1993, p.32).

[333] Working paper: “The Rate of Return to the High/Scope Perry Preschool Program.” By James J. Heckman and others. National Bureau of Economic Research, November 2009. <www.nber.org>

Page 14: “We use estimates of initial program costs reported in Barnett (1996). These include both operating costs (teacher salaries and administrative costs) and capital costs (classrooms and facilities). This information is summarized in Web Appendix C. In undiscounted year-2006 dollars, cost of the program per child is $17,759.”

[334] Webpage: “CPI Inflation Calculator.” United States Department of Labor, Bureau of Labor Statistics. Accessed August 3, 2015 at <www.bls.gov>

“$17,759.00 in 2006 has the same buying power as $20,854.14 in 2014”

“$17,759.00 in 2006 has the same buying power as $2,774.84 in 1965”

“The CPI inflation calculator uses the average Consumer Price Index for a given calendar year. This data represents changes in prices of all goods and services purchased for consumption by urban households. This index value has been calculated every year since 1913. For the current year, the latest monthly index value is used.”
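NOTE: A CPI conversion of this kind multiplies a dollar amount by the ratio of the two years’ average index values. The Python sketch below illustrates the idea; the CPI-U annual averages it uses (roughly 31.5 for 1965, 201.6 for 2006, and 236.7 for 2014) are approximations supplied for illustration, not figures taken from the calculator.

def adjust(amount, cpi_from, cpi_to):
    # Express `amount` from the base year in the target year's dollars.
    return amount * cpi_to / cpi_from

CPI = {1965: 31.5, 2006: 201.6, 2014: 236.7}            # approximate annual averages
print(round(adjust(17_759, CPI[2006], CPI[2014]), 2))   # roughly $20,850 in 2014 dollars
print(round(adjust(17_759, CPI[2006], CPI[1965]), 2))   # roughly $2,775 in 1965 dollars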

[335] Dataset: “Table 236.55. Total and current expenditures per pupil in public elementary and secondary schools: Selected years, 1919-20 through 2011-12.” U.S. Department Of Education, National Center for Education Statistics, July 2014. <nces.ed.gov>

“Expenditure per pupil in fall enrollment … Unadjusted dollars … Total expenditure … 1965-66 [=] $607 … 2011-12 [=] $12,010”

$2,775 × $12,010 / $607 = $54,906
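NOTE: A minimal Python rendering of the scaling above, which expresses Perry’s 1965-dollar per-child cost in 2011-12 dollars by the growth in total per-pupil K-12 spending between the two school years.

perry_cost_1965 = 2_775          # per-child cost in 1965 dollars
per_pupil_1965_66 = 607          # expenditure per pupil, 1965-66
per_pupil_2011_12 = 12_010       # expenditure per pupil, 2011-12
print(round(perry_cost_1965 * per_pupil_2011_12 / per_pupil_1965_66))   # about 54,906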

[336] Paper: “A Reanalysis of the High/Scope Perry Preschool Program.” By James Heckman and others. University of Chicago, January 22, 2010. <www.webmeets.com>

Page 2: “Participants were followed through age 40. There are plans for an age-50 followup.”

Pages 3-4: “Data were collected at age 3, the entry age, and through annual surveys until age 15, with additional follow-ups conducted at ages 19, 27, and 40. Program attrition remains low through age 40, with over 90% of the original subjects interviewed. Numerous measures were collected on economic, criminal, and educational outcomes over this span as well as on cognition and personality.”

[337] Working paper: “The Rate of Return to the High/Scope Perry Preschool Program.” By James J. Heckman and others. National Bureau of Economic Research, November 2009. <www.nber.org>

Page 22:

For each subject, the Perry data provide a full record of arrests, convictions, charges and incarcerations for most of the adolescent and adult years. They are obtained from administrative data sources.36 The empirical challenges addressed in this section are twofold: obtaining a complete lifetime profile of criminal activities for each person, and assigning values to that criminal activity. Web Appendix H presents a comprehensive analysis of the crime data which we summarize in this section.

36The earliest records cover ages 8–39 and the oldest cover ages 13–44. However, there are some limitations. At the county (Washtenaw) level, arrests, all convictions, incarceration, case numbers, and status are reported. At the state (Michigan) level, arrests are only reported if they lead to convictions. For the 38 Perry subjects spread across the 19 states other than Michigan at the time of the age-40 interview, only 11 states provided criminal records. No corresponding data are provided for subjects residing abroad.

[338] Paper: “Multiple Inference and Gender Differences in the Effects of Early Intervention: A Reevaluation of the Abecedarian, Perry Preschool, and Early Training Projects.” Michael L. Anderson. Journal of the American Statistical Association, December 2008. Pages 1481-1495. <are.berkeley.edu>

Page 1482:

Researchers gathered data from four primary sources: interviews with subjects and parents, program-administered tests, school records, and criminal records. IQ tests were administered on an annual basis from program entry until age 10, and then once more at age 14. Information on special education, grade retention, and graduation status was collected from school records. Arrest records were obtained from the relevant authorities, supplemented with interview data on criminal behavior. Economic outcome data come primarily from interviews conducted at age 19, 27, and 40. Follow-up attrition rates for most variables were generally low, ranging between 0 to 10%.

NOTE: The tables on pages 1489-1492 provide data from the Perry program for ages 5, 6, 10, 12, 15, 17, 18, 19, 27, and 40.

[339] Paper: “A Reanalysis of the High/Scope Perry Preschool Program.” By James Heckman and others. University of Chicago, January 22, 2010. <www.webmeets.com>

Page 9: “In the case of the Perry study, there are approximately 25 observations per gender per treatment assignment group, and the distribution of observed measures is often highly skewed.”

Page 38: “In summary, our analysis shows that accounting for corrupted randomization, multiple-hypothesis testing and small sample sizes, there are strong effects of the Perry Preschool program on the outcomes of boys and girls. However, there are important differences by age in the strengths of treatment effects by gender.”

[340] Handbook of Statistics: Epidemiology and Medical Statistics. Edited by C.R. Rao and others. Elsevier, 2008. Chapter 21: “The Multiple Comparison Issue in Healthcare Research.” By Lemuel A. Moyé. Pages 616-650. Page 644:

The analysis of subgroups is a popular, necessary, and controversial component of the complete evaluation of a research effort. …

However useful and provocative these results can be, it is well-established that subgroup analyses are often misleading…. Assmann et al. (2000) has demonstrated how commonly subgroup analyses are misused, while others point out the dangers of accepting subgroup analyses as confirmatory….

[341] Paper: “Multiple Inference and Gender Differences in the Effects of Early Intervention: A Reevaluation of the Abecedarian, Perry Preschool, and Early Training Projects.” By Michael L. Anderson. Journal of the American Statistical Association, December 2008. Pages 1481-1495. <are.berkeley.edu>

Page 1481: “This article focuses on the three prominent early intervention experiments: the Abecedarian Project, the Perry Preschool Program, and the Early Training Project.”

Pages 1493-1494: “As a final demonstration of the value of correcting for multiple inference, we conduct a stand-alone reanalysis of the Perry Preschool Project, arguably the most influential of the three experiments.”

Pages 1489-1492:

Table 4. Effects on preteen IQ scores

Table 5. Effects on preteen primary school outcomes

Table 6. Effects on teenage academic outcomes

Table 7. Effects on teenage economic and social outcomes

Table 8. Effects on adult academic outcomes

Table 9. Effects on adult economic outcomes

Table 10. Effects on adult social outcomes

NOTE: The authors of this paper did not embolden statistically significant outcomes in their tables of results (cited above). However, based on the text of the paper, the authors treat results with q values (also called FDR q values) less than .10 as statistically significant. They also sometimes apply a stricter standard of q < .05 and refer to a gray zone in which q values up to .13 may be significant. Hence, Just Facts has listed all of the Perry preschool program results with q <= .13. For an Excel file containing the data in the tables above, contact us.
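NOTE: The q values referenced above are false discovery rate (FDR) adjustments of ordinary p values. The Python sketch below shows a common way such q values are produced, the Benjamini-Hochberg step-up procedure; the paper’s own adjustment may differ in detail, and the p values in the example are hypothetical, not taken from the paper.

def bh_qvalues(pvals):
    # Benjamini-Hochberg step-up: for each p value, q = min over all larger-or-equal
    # ranked p values of p * m / rank, working from the largest p value downward.
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    q = [0.0] * m
    running_min = 1.0
    for k, i in enumerate(reversed(order)):
        rank = m - k                     # 1-based rank of this p value
        running_min = min(running_min, pvals[i] * m / rank)
        q[i] = running_min
    return q

pvals = [0.003, 0.02, 0.04, 0.30, 0.65]          # hypothetical p values
print([round(q, 3) for q in bh_qvalues(pvals)])  # [0.015, 0.05, 0.067, 0.375, 0.65]

Under the q ≤ .10 standard described above, only the first three hypothetical outcomes would count as significant.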

[342] Paper: “Multiple Inference and Gender Differences in the Effects of Early Intervention: A Reevaluation of the Abecedarian, Perry Preschool, and Early Training Projects.” By Michael L. Anderson. Journal of the American Statistical Association, December 2008. Pages 1481-1495. <are.berkeley.edu>

Page 1481:

This article focuses on the three prominent early intervention experiments: the Abecedarian Project, the Perry Preschool Program, and the Early Training Project. …

But serious statistical inference problems affect these studies. The experimental samples are very small, ranging from approximately 60 to 120. Statistical power is therefore limited, and the results of conventional tests based on asymptotic theory may be misleading. More importantly, the large number of measured outcomes raises concerns about multiple inference: Significant coefficients may emerge simply by chance, even if there are no treatment effects. This problem is well known in the theoretical literature … and the biostatistics field … but has received limited attention in the policy evaluation literature. These issues—combined with a puzzling pattern of results in which early test score gains disappear within a few years and are followed a decade later by significant effects on adult outcomes—have created serious doubts about the validity of the results….

Page 1484:

[M]ost randomized evaluations in the social sciences test many outcomes but fail to apply any type of multiple inference correction. To gauge the extent of the problem, we conducted a survey of randomized evaluation works published from 2004 to 2006 in the fields of economic or employment policy, education, criminology, political science or public opinion, and child or adolescent welfare. Using the CSA Illumina social sciences databases, we identified 44 such articles in peer-reviewed journals.

Of these 44 articles, 37 (84%) reported testing 5 or more outcomes, and 27 (61%) reported testing 10 or more outcomes. These figures represent lower bounds for the total number of tests conducted, because many tests may be conducted but not reported. Nevertheless, only three works (7%) implemented any type of multiple-inference correction. … Although multiple-inference corrections are standard (and often mandatory) in psychological research … they remain uncommon in other social sciences, perhaps because practitioners in these fields are unfamiliar with the techniques or because they have seen no evidence that they yield more robust conclusions.

Pages 1490-1491:

The disaggregated [by sex] results suggest that early intervention improves high school graduation, employment, and juvenile arrest rates for females but has no significant effect on male outcomes. …

Unlike females, males show little evidence of positive effects as adults.

Pages 1493-1494:

As a final demonstration of the value of correcting for multiple inference, we conduct a stand-alone reanalysis of the Perry Preschool Project, arguably the most influential of the three experiments. …

… Do these findings replicate in the other two studies? In general, yes. The early male IQ effect replicates strongly in Abecedarian. The female high school graduation effect replicates in both Abecedarian and Early Training, and the early female IQ effect replicates weakly in Abecedarian and strongly in Early Training. …

In contrast to females, males appear to not derive lasting benefits from the interventions. …

[A] conventional research design [i.e., one that does not account for multiple inference problems] … adds eight more significant or marginally significant outcomes: female adult arrests, female employment, male monthly income, female government transfers, female special education rates, male drug use (in the adverse direction), male employment, and female monthly income. Of these eight outcomes, two (male and female monthly income) are not included in the other two studies [Abecedarian and Early Training]. The remaining six fail to replicate in either of the other studies. …

[Previous] researchers have emphasized the subset of unadjusted significant outcomes rather than applying a statistical framework that is robust to problems of multiple inference. …

Many studies in this field test dozens of outcomes and focus on the subset of results that achieve significance.

[343] Working paper: “The Rate of Return to the High/Scope Perry Preschool Program.” By James J. Heckman and others. National Bureau of Economic Research, November 2009. <www.nber.org>

Page 3:

In a highly cited paper, Rolnick and Grunewald (2003) report a rate of return of 16 percent to the Perry program. Belfield et al. (2006) report a 17 percent rate of return. …

… All of the reported estimates of rates of return are presented without standard errors, leaving readers uncertain as to whether the estimates are statistically significantly different from zero. The paper by Rolnick and Grunewald (2003) reports few details and no sensitivity analyses exploring the consequences of alternative assumptions about costs and benefits of key public programs and the costs of crime. The study by Belfield et al. (2006) also does not report standard errors. It provides more details on how its estimates are obtained, but conducts only a limited sensitivity analysis.

[344] Paper: “Multiple Inference and Gender Differences in the Effects of Early Intervention: A Reevaluation of the Abecedarian, Perry Preschool, and Early Training Projects.” Michael L. Anderson. Journal of the American Statistical Association, December 2008. Pages 1481-1495. <are.berkeley.edu>

Page 1481: “[S]everal randomized early intervention experiments have reported striking increases in short-term IQ scores and long-term outcomes for treated children… This article focuses on the three prominent early intervention experiments: the Abecedarian Project, the Perry Preschool Program, and the Early Training Project. … But serious statistical inference problems affect these studies.”

Page 1482: “Of the three early intervention projects, Abecedarian was by far the most intensive.”

Page 1483: “Nevertheless, there are some important differences in these studies’ findings. In particular, the Perry Preschool Program reported large, statistically significant reductions in juvenile and adult criminal behavior that were not replicated in the Abecedarian Program.”

Page 1492: “Abecedarian females … experience no significant reduction in conviction or incarceration rates by age 21.”

Page 1493: “Previous findings demonstrating significant long-term effects for boys, primarily from the Perry program, do not survive multiplicity [multiple inference] adjustment [for statistical significance] and do not replicate in the other experiments.”

[345] Paper: “Comparative benefit–cost analysis of the Abecedarian program and its policy implications.” By W.S. Barnett and Leonard N. Masse. Economics of Education Review, February 2007. Pages 113-125. <nieer.org>

Page 122: “Yet, the [Abecedarian] program did not produce gains in social and emotional development that elsewhere [the Perry program] have been found to account for a very large portion of potential benefits.”

[346] Paper: “A Reanalysis of the High/Scope Perry Preschool Program.” By James Heckman and others. University of Chicago, January 22, 2010. <www.webmeets.com>

Page 4: “The [Perry] Program intensity was low compared to many subsequent early childhood development programs.”

[347] Paper: “A Reanalysis of the High/Scope Perry Preschool Program.” By James Heckman and others. University of Chicago, January 22, 2010. <www.webmeets.com>

Page 28:

Tables 3–6 show many statistically significant treatment effects and gender differences that survive multiple hypothesis testing. In summary, females show strong effects for educational outcomes, early employment and other early economic outcomes, as well as reduced numbers of arrests. Males, on the other hand, show strong effects on a number of outcomes, demonstrating a substantially reduced number of arrests and lower probability of imprisonment, as well as strong effects on earnings at age 27, employment at age 40, and other economic outcomes recorded at age 40.

Page 24: “Table 3: Main Outcomes, Females: Part 1”

Page 25: “Table 4: Main Outcomes, Females: Part 2”

Page 26: “Table 5: Main Outcomes, Males: Part 1”

Page 27: “Table 6: Main Outcomes, Males: Part 2”

NOTE: Contact us for an Excel file containing the data in the tables above.

Page 38:

Proper analysis of the Perry experiment presents many statistical challenges. These challenges include small-sample inference, accounting for imperfections in randomization, and accounting for large numbers of outcomes. The last of these refers to the risk of selecting statistically significant outcomes that are “cherry picked” from a larger set of unreported results.

We propose and implement a combination of methods to account for these problems. We control for the violations of the initial randomization protocol and imbalanced background variables. We estimate family-wise error rates that account for the multiplicity of the outcomes. We consider the external validity of the program. …

The pattern of treatment response by gender varies with age. Males exhibit statistically significant treatment effects for criminal activity, later life income, and employment (ages 27 and 40), whereas, female treatment effects are strongest for education and early employment (ages 19 and 27).

Page 39: “In summary, our analysis shows that accounting for corrupted randomization, multiple-hypothesis testing and small sample sizes, there are strong effects of the Perry Preschool program on the outcomes of boys and girls. However, there are important differences by age in the strengths of treatment effects by gender.”

[348] Working paper: “The Rate of Return to the High/Scope Perry Preschool Program.” By James J. Heckman and others. National Bureau of Economic Research, November 2009. <www.nber.org>

Page 9: “The Compromised Randomization Protocol. A potential problem with the Perry study is that after random assignment, treatment and controls were reassigned, compromising the original random assignment and making simple interpretation of the evidence problematic. In addition, there was some imbalance in the baseline variables between treatment and control groups.”

[349] Paper: “A Reanalysis of the High/Scope Perry Preschool Program.” By James Heckman and others. University of Chicago, January 22, 2010. <www.webmeets.com>

Page 9: “In the case of the Perry study, there are approximately 25 observations per gender per treatment assignment group, and the distribution of observed measures is often highly skewed.”

Page 36: “We estimate that 17% of the male cohort and 15% of the female cohort would be eligible for the Perry program if it were applied nationwide. This translates into a population estimate of 712,000 persons out of this 4.5 million black cohort resemble the Perry population.”

[350] Webpage: “Margin of Error Calculator.” ComRes. Accessed August 6, 2015 at <www.comres.co.uk>

The margin of error shows the level of accuracy that a random sample of a given population has. Our calculator gives the percentage points of error either side of a result for a chosen sample size.

It is calculated at the standard 95% confidence level. Therefore we can be 95% confident that the sample result reflects the actual population result to within the margin of error. This calculator is based on a 50% result in a poll, which is where the margin of error is at its maximum.

This means that, according to the law of statistical probability, for 19 out of every 20 polls the ‘true’ result will be within the margin of error shown.

Population Size: 356,000 [= 712,000 (people eligible for the Perry program) / 2 (roughly half male and half female)]

Sample Size: 25

Margin of Error: 19.6
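NOTE: The calculator’s figure can be reproduced with the standard margin-of-error formula for a proportion at 95% confidence, shown in the Python sketch below; the finite-population correction barely matters with a population this large relative to the sample.

import math

def margin_of_error(n, N=None, p=0.5, z=1.96):
    # Half-width of the 95% confidence interval, in percentage points.
    moe = z * math.sqrt(p * (1 - p) / n)
    if N is not None:
        moe *= math.sqrt((N - n) / (N - 1))   # finite-population correction
    return 100 * moe

print(round(margin_of_error(25, N=356_000), 1))   # about 19.6 percentage points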

[351] Book: Multiple Regression: A Primer. By Paul D. Allison. Pine Forge Press, 1998. Chapter 3: “What Can Go Wrong With Multiple Regression?” <us.sagepub.com>

Pages 57-58:

Sample size has a profound effect on tests of statistical significance. With a sample of 60 people, a correlation has to be at least .25 (in magnitude) to be significantly different from zero (at the .05 level). With a sample of 10,000 people, any correlation larger than .02 will be statistically significant. The reason is simple: There’s very little information in a small sample, so estimates of correlations are very unreliable. If we get a correlation of .20, there may still be a good chance that the true correlation is zero. …

Statisticians often describe small samples as having low power to test hypotheses. There is another, entirely different problem with small samples that is frequently confused with the issue of power. Most of the test statistics that researchers use (such as t tests, F tests, and chi-square tests) are only approximations. These approximations are usually quite good when the sample is large but may deteriorate markedly when the sample is small. That means that p values calculated for small samples may be only rough approximations of the true p values. If the calculated p value is .02, the true value might be something like .08. …

That brings us to the inevitable question: What’s a big sample and what’s a small sample? As you may have guessed, there’s no clear-cut dividing line. Almost anyone would consider a sample less than 60 to be small, and virtually everyone would agree that a sample of 1,000 or more is large. In between, it depends on a lot of factors that are difficult to quantify, at least in practice.
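NOTE: The thresholds Allison cites follow from the usual t test for a correlation coefficient. The Python sketch below reproduces them approximately using the large-sample critical value of 1.96 for a two-sided test at the .05 level; it is an illustration of the point, not a calculation taken from the book.

import math

def critical_r(n, t_crit=1.96):
    # Smallest correlation significant at roughly the .05 level in a sample of size n.
    return t_crit / math.sqrt(t_crit ** 2 + n - 2)

for n in (60, 10_000):
    print(n, round(critical_r(n), 3))   # roughly .25 at n = 60, .02 at n = 10,000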

[352] Working paper: “The Rate of Return to the High/Scope Perry Preschool Program.” By James J. Heckman and others. National Bureau of Economic Research, November 2009. <www.nber.org>

Page 10: “As the oldest and most cited early childhood intervention evaluated by the method of random assignment, the Perry study serves as a flagship for policy makers advocating public support for early childhood programs.”

[353] Paper: “Multiple Inference and Gender Differences in the Effects of Early Intervention: A Reevaluation of the Abecedarian, Perry Preschool, and Early Training Projects.” By Michael L. Anderson. Journal of the American Statistical Association, December 2008. Pages 1481-1495. <are.berkeley.edu>

Page 1481:

The education literature contains dozens of papers showing inconsistent or low returns to publicly funded human capital investments…. In contrast to these studies, several randomized early intervention experiments have reported striking increases in short-term IQ scores and long-term outcomes for treated children… These results have been highly influential and often are cited as proof of efficacy for many types of early interventions…. The experiments underlie the growing movement for universal prekindergarten education….

This article focuses on the three prominent early intervention experiments: the Abecedarian Project, the Perry Preschool Program, and the Early Training Project.

Page 1493: “[T]he Perry Preschool Project [is] arguably the most influential of the three experiments.”

Page 1494: “[T]he most famous (and dramatic) preschool experiment [is] the Perry program….”

[354] Paper: “A Reanalysis of the High/Scope Perry Preschool Program.” By James Heckman and others. University of Chicago, January 22, 2010. <www.webmeets.com>

Page 2: “The case for universal pre-K is often based on the Perry study, even though the project only targeted a disadvantaged segment of the population.”

NOTE: For examples of such claims, see the next three footnotes.

[355] Commentary: “Capitalists for Preschool.” By John E. Pepper Jr. and James M. Zimmerman. New York Times, March 1, 2013. <www.nytimes.com>

“Research by the University of Chicago economist James J. Heckman, a Nobel laureate, points to a 7- to 10-percent annual return on investment in high-quality preschool.”

NOTE: The statement above refers to the following working paper: “The Rate of Return to the High/Scope Perry Preschool Program.” By James J. Heckman and others. National Bureau of Economic Research, November 2009. <www.nber.org>

[356] Report: “The Case for Pre-K in Education Reform: A Summary of Program Evaluation Findings.” By Albert Wat. Pew Center on the States, April 2010. <www.pewtrusts.org>

Page 1:

The short- and long-term benefits of high-quality pre-kindergarten have been well documented by researchers for the last 50 years. By now, even many outside the education field have heard about the academic and lifetime gains and the significant returns on investment yielded from the High/Scope Perry Preschool Project and the Chicago Child-Parent Centers.1

1 See for example: Albert Wat, “Dollars and Sense: A Review of Economic Analyses of Pre-K,” (Washington, DC: Pre-K Now, 2007).

[357] Commentary: “The Vague Promise of Obama’s Ambitious Preschool Plan.” By Jonathan Cohn. New Republic, February 15, 2013. <www.newrepublic.com>

“President Barack Obama visited Georgia on Thursday to tout his ambitious new proposal for universal preschool. … Obama’s plan comes from two ‘amazing preschools’—the Perry Preschool Project, in Michigan, and the Abecedarian Project, in North Carolina.”

[358] Textbook: Applied Statistics: From Bivariate Through Multivariate Techniques. By Rebecca M. Warner. Sage Publications, 2008.

Page 5:

Researchers in the behavioral and social sciences almost always want to make inferences beyond their samples; they hope that the attitudes or behaviors that they find in small groups of college students who actually participate in their studies will provide evidence about attitudes or behaviors in broader populations in the world outside the laboratory. Thus, almost all the statistics reported in journal articles are inferential statistics

However, in many types of research (such as experiments and small-scale surveys in psychology, education, and medicine), it is not practical to obtain random samples from the entire population of the country. Instead, researchers in these disciplines often use convenience samples when they conduct small-scale studies. …

When researchers obtain information about behavior from convenience samples, they cannot confidently use their results to make inferences about the responses of an actual, well-defined population.

Page 6:

Generalization of results beyond the sample to make inferences about a broader population is always risky, so researchers should be cautious in making generalizations. … It could be misleading, however, to generalize the results of the study to children or to older adults. …

To summarize, when a study uses data from a convenience sample, the researcher should clearly state that the nature of the sample limits the potential generalizability of the results.

It would be questionable to generalize about response to caffeine for populations that have drastically different characteristics from the members of the sample….

[359] Book: Multiple Regression: A Primer. By Paul D. Allison. Pine Forge Press, 1998. Preface. <us.sagepub.com>

Page 9: “The most desirable data come from a probability sample from some well-defined population…. In practice, people often use whatever cases happen to be available. … Although it is acceptable to use such ‘convenience samples,’ you must be very cautious in generalizing the results to other populations.”

[360] Paper: “A Reanalysis of the High/Scope Perry Preschool Program.” By James Heckman and others. University of Chicago, January 22, 2010. <www.webmeets.com>

Page 4:

The eligibility rules for participation were that the participants (1) be African-American; (2) have a low IQ (between 70 and 85) at study entry,6 and (3) be disadvantaged as measured by parental employment level, parental education, and housing density (people/room). The Perry study targeted families who were more disadvantaged than other African-American families in the U.S. but were representative of a large segment of the disadvantaged African-American population.

Among children in the Perry Elementary School neighborhood, Perry program families were particularly disadvantaged. Table 1 shows that compared to other families with children in the Perry School catchment area, Perry program families were younger, had lower levels of parental education, and had fewer working mothers. Further, Perry program families had fewer educational resources, larger families, and greater participation in welfare, compared to the families with children in another neighborhood elementary school in Ypsilanti (the Erickson School).

6 Measured by the Stanford-Binet IQ test (1960s norming), which has approximate mean of 111 and standard deviation of 16 at study entry (ages 3-4).

Pages 35-36:

Comparability in later life outcomes between the restricted group and the Perry control group suggests that the Perry sample, while not necessarily representative of the African-American population as a whole, is representative of a particular subsample of that population. Specifically, this subsample reflects the eligibility requirements of the Perry program, such as low IQ of the child and a low parental SES [socio-economic status] index.

The US population in 1960 was 180 million people, of which 10.6% (19 million) were black.52 We use the NLSY79 [1979 National Longitudinal Survey of Youth], a representative sample of the total population that was born between 1957 and 1964, to estimate the number of persons in the US that resemble the Perry population at entry (age 3). According to the NLSY79, the black cohort born in 1957–1964 is composed of 2.2 million males and 2.3 million females. We estimate that 17% of the male cohort and 15% of the female cohort would be eligible for the Perry program if it were applied nationwide. This translates into a population estimate of 712,000 persons out of this 4.5 million black cohort resemble the Perry population.53 For further information on the comparison groups and their construction, see Web Appendix I and Tables I.1 and I.2 for details.

CALCULATIONS:

712,000 / 4,500,000 = 15.8%

10.6% × 15.8% = 1.7%
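NOTE: The two steps above, rendered as a short Python sketch: the share of the black birth cohort that resembles the Perry sample, and that share applied to the black fraction of the 1960 population.

eligible = 712_000
cohort = 4_500_000
share_of_cohort = eligible / cohort              # about 0.158
share_of_population = 0.106 * share_of_cohort    # about 0.017
print(round(share_of_cohort, 3), round(share_of_population, 3))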

[361] Working paper: “The Rate of Return to the High/Scope Perry Preschool Program.” By James J. Heckman and others. National Bureau of Economic Research, November 2009. <www.nber.org>

Page 8: “Drawn from the community served by the Perry Elementary School, participants were located through a survey of families associated with that school, as well as through neighborhood group referrals, and door-to-door canvassing. Disadvantaged children living in adverse circumstances were identified using IQ scores and a family socioeconomic status (SES) index.”

[362] Paper: “Multiple Inference and Gender Differences in the Effects of Early Intervention: A Reevaluation of the Abecedarian, Perry Preschool, and Early Training Projects.” Michael L. Anderson. Journal of the American Statistical Association, December 2008. Pages 1481-1495. <are.berkeley.edu>

Page 1482: “The Abecedarian Project recruited and treated four cohorts of children in the Chapel Hill, North Carolina area from 1972 to 1977. … The Abecedarian data set contains 111 children, 57 assigned to the treatment group and 54 assigned to the control group.”

[363] Paper: “Comparative benefit–cost analysis of the Abecedarian program and its policy implications.” By W.S. Barnett and Leonard N. Masse. Economics of Education Review, February 2007. Pages 113-125. <nieer.org>

Page 116: “The curricula are called ‘Learningames, The Abecedarian Curriculum’ and ‘Partners for Learning’ …. The curriculum emphasized language development, but addressed all developmental domains.”

[364] Article: “How Preschool Can Make You Smarter and Healthier.” By Madeline Ostrander. PBS, April 9, 2015. <www.pbs.org>

There was a sense of idealism in the air in 1971 when Craig Ramey, a psychologist in his late 20s with a newly minted Ph.D., took a job in Chapel Hill, North Carolina, to launch what would become one of the longest-running educational experiments in history. He became a lead researcher at the University of North Carolina’s Frank Porter Graham Child Development Center…. He and Joseph Sparling, the center’s senior investigator and associate director and a former school principal, wanted to study a sample of Chapel Hill children and test whether it was possible to change the course of a life by stepping in early, from infancy. They named their experiment the Abecedarian Project, from an obscure Latinate word for an alphabetical sequence.

[365] Paper: “Comparative benefit–cost analysis of the Abecedarian program and its policy implications.” By W.S. Barnett and Leonard N. Masse. Economics of Education Review, February 2007. Pages 113-125. <nieer.org>

Pages 115-116: “The study randomly assigned to a treatment or control condition 112 children, mostly African American, born between 1972 and 1977 and who were believed to be at risk of retarded intellectual and social development. Family background characteristics at study entry were: maternal education of approximately 10 yr, maternal IQ of 85, 25% of households with both parents, and 55% of households receiving Aid to Families with Dependent Children.”

[366] Paper: “Multiple Inference and Gender Differences in the Effects of Early Intervention: A Reevaluation of the Abecedarian, Perry Preschool, and Early Training Projects.” Michael L. Anderson. Journal of the American Statistical Association, December 2008. Pages 1481-1495. <are.berkeley.edu>

Page 1482: “Children were randomly assigned to treated and control groups.”

[367] Paper: “Comparative benefit–cost analysis of the Abecedarian program and its policy implications.” By W.S. Barnett and Leonard N. Masse. Economics of Education Review, February 2007. Pages 113-125. <nieer.org>

Page 116: “Random assignment occurred between 6 and 12 weeks of age.”

[368] Paper: “Multiple Inference and Gender Differences in the Effects of Early Intervention: A Reevaluation of the Abecedarian, Perry Preschool, and Early Training Projects.” Michael L. Anderson. Journal of the American Statistical Association, December 2008. Pages 1481-1495. <are.berkeley.edu>

Page 1482: “The [Abecedarian] program focused on developing cognitive, language, and social skills in classes of about six.”

[369] Paper: “Comparative benefit–cost analysis of the Abecedarian program and its policy implications.” By W.S. Barnett and Leonard N. Masse. Economics of Education Review, February 2007. Pages 113-125. <nieer.org>

Page 116: “The curricula are called ‘Learningames, The Abecedarian Curriculum’ and ‘Partners for Learning’ …. The curriculum emphasized language development, but addressed all developmental domains.”

[370] Paper: “Multiple Inference and Gender Differences in the Effects of Early Intervention: A Reevaluation of the Abecedarian, Perry Preschool, and Early Training Projects.” Michael L. Anderson. Journal of the American Statistical Association, December 2008. Pages 1481-1495. <are.berkeley.edu>

Page 1482: “The treated children entered the program very early (mean age, 4.4 months). They attended … until reaching schooling age.”

[371] Paper: “Multiple Inference and Gender Differences in the Effects of Early Intervention: A Reevaluation of the Abecedarian, Perry Preschool, and Early Training Projects.” Michael L. Anderson. Journal of the American Statistical Association, December 2008. Pages 1481-1495. <are.berkeley.edu>

Page 1482: “The treated children entered the program very early (mean age, 4.4 months). They attended a preschool center for 8 hours per day, 5 days per week, 50 weeks per year until reaching schooling age.”

CALCULATIONS:

8-10 (hours per day) × 5 (days per week) × 50 (weeks per year) = 2,000-2,500 hours per year

2,000-2,500 (hours per year) × 4 (years) = 8,000-10,000 hours

8,000-10,000 (hours for the Abecedarian program) / 924 (hours for the Perry program) = 8.7-10.8

NOTE: An Excel file containing more detailed calculations of preschool hours is available upon request.
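NOTE: A short Python rendering of the range above. The 8-10 hours per day reflects the two sources quoted here: “8 hours per day” in Anderson (2008) versus the 7:30 a.m. to 5:30 p.m. center day described by Barnett and Masse (2007).

perry_hours = 924                    # from the calculation in footnote 331
for hours_per_day in (8, 10):
    abecedarian_hours = hours_per_day * 5 * 50 * 4   # days/week, weeks/year, years
    print(abecedarian_hours, round(abecedarian_hours / perry_hours, 1))   # 8000-10000; 8.7-10.8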

[372] Paper: “Comparative benefit–cost analysis of the Abecedarian program and its policy implications.” By W.S. Barnett and Leonard N. Masse. Economics of Education Review, February 2007. Pages 113-125. <nieer.org>

Page 116: “The center was operated from 7:30 a.m. to 5:30 p.m., 5 days per week, and 50 weeks out of the year, with free transportation available. This constitutes 2500 h/yr and is compatible with the needs of most full-time working parents, in contrast to the typical part-day preschool program which might provide 450–540 h/yr (2.5–3 h/day, 180 days).”

[373] Paper: “Comparative benefit–cost analysis of the Abecedarian program and its policy implications.” By W.S. Barnett and Leonard N. Masse. Economics of Education Review, February 2007. Pages 113-125. <nieer.org>

Page 116: “The preschool program was center-based with teacher/child ratios that ranged from 1:3 for infants/toddlers to 1:6 for older children.”

[374] Calculated with data from:

a) Paper: “Multiple Inference and Gender Differences in the Effects of Early Intervention: A Reevaluation of the Abecedarian, Perry Preschool, and Early Training Projects.” Michael L. Anderson. Journal of the American Statistical Association, December 2008. Pages 1481-1495. <are.berkeley.edu>

Page 1482: “The treated children entered the program very early (mean age, 4.4 months). They attended a preschool center for 8 hours per day, 5 days per week, 50 weeks per year until reaching schooling age.”

b) Paper: “Comparative benefit–cost analysis of the Abecedarian program and its policy implications.” By W.S. Barnett and Leonard N. Masse. Economics of Education Review, February 2007. Pages 113-125. <nieer.org>

Page 116: “The preschool program was center-based with teacher/child ratios that ranged from 1:3 for infants/toddlers to 1:6 for older children.”

Page 117: “Average enrollment in the nursery was about 12 infants and the staff/child ratio was 1:3. Average age at entry was 4.4 months. In program years two and three group size averaged about seven children for both age groups and the staff/child ratio was 1:3.5. In program years four and five the average was 12 children per group at each age, and the staff/child ratio was 1:6.”

Page 122: “The Abecedarian program also had strong supervision, a well-designed curriculum, well-compensated staff (comparable to the public schools) and on-going evaluation.”

c) Report: “Kindergarten Entrance Age and Children’s Achievement: Impacts of State Policies, Family Background, and Peers.” By Todd E. Elder and Darren H. Lubotsky. RAND Corporation, June 2006. <www.rand.org>

Page 10: “Figure 3.—Percentage distribution of first-time kindergartners, by age at kindergarten entrance: Fall 1998”

d) Dataset: “Table 203.90. Average daily attendance (ADA) as a percentage of total enrollment, school day length, and school year length in public schools, by school level and state: 2007-08 and 2011-12.” U.S. Department Of Education, National Center for Education Statistics, May 2013. <nces.ed.gov>

“2011-12 … United States … Average hours in school day [=] 6.7 … Average days in school year [=] 179”

e) Webpage: “Teacher trends.” U.S. Department of Education, National Center for Education Statistics. Accessed July 25, 2015 at <nces.ed.gov>

“The public school pupil/teacher ratio increased to 16.0 in 2011.”

f) Dataset: “Table 236.20. Total expenditures for public elementary and secondary education and other related programs, by function and subfunction: Selected years, 1990-91 through 2011-12.” U.S. Department Of Education, National Center for Education Statistics, July 2014. <nces.ed.gov>

g) Dataset: “Table 236.55. Total and current expenditures per pupil in public elementary and secondary schools: Selected years, 1919-20 through 2011-12.” U.S. Department Of Education, National Center for Education Statistics, July 2014. <nces.ed.gov>

“Expenditure per pupil in fall enrollment … Total expenditure … 2011-12 … Constant 2013-14 dollars [=] 12,401”

NOTE: An Excel file containing the data and calculations is available upon request.

[375] Paper: “Multiple Inference and Gender Differences in the Effects of Early Intervention: A Reevaluation of the Abecedarian, Perry Preschool, and Early Training Projects.” Michael L. Anderson. Journal of the American Statistical Association, December 2008. Pages 1481-1495. <are.berkeley.edu>

Page 1482:

Data collection [for the Abecedarian program] began immediately and has continued, with gaps, through age 21. The data come from three primary sources: interviews with subjects and parents, program-administered tests, and school records. Children received IQ tests on an annual basis from ages 2 through 8, and then once at age 12 and once at age 15. Researchers collected information on grade retention and special education at age 12 and 15 from school records. Data on high school graduation, college attendance, employment, pregnancy, and criminal behavior come from an interview at age 21.

NOTE: The tables on pages 1489-1492 provide data from the Abecedarian program for ages 5, 6.5, 12, 15, 18, 19, and 21.

[376] Paper: “Multiple Inference and Gender Differences in the Effects of Early Intervention: A Reevaluation of the Abecedarian, Perry Preschool, and Early Training Projects.” Michael L. Anderson. Journal of the American Statistical Association, December 2008. Pages 1481-1495. <are.berkeley.edu>

Page 1482: “The Abecedarian Project recruited and treated four cohorts of children in the Chapel Hill, North Carolina area from 1972 to 1977. … The Abecedarian data set contains 111 children, 57 assigned to the treatment group and 54 assigned to the control group. … Follow-up attrition rates are low, ranging from 3% to 6% for most outcomes.”

Page 1483: “Table 1. Summary statistics … Abecedarian … Percent female [=] 53.2”

[377] Paper: “Comparative benefit–cost analysis of the Abecedarian program and its policy implications.” By W.S. Barnett and Leonard N. Masse. Economics of Education Review, February 2007. Pages 113-125. <nieer.org>

Page 116: “By 1978, 104 participants remained in the study, and the follow-up at age 21 involved all 104 of these participants.”

[378] Paper: “Multiple Inference and Gender Differences in the Effects of Early Intervention: A Reevaluation of the Abecedarian, Perry Preschool, and Early Training Projects.” By Michael L. Anderson. Journal of the American Statistical Association, December 2008. Pages 1481-1495. <are.berkeley.edu>

Page 1481: “This article focuses on the three prominent early intervention experiments: the Abecedarian Project, the Perry Preschool Program, and the Early Training Project.”

Pages 1489-1492:

Table 4. Effects on preteen IQ scores

Table 5. Effects on preteen primary school outcomes

Table 6. Effects on teenage academic outcomes

Table 7. Effects on teenage economic and social outcomes

Table 8. Effects on adult academic outcomes

Table 9. Effects on adult economic outcomes

Table 10. Effects on adult social outcomes

NOTE: The authors of this paper did not embolden statistically significant outcomes in their tables of results (cited above). However, based on the text of the paper, the authors treat results with q values (also called FDR q values) less than .10 as statistically significant. They also sometimes apply a stricter standard of q < .05 and refer to a gray zone in which q values up to .13 may be significant. Hence, Just Facts has listed all of the Abecedarian program results with q <= .13. For an Excel file containing the data in the tables above, contact us.

[379] Paper: “Multiple Inference and Gender Differences in the Effects of Early Intervention: A Reevaluation of the Abecedarian, Perry Preschool, and Early Training Projects.” By Michael L. Anderson. Journal of the American Statistical Association, December 2008. Pages 1481-1495. <are.berkeley.edu>

Page 1481:

This article focuses on the three prominent early intervention experiments: the Abecedarian Project, the Perry Preschool Program, and the Early Training Project. …

But serious statistical inference problems affect these studies. The experimental samples are very small, ranging from approximately 60 to 120. Statistical power is therefore limited, and the results of conventional tests based on asymptotic theory may be misleading. More importantly, the large number of measured outcomes raises concerns about multiple inference: Significant coefficients may emerge simply by chance, even if there are no treatment effects. This problem is well known in the theoretical literature … and the biostatistics field … but has received limited attention in the policy evaluation literature. These issues—combined with a puzzling pattern of results in which early test score gains disappear within a few years and are followed a decade later by significant effects on adult outcomes—have created serious doubts about the validity of the results….

Page 1484:

[M]ost randomized evaluations in the social sciences test many outcomes but fail to apply any type of multiple inference correction. To gauge the extent of the problem, we conducted a survey of randomized evaluation works published from 2004 to 2006 in the fields of economic or employment policy, education, criminology, political science or public opinion, and child or adolescent welfare. Using the CSA Illumina social sciences databases, we identified 44 such articles in peer-reviewed journals.

Of these 44 articles, 37 (84%) reported testing 5 or more outcomes, and 27 (61%) reported testing 10 or more outcomes. These figures represent lower bounds for the total number of tests conducted, because many tests may be conducted but not reported. Nevertheless, only three works (7%) implemented any type of multiple-inference correction. … Although multiple-inference corrections are standard (and often mandatory) in psychological research … they remain uncommon in other social sciences, perhaps because practitioners in these fields are unfamiliar with the techniques or because they have seen no evidence that they yield more robust conclusions.

Page 1494:

[Previous] researchers have emphasized the subset of unadjusted significant outcomes rather than applying a statistical framework that is robust to problems of multiple inference. …

Many studies in this field test dozens of outcomes and focus on the subset of results that achieve significance.

[380] Working paper: “The Rate of Return to the High/Scope Perry Preschool Program.” By James J. Heckman and others. National Bureau of Economic Research, November 2009. <www.nber.org>

Page 3:

In a highly cited paper, Rolnick and Grunewald (2003) report a rate of return of 16 percent to the Perry program. Belfield et al. (2006) report a 17 percent rate of return. …

… All of the reported estimates of rates of return are presented without standard errors, leaving readers uncertain as to whether the estimates are statistically significantly different from zero. The paper by Rolnick and Grunewald (2003) reports few details and no sensitivity analyses exploring the consequences of alternative assumptions about costs and benefits of key public programs and the costs of crime. The study by Belfield et al. (2006) also does not report standard errors. It provides more details on how its estimates are obtained, but conducts only a limited sensitivity analysis.

[381] Paper: “Multiple Inference and Gender Differences in the Effects of Early Intervention: A Reevaluation of the Abecedarian, Perry Preschool, and Early Training Projects.” Michael L. Anderson. Journal of the American Statistical Association, December 2008. Pages 1481-1495. <are.berkeley.edu>

Page 1481: “[S]everal randomized early intervention experiments have reported striking increases in short-term IQ scores and long-term outcomes for treated children… This article focuses on the three prominent early intervention experiments: the Abecedarian Project, the Perry Preschool Program, and the Early Training Project. … But serious statistical inference problems affect these studies.”

Page 1482: “Of the three early intervention projects, Abecedarian was by far the most intensive.”

Page 1483: “Nevertheless, there are some important differences in these studies’ findings. In particular, the Perry Preschool Program reported large, statistically significant reductions in juvenile and adult criminal behavior that were not replicated in the Abecedarian Program.”

Page 1492: “Abecedarian females … experience no significant reduction in conviction or incarceration rates by age 21.”

Page 1493: “Previous findings demonstrating significant long-term effects for boys, primarily from the Perry program, do not survive multiplicity [multiple inference] adjustment [for statistical significance] and do not replicate in the other experiments.”

[382] Paper: “Comparative benefit–cost analysis of the Abecedarian program and its policy implications.” By W.S. Barnett and Leonard N. Masse. Economics of Education Review, February 2007. Pages 113-125. <nieer.org>

Page 122: “Yet, the [Abecedarian] program did not produce gains in social and emotional development that elsewhere [the Perry program] have been found to account for a very large portion of potential benefits.”

[383] Paper: “A Reanalysis of the High/Scope Perry Preschool Program.” By James Heckman and others. University of Chicago, January 22, 2010. <www.webmeets.com>

Page 4: “The [Perry] Program intensity was low compared to many subsequent early childhood development programs.”

[384] Paper: “Multiple Inference and Gender Differences in the Effects of Early Intervention: A Reevaluation of the Abecedarian, Perry Preschool, and Early Training Projects.” Michael L. Anderson. Journal of the American Statistical Association, December 2008. Pages 1481-1495. <are.berkeley.edu>

Page 1481: “The view that the returns to educational investments are highest for early childhood interventions is widely held and stems primarily from several influential randomized trials—Abecedarian, Perry, and the Early Training Project—that point to super-normal returns to early interventions. … The experiments underlie the growing movement for universal prekindergarten education….”

[385] Commentary: “The Vague Promise of Obama’s Ambitious Preschool Plan.” By Jonathan Cohn. New Republic, February 15, 2013. <www.newrepublic.com>

“President Barack Obama visited Georgia on Thursday to tout his ambitious new proposal for universal preschool. … Obama’s plan comes from two ‘amazing preschools’—the Perry Preschool Project, in Michigan, and the Abecedarian Project, in North Carolina.”

[386] Article: “Educational Support Services.” By Stephen T. Schroth (Knox College). Encyclopedia of Human Services and Diversity. Edited by Linwood H. Cousins. Sage Publications, 2014.

Page 447: “All 50 states and the District of Columbia provide public education for children from kindergarten through grade 12. Additionally, many states also fund preschool programs that permit some children as young as 3 years of age to attend classes.”

[387] Handbook of Research on School Choice. Edited by Mark Berends, Matthew G. Springer, Dale Ballou, and Herbert J. Walberg. Routledge, 2009.

Page xvi: “In the lower-left cell of Table 1.1 are traditional public schools, which are government-funded and government-operated.”

[388] Dataset: “Table 235.10. Revenues for public elementary and secondary schools, by source of funds: Selected years, 1919-20 through 2011-12.” U.S. Department Of Education, National Center for Education Statistics, July 2014. <nces.ed.gov>

“2011-12 … Federal [=] 10.2% … State [=] 45.2% … Local [=] 44.6% … [Local] Property Taxes [=] 35.9% … Other [Local] Public Revenue [=] 6.7% … [Local] Private [=] 2.0%”

[389] Book: Educational Administration: Concepts and Practices (Sixth edition). By Fred Lunenburg and Allan Ornstein. Wadsworth Cengage Learning, 2012.

Page 343:

School Attendance

All fifty states have some form of compulsory school attendance law. These statutes provide the right of children residing in a district to receive a free public education up to a certain age and exact penalties for noncompliance on parents or guardians.

Compulsory Attendance Laws The courts have sustained compulsory attendance laws on the basis of the legal doctrine of parens patriae. Under this doctrine, the state has the legal authority to provide for the welfare of its children. In turn, the welfare of the state is served by the development of an enlightened citizenry.

Attendance at a public school is not the only way to satisfy the compulsory attendance law. Over eighty years ago, the U.S. Supreme Court in Pierce v. Society of Sisters invalidated an Oregon statute requiring children between the ages of eight and sixteen to attend public schools.67 The Court concluded that by restricting attendance to public schools, the state violated both the property rights of the school and the liberty interests of parents in choosing the plan of education for their children, protected by the Fourteenth Amendment to the Constitution.

Subsequent to Pierce, states have expanded the options available to parents (guardians) for meeting the compulsory attendance law. For example, currently in the state of Kentucky, parents are in compliance with that state’s statute by selecting from the following options: enrolling their children, who must regularly attend, in a private, parochial, or church-related day school; enrolling their children, who must regularly attend, in a private, parochial, church- or state-supported program for exceptional children; or providing home, hospital, institutional, or other regularly scheduled, suitable, equivalent instruction that meets standards of the state board of education.68

Parents or guardians who select one of the options to public school instruction must obtain equivalent instruction. For example, the Washington Supreme Court held that home instruction did not satisfy that state’s compulsory attendance law, for the parents who were teaching the children did not hold a valid teaching certificate.69 In its decision, the court described four essential elements of a school: a certified teacher, pupils of school age, an institution established to instruct school-age children, and a required program of studies (curriculum) engaged in for the full school term and approved by the state board of education. Subsequently, statutes establishing requirements for equivalent instruction (such as certified teachers, program of studies, time devoted to instruction, school-age children, and place or institution) generally have been sustained by the courts.70

Exceptions to Compulsory Attendance The prevailing view of the courts is that religious beliefs cannot abrogate a state’s compulsory attendance law. An exception is the U.S. Supreme Court ruling in Wisconsin v. Yoder, which prevented that state from requiring Amish children to submit to compulsory formal education requirements beyond the eighth grade.71 The Court found that this was a violation of the free exercise of religion clause of the First Amendment. However, most other attempts to exempt students from school based on religious beliefs have failed.

It is commonly held that married pupils, regardless of age, are exempt from compulsory attendance laws. The rationale is that married persons assume adult status, and consequently the doctrine of parens patriae no longer applies. The precedent in this area is based on two Louisiana cases in which fifteen- and fourteen-year-old married women were considered not “children” under the compulsory attendance law.72 A later New York case followed the rationale of the two Louisiana cases in declaring that the obligations of a married woman were inconsistent with school attendance.73 It should be noted, however, that a state cannot deny married minors the right to attend school if they wish.

[390] Handbook of Research on School Choice. Edited by Mark Berends, Matthew G. Springer, Dale Ballou, and Herbert J. Walberg. Routledge, 2009.

Page xvi: “In the lower-left cell of Table 1.1 are traditional public schools, which are government-funded and government-operated. Students within their boundaries are normally assigned to them, and they represent by far the largest number of American schools.”

Page xvii:

Perhaps surprisingly, an estimated one million youngsters (see the homeschooling chapter in this book) are now schooled at home. …

Also in the upper-right quadrant are for-profit tutoring and schooling. When families believe they lack the knowledge, skills, time, or desire to provide homeschooling, yet want things that they think the public schools do not adequately provide, they may voluntarily choose to pay for private tutoring. …

Non-profit private schools, both independent and sectarian, are a long-standing form of privately-funded and privately-operated choice. Parents place such value on the education and circumstances private schools offer that they pay the tuition to send their children to them. …

The upper left quadrant refers to rare schools that are privately operated with the partial or complete financial support of government, either for the school or for individual student tuition. An example is the provision made for autistic, severely physically handicapped, and other types of students with low-incidence, very special needs. Small districts that have insufficient numbers of such students to justify special schools may pay private schools within or outside their boundaries to educate them.

[391] Report: “Documentation to the NCES Common Core of Data Public Elementary/Secondary School Universe Survey: School Year 2010–11, Version Provisional 2a.” U.S. Department Of Education, National Center for Education Statistics, September 2012. <nces.ed.gov>

Page C-3: “Charter School A school providing free public elementary and/or secondary education to eligible students under a specific charter granted by the state legislature or other appropriate authority, and designated by such authority to be a charter school.”

[392] Ruling 536 U.S. 639: Zelman v. Simmons-Harris. U.S. Supreme Court, June 27, 2002. Decided 5-4. Rehnquist, O’Connor, Scalia, Kennedy, Thomas. Dissenting: Stevens, Souter, Ginsburg, and Breyer. <caselaw.findlaw.com>

Majority: “Magnet schools are public schools operated by a local school board that emphasize a particular subject area, teaching method, or service to students.”

[393] Handbook of Research on School Choice. Edited by Mark Berends, Matthew G. Springer, Dale Ballou, and Herbert J. Walberg. Routledge, 2009.

Page xvii:

In the lower-right quadrant are charter schools. They are government-funded but governed and operated by private boards. The aim of charter-enabling state legislation is to promote educational diversity, effectiveness, and accountability. Charter boards may appoint their own staff or hire nonprofit or for-profit management organizations.

The extent to which charter schools are freed from conventional public school regulations and oversight varies substantially from state to state, but in all cases charter schools are accountable to their chartering authority for student achievement and progress. From their beginnings, charter schools were subject to closure for poor achievement performance, but now, if traditional public schools repeatedly fail to improve student achievement, they are also subject to NCLB sanctions and eventual closure or other means of restructuring.

[394] Book: The Education Gap: Vouchers and Urban Schools (Revised Edition). By William G. Howell and Paul E. Peterson with Patrick J. Wolf and David E. Campbell. Brookings Institution Press, 2006 (first published in 2002). <www.brookings.edu>

Page 11:

The first major choice initiative emerged from the conflicts surrounding desegregation in the 1960s. So unpopular was compulsory busing with many Americans that the magnet school was developed as an alternative way of increasing racial and ethnic integration. According to magnet school theory, families could be enticed into choosing integrated schools by offering them distinctive, improved education programs. Although the magnet idea was initially broached in the 1960s, it was not until after 1984 that the magnet school concept, supported by federal funding under the Magnet Schools Assistance program, began to have a national impact.

[395] Handbook of Research on School Choice. Edited by Mark Berends, Matthew G. Springer, Dale Ballou, and Herbert J. Walberg. Routledge, 2009.

Page xvi:

The questions raised here are simplified in that they group several distinctive forms of school choice into a single category of chosen schools. Consider some fundamental distinctions among the major forms of school choice represented in Table 1.1. The four-fold classification categorizes schools according to the possible combinations of school governance and operation on one hand and school funding on the other. As in the case of universities, these distinctions are hardly crisp. Public universities, for example, receive private tuition and donations. Sizable fractions of private universities’ research budgets come from the federal government. Still, these terms are common and offer useful starting points for discussion before turning to more precise operational definitions in the following sections and chapters.

In the lower-left cell of Table 1.1 are traditional public schools, which are government-funded and government-operated. Students within their boundaries are normally assigned to them, and they represent by far the largest number of American schools. In school choice research and policy deliberations, such traditional public schools, also called “neighborhood schools,” are often compared to choice schools such as charter and private schools, which may be near to or far from a student’s home. …

Page xvii:

Perhaps surprisingly, an estimated one million youngsters (see the homeschooling chapter in this book) are now schooled at home. (Again, such categorization isn’t precise since some primarily homeschooled students take supplementary classes and play sports in local public schools and colleges.)

Also in the upper-right quadrant are for-profit tutoring and schooling. When families believe they lack the knowledge, skills, time, or desire to provide homeschooling, yet want things that they think the public schools do not adequately provide, they may voluntarily choose to pay for private tutoring. At least in part, East Asia’s thriving private tutoring sector is often credited for that region’s top scores on international achievement tests. Private tutoring is also popular with East Asian immigrants to the United States, whose children tend to be highly successful students. ….

The NCLB [No Child Left Behind] legislation has also accelerated the growth of for-profit companies, called educational management organizations, which operate schools for school districts and charter boards. They contract with local school districts to take over repeatedly failing public schools.

Non-profit private schools, both independent and sectarian, are a long-standing form of privately-funded and privately-operated choice. Parents place such value on the education and circumstances private schools offer that they pay the tuition to send their children to them. “Public vouchers” provide full or partial tuition at public expense to enable families, often poor and urban, to send their children to these schools. In more than 50 cities, “private vouchers” support such families with contributions from firms and wealthy individuals.

In the lower-right quadrant are charter schools. They are government-funded but governed and operated by private boards. The aim of charter-enabling state legislation is to promote educational diversity, effectiveness, and accountability. Charter boards may appoint their own staff or hire nonprofit or for-profit management organizations. …

Magnet schools arose in response to court-ordered racial desegregation plans that required involuntary bussing of students away from their racially isolated schools to maintain school racial percentages close to their overall district’s percentages. …

The upper left quadrant refers to rare schools that are privately operated with the partial or complete financial support of government, either for the school or for individual student tuition. An example is the provision made for autistic, severely physically handicapped, and other types of students with low-incidence, very special needs. Small districts that have insufficient numbers of such students to justify special schools may pay private schools within or outside their boundaries to educate them.

[396] Dataset: “Table 333.40. Total revenue of private nonprofit degree-granting postsecondary institutions, by source of funds and level of institution: 1999-2000 through 2011-12.” U.S. Department Of Education, National Center for Education Statistics, January 2014. <nces.ed.gov>

“2011-2012 … Student tuition and fees (net of allowances†) [=] 38.93% … Federal appropriations, grants, and contracts [=] 14.92% … State and local appropriations, grants, and contracts [=] 1.21%”

NOTE: † This includes some government revenues:

FASB [Financial Accounting Standards Board] standards give private institutions the option to treat [federal] Pell grants as scholarships or as pass-through transactions, using the logic that the federal government determines who is eligible for the grant, not the institution. Because of this difference in requirements, public institutions will report Pell grants as federal revenues and as allowances (reducing tuition revenues), whereas FASB institutions may do this as well or (as seems to be the majority) treat Pell grants as pass-through transactions. The result is that in the case where a FASB institution and GASB [Governmental Accounting Standards Board] institution each receive the same amount of Pell grants on behalf of their students, the GASB institution will appear to have less tuition and more federal revenues, whereas the FASB institution treating Pell as pass-through will appear to have more tuition and less federal revenues.

[Webpage: “IPEDS (Integrated Postsecondary Education Data System) Finance Survey Tips Scholarships, Grants, Discounts, and Allowances.” U.S. Department Of Education, National Center for Education Statistics. Accessed June 13, 2015 at <nces.ed.gov>]
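
NOTE: As a hedged illustration of the reporting difference described above, consider a hypothetical institution that charges a student $10,000 in tuition, of which $3,000 is paid by a federal Pell grant. The figures are invented for this example and are not IPEDS data.

    # Hypothetical figures only; they illustrate the FASB/GASB reporting
    # difference described above, not actual IPEDS data.
    gross_tuition = 10_000
    pell_grant = 3_000

    # GASB treatment (public institutions): Pell is reported as federal revenue
    # and as a tuition allowance, so reported tuition revenue is reduced.
    gasb = {"tuition_revenue": gross_tuition - pell_grant,  # 7,000
            "federal_revenue": pell_grant}                  # 3,000

    # FASB pass-through treatment (common at private institutions): Pell flows
    # through to the student, so it remains in tuition and not in federal revenue.
    fasb_pass_through = {"tuition_revenue": gross_tuition,  # 10,000
                         "federal_revenue": 0}

    print(gasb)
    print(fasb_pass_through)

The same $3,000 thus appears as federal revenue at one institution and as tuition revenue at the other, which is why the tuition share quoted above can include some government money.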

[397] Ruling 536 U.S. 639: Zelman v. Simmons-Harris. U.S. Supreme Court, June 27, 2002. Decided 5-4. Rehnquist, O’Connor, Scalia, Kennedy, Thomas. Dissenting: Stevens, Souter, Ginsburg, and Breyer. <caselaw.findlaw.com>

O’Connor concurrence:

Federal aid to religious schools is also substantial. Although data for all States is not available, data from Minnesota, for example, suggest that a substantial share of Pell Grant and other federal funds for college tuition reach religious schools. Roughly one-third or $27.1 million of the federal tuition dollars spent on students at schools in Minnesota were used at private 4-year colleges. … The vast majority of these funds--$23.5 million--flowed to religiously affiliated institutions.

[398] Handbook of Research on School Choice. Edited by Mark Berends, Matthew G. Springer, Dale Ballou, and Herbert J. Walberg. Routledge, 2009.

Page xvi: “Sizable fractions of private universities’ research budgets come from the federal government.”

Page xvii: “The upper left quadrant refers to rare schools that are privately operated with the partial or complete financial support of government, either for the school or for individual student tuition. An example is the provision made for autistic, severely physically handicapped, and other types of students with low-incidence, very special needs.”

Page 80:

Hence, since the 19th century, the conception of public education within the United States has been slightly at odds with the conception held in most other industrialized democracies. Outside the U.S. public education implies that the state helps provide common content, regulation of attendance and public financing, but not necessarily public delivery. Within the U.S. public education implies public delivery as well as the prohibition of public financing of faith-based schools.

Page 81: “[T]he degree of private financing in non-state [K-12] schools also varies widely, ranging from 0% in France, Austria, Spain, and Hungary to 100% in the United States.”

[399] Handbook of Research on School Choice. Edited by Mark Berends, Matthew G. Springer, Dale Ballou, and Herbert J. Walberg. Routledge, 2009. Chapter 5: “International Perspectives on School Choice.” By Stephen P. Heyneman (Vanderbilt University).

Page 81: “[T]he degree of private financing in non-state schools also varies widely, ranging from 0% in France, Austria, Spain, and Hungary to 100% in the United States.”

Page 84: “In 1991, New Zealand shifted its traditional highly centralized school system to allow parents to send their children to whatever school they wish, without regard to school ownership or geographical catchment area.”

Page 85: “Without being as visible as either Chile or New Zealand, the approach to school choice in Australia may be more worthy of note. The federal government has funded nonreligious, non-state education since the 1970s.”

Pages 85-86:

Canada is a good source of evidence on school choice, in part, because each province has set its own policies. Public support for nongovernment and nonreligious schools began in the 1960s in Alberta. Today, up to one half of the recurring cost for educating a child in public education is offered to nongovernment schools in Alberta, Manitoba, Quebec, and British Columbia. On the other hand, no subsidy is offered to private schools in Newfoundland, Nova Scotia, Ontario, Saskatchewan, New Brunswick, or Prince Edward Island. One province, Alberta, provides public funding for homeschooling. In three provinces (Alberta, Ontario, and Saskatchewan), full public funding is supplied to Catholic or Protestant schools through school boards.

Pages 87-88: “This country [the Netherlands] has perhaps the oldest and most pervasive policy of school choice…. Today, 76% of Holland students attend non-state schools, and 90% of those are affiliated with either Catholic or Protestant churches.”

[400] Handbook of Research on School Choice. Edited by Mark Berends, Matthew G. Springer, Dale Ballou, and Herbert J. Walberg. Routledge, 2009. Chapter 5: “International Perspectives on School Choice.” By Stephen P. Heyneman (Vanderbilt University).

Page 81:

Fourth and most important is the issue of administrative autonomy. Some assume that non-state schools have more administrative autonomy, as do private and charter schools in the United States. But, in fact, there is a wide variation in the degree to which non-state schools are free of governmental regulation…. In Australia, non-state schools are financed by the government but experience a very low level of government regulation. In Italy and Greece, just the opposite pertains.

Page 82:

Toma argues that the critical difference is not whether the schools are government or non-government but their degree of administrative and financial latitude. She finds that, in general, students in non-government schools tend to perform better in mathematics, but that restrictions on the decision-making authority in nongovernment schools significantly reduce the performance advantage.

Page 83:

The evidence from middle and low income countries is generally consistent with the evidence from OECD countries. Students tend to perform better academically in private schools, and schools with control over their own resources, and school systems which have been administratively decentralized….

[401] Handbook of Research on School Choice. Edited by Mark Berends, Matthew G. Springer, Dale Ballou, and Herbert J. Walberg. Routledge, 2009. Chapter 5: “International Perspectives on School Choice.” By Stephen P. Heyneman (Vanderbilt University). Page 80:

Much of the debate over school choice is based on the premise that there is a public monopoly over the provision of schooling and that schools are inefficient, in part, because of the absence of competition. If families could be treated as consumers and had the right to freely choose which kind of education they would prefer for their children, choice advocates assert that both government and non-government schools would improve…. Choice is believed to have the potential of a stimulant to better teaching, more creative curriculum, more attention to outcomes, and more transparency with respect to results. In short, competition is believed to represent a “tide which will lift all boats” (Hoxby, 2003).

[402] Dataset: “Table 236.55. Total and current expenditures per pupil in public elementary and secondary schools: Selected years, 1919-20 through 2011-12.” U.S. Department Of Education, National Center for Education Statistics, July 2014. <nces.ed.gov>

Expenditure per pupil in fall enrollment (note 1) … Total expenditure (note 4) … 2011-12 …

Unadjusted dollars (note 2) [=] 12,010

Constant 2013-14 dollars (note 3) [=] 12,401 …

Note 1: Data for 1919-20 to 1953-54 are based on school-year enrollment. …

Note 2: Unadjusted (or “current”) dollars have not been adjusted to compensate for inflation.

Note 3: Constant dollars based on the Consumer Price Index, prepared by the Bureau of Labor Statistics, U.S. Department of Labor, adjusted to a school-year basis.

Note 4: Excludes “Other current expenditures,” such as community services, private school programs, adult education, and other programs not allocable to expenditures per student at public schools.
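
NOTE: The relationship between the unadjusted and constant-dollar figures above is a standard inflation adjustment. The sketch below is illustrative only: the adjustment factor is backed out from the two published numbers rather than taken from BLS CPI tables, and NCES additionally adjusts the index to a school-year basis.

    # Illustrative only: the factor is inferred from the two published figures,
    # not looked up from BLS CPI tables.
    unadjusted_2011_12 = 12010      # current dollars (from the table above)
    constant_2013_14 = 12401        # constant 2013-14 dollars (from the table above)

    implied_factor = constant_2013_14 / unadjusted_2011_12
    print(round(implied_factor, 4))  # about 1.0326

    # General form of the conversion:
    # constant dollars = unadjusted dollars * (CPI in target period / CPI in spending period)
    def to_constant_dollars(amount, cpi_spending_period, cpi_target_period):
        return amount * cpi_target_period / cpi_spending_period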

[403] Report: “Documentation to the NCES Common Core of Data Public Elementary/Secondary School Universe Survey: School Year 2010–11, Version Provisional 2a.” U.S. Department Of Education, National Center for Education Statistics, September 2012. <nces.ed.gov>

Page C-6: “Elementary A general level of instruction classified by state and local practice as elementary, composed of any span of grades not above grade 8; preschool or kindergarten included only if it is an integral part of an elementary school or a regularly established school system.”

Page C-14: “Secondary The general level of instruction classified by state and local practice as secondary and composed of any span of grades beginning with the next grade following the elementary grades and ending with or below grade 12.”

[404] See these 13 footnotes for documentation that the following items are excluded from spending data published by the National Center for Education Statistics:

  • State administration spending
  • Unfunded pension benefits
  • Post-employment non-pension benefits like health insurance

[405] Report: “Documentation to the NCES Common Core of Data Public Elementary/Secondary School Universe Survey: School Year 2010–11, Version Provisional 2a.” U.S. Department Of Education, National Center for Education Statistics, September 2012. <nces.ed.gov>

Page C-6: “Elementary A general level of instruction classified by state and local practice as elementary, composed of any span of grades not above grade 8; preschool or kindergarten included only if it is an integral part of an elementary school or a regularly established school system.”

Page C-14: “Secondary The general level of instruction classified by state and local practice as secondary and composed of any span of grades beginning with the next grade following the elementary grades and ending with or below grade 12.”

[406] The next 3 footnotes document that:

  • private-sector economic output is equal to personal consumption expenditures (PCE) + gross private domestic investment (GPDI) + net exports of goods and services.
  • PCE is the “primary measure of consumer spending on goods and services” by private individuals and nonprofit organizations.
  • GPDI is a measure of private spending on “structures, equipment, and intellectual property products.”

Since private school education is not a service that is typically imported or exported, a valid approximation of spending on private K-12 schools can be arrived at by summing PCE, GPDI, and government spending on private K-12 schools. The fourth footnote below details the data used in this calculation. The results of this calculation are consistent with the working paper: “Estimates of Expenditures for Private K-12 Schools.” By Michael Garet, Tsze H. Chan, and Joel D. Sherman. U.S. Department of Education, National Center for Education Statistics, May 1995. <nces.ed.gov>
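
NOTE: Below is a minimal sketch of the approximation just described, reading it as summing the elementary/secondary-school portions of PCE and GPDI together with government spending on private K-12 schools. Only the PCE figure is quoted in footnote 410; the other two inputs are placeholders rather than values from the cited sources, and the full calculation is in the Excel file referenced there.

    # Sketch of the approximation described above. Only the PCE figure is
    # quoted in footnote 410; the other inputs are placeholders, not data
    # from the cited sources.
    pce_elem_sec_schools = 26_525_000_000   # "PCE on Elementary and Secondary Schools" (footnote 410)
    gpdi_private_k12 = 0                    # placeholder: private investment in school structures/equipment
    gov_spending_private_k12 = 0            # placeholder: government spending on private K-12 schools

    def approx_private_k12_spending(pce, gpdi, gov):
        # Net exports are treated as zero because K-12 schooling is rarely traded.
        return pce + gpdi + gov

    # With real GPDI and government figures substituted in, this sum approximates
    # total spending on private K-12 schools.
    print(approx_private_k12_spending(pce_elem_sec_schools, gpdi_private_k12,
                                      gov_spending_private_k12))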

[407] Report: “Fiscal Year 2013 Analytical Perspectives, Budget Of The U.S. Government.” White House Office of Management and Budget, February 12, 2012. <www.gpo.gov>

Page 471:

The main purpose of the NIPAs [national income and product accounts published by the U.S. Bureau of Economic Analysis] is to measure the Nation’s total production of goods and services, known as gross domestic product (GDP), and the incomes generated in its production. GDP excludes intermediate production to avoid double counting. Government consumption expenditures along with government gross investment — State and local as well as Federal — are included in GDP as part of final output, together with personal consumption expenditures, gross private domestic investment, and net exports of goods and services (exports minus imports).

[408] Report: “Concepts and Methods of the U.S. National Income and Product Accounts (Chapters 1–11 and 13).” U.S. Bureau of Economic Analysis, November 2014. <www.bea.gov>

Page 5-1:

Personal consumption expenditures (PCE) is the primary measure of consumer spending on goods and services in the U.S. economy.1 It accounts for about two-thirds of domestic final spending, and thus it is the primary engine that drives future economic growth. PCE shows how much of the income earned by households is being spent on current consumption as opposed to how much is being saved for future consumption.

PCE also provides a comprehensive measure of types of goods and services that are purchased by households. Thus, for example, it shows the portion of spending that is accounted for by discretionary items, such as motor vehicles, or the adjustments that consumers make to changes in prices, such as a sharp run-up in gasoline prices.2

Page 5-2:

PCE measures the goods and services purchased by “persons”—that is, by households and by nonprofit institutions serving households (NPISHs)—who are resident in the United States. Persons resident in the United States are those who are physically located in the United States and who have resided, or expect to reside, in this country for 1 year or more. PCE also includes purchases by U.S. government civilian and military personnel stationed abroad, regardless of the duration of their assignments, and by U.S. residents who are traveling or working abroad for 1 year or less.

Page 5-64:

Nonprofit institutions serving households

In the NIPAs, nonprofit institutions serving households (NPISHs), which have tax-exempt status, are treated as part of the personal sector of the economy. Because NPISHs produce services that are not generally sold at market prices, the value of these services is measured as the costs incurred in producing them.

In PCE, the value of a household purchase of a service that is provided by a NPISH consists of the price paid by the household or on behalf of the household for that service plus the value added by the NPISH that is not included in the price. For example, the value of the educational services provided to a student by a university consists of the tuition fee paid by the household to the university and of the additional services that are funded by sources other than tuition fees (such as by the returns to an endowment fund).

[409] Report: “Measuring the Economy: A Primer on GDP and the National Income and Product Accounts.” U.S. Bureau of Economic Analysis, October 2014. <www.bea.gov>

Page 8: “Gross private domestic investment consists of purchases of fixed assets (structures, equipment, and intellectual property products) by private businesses that contribute to production and have a useful life of more than one year, of purchases of homes by households, and of private business investment in inventories.”

[410] Calculated with data from:

a) Dataset: “Table 2.3.5U. Personal Consumption Expenditures by Major Type of Product and by Major Function.” U.S. Bureau of Economic Analysis. Last revised June 1, 2015. <www.bea.gov>

“PCE on Elementary and Secondary Schools [=] $26,525,000,000”

b) “Table 236.20. Total expenditures for public elementary and secondary education and other related programs, by function and subfunction: Selected years, 1990-91 through 2010-11.” U.S. Department Of Education, National Center for Education Statistics, July 2013. <nces.ed.gov>

c) Dataset: “Table 1.1.5. Gross Domestic Product.” U.S. Bureau of Economic Analysis. Last revised February 27, 2015. <www.bea.gov>

d) Dataset: “Table 105.20. Enrollment in educational institutions, by level and control of institution, enrollment level, and attendance status and sex of student: Selected years, fall 1990 through fall 2023.” U.S. Department Of Education, National Center for Education Statistics, January 2014. <nces.ed.gov>

NOTE: An Excel file containing the data and calculations is available upon request.

[411] Book: Antitrust Law (Second edition). By Richard A. Posner. University of Chicago Press, 2001. Pages 12-13:

The optimum monopoly price may be much higher than the competitive price, depending on the intensity of consumer preference for the monopolized product (how much of it they continue to buy at successively higher prices) in relation to its cost. And the monopoly output will be smaller.3

So we now know that output is smaller under monopoly4 than under competition but not that the reduction in output imposes a loss on society. After all, the reduction in output in the monopolized market frees up resources that can and will be put to use in other markets. There is a loss in value, however. The increase in the price of the monopolized product above its cost induces the consumer to substitute products that must cost more (adjusting for any quality difference) to produce (or else the consumer would have substituted them before the price increase), although now they are relatively less expensive, assuming they are priced at a competitive level, that is, at the economically correct measure of cost. Monopoly pricing confronts the consumer with false alternatives: the product that he chooses because it seems cheaper actually requires more of society’s scarce resources to produce. Under monopoly, consumer demands are satisfied at a higher cost than necessary.

This analysis identifies the cost of monopoly with the output that the monopolist does not produce, and that a competitive industry would. I have said nothing about the higher prices paid by those consumers who continue to purchase the product at the monopoly price. Those higher prices are the focus of the layperson’s concern about monopoly—an example of the often sharp divergence between lay economic intuition and economic analysis. Antitrust economists used to treat the transfer of wealth from consumer to monopoly producer as completely costless to society, on the theory that the loss to the consumer was exactly offset by the gain to the producer.6 The only cost of monopoly in that analysis was the loss in value resulting from substitution for the monopolized product, since the loss to the substituting consumers is not recouped by the monopolist or anyone else and is thus a net loss, rather than merely a transfer payment and therefore a mere bookkeeping entry on the social books. But the traditional analysis was shortsighted.7 It ignored the fact that an opportunity to obtain a lucrative transfer payment in the form of monopoly profits will attract real resources into efforts by sellers to monopolize and by consumers to avoid being charged monopoly prices (other than by switching to other products, the source of the cost of monopoly on which the conventional economic analysis of monopoly focused). The costs of the resources consumed in these endeavors are costs of monopoly just as much as the costs resulting from the substitution of products that cost society more to produce than the monopolized product, though we’ll see that there may sometimes be offsetting benefits in this competition to become or fend off a monopolist.

[412] Textbook: Economics: Private and Public Choice. By James D. Gwartney and others. South-Western Cengage Learning, 2009. Page 338:

As Adam Smith stressed long ago, when competition is present, even self-interested individuals will tend to promote the general welfare. Conversely, when competition is weakened, business firms will have more leeway to raise prices and pursue their own objectives and less incentive to innovate and develop better ways of doing things.

Competition is a disciplining force for both buyers and sellers. In a competitive environment, producers must provide goods at a low cost and serve the interests of consumers; if they don’t, other suppliers will. Firms that develop improved products and figure out how to produce them at low cost will succeed. Sellers that are unwilling or unable to provide consumers with quality goods at competitive prices will be driven from the market. This process leads to improved products and production methods and directs resources toward projects that create more value. It is a powerful stimulus for economic progress.

[413] Textbook: Business Process Modeling, Simulation and Design. By Manuel Laguna and Johan Marklund. Pearson, 2011. Page 55:

Each market segment where goods and services are sold establishes the basis for competition. The same product, for example, may be sold in different markets by emphasizing price in one, quality in another, functionality (attributes) in yet another, and reliability or service elsewhere. Free trade agreements among countries, such as the North American Free Trade Agreement (NAFTA), or within the European Union (EU) compound the complexity and the intensity of competition because governments are less willing to implement policies designed to protect the local industry. The good news for consumers is that this intense competition tends to drive quality up and prices down. The challenge for companies is that the level of efficiency in their operations must increase (to various degrees, depending upon the status quo), because companies must be able to compete with the world’s best.

[414] Ruling 431 U.S. 209: Abood v. Detroit Board of Education. U.S. Supreme Court, May 23, 1977. Decided 9-0 (with three separate concurrences from four Justices, who sometimes expressed opposing views to the Court’s opinion). <www.law.cornell.edu>

NOTE: This portion of the ruling was not disputed by any of the Justices.

The appellants’ second argument is that in any event collective bargaining in the public sector is inherently “political” and thus requires a different result under the First and Fourteenth Amendments. This contention rests upon the important and often-noted differences in the nature of collective bargaining in the public and private sectors.24 A public employer, unlike his private counterpart, is not guided by the profit motive and constrained by the normal operation of the market. Municipal services are typically not priced, and where they are they tend to be regarded as in some sense “essential” and therefore are often price-inelastic. Although a public employer, like a private one, will wish to keep costs down, he lacks an important discipline against agreeing to increases in labor costs that in a market system would require price increases. A public-sector union is correspondingly less concerned that high prices due to costly wage demands will decrease output and hence employment.

The government officials making decisions as the public “employer” are less likely to act as a cohesive unit than are managers in private industry, in part because different levels of public authority (department managers, budgetary officials, and legislative bodies) are involved, and in part because each official may respond to a distinctive political constituency. And the ease of negotiating a final agreement with the union may be severely limited by statutory restrictions, by the need for the approval of a higher executive authority or a legislative body, or by the commitment of budgetary decisions of critical importance to others.

Finally, decisionmaking by a public employer is above all a political process. The officials who represent the public employer are ultimately responsible to the electorate, which for this purpose can be viewed as comprising three overlapping classes of voters: taxpayers, users of particular government services, and government employees. Through exercise of their political influence as part of the electorate, the employees have the opportunity to affect the decisions of government representatives who sit on the other side of the bargaining table. Whether these representatives accede to a union’s demands will depend upon a blend of political ingredients, including community sentiment about unionism generally and the involved union in particular, the degree of taxpayer resistance, and the views of voters as to the importance of the service involved and the relation between the demands and the quality of service. It is surely arguable, however, that permitting public employees to unionize and a union to bargain as their exclusive representative gives the employees more influence in the decisionmaking process than is possessed by employees similarly organized in the private sector. …

24 See, e. g., K. Hanslowe, The Emerging Law of Labor Relations in Public Employment (1967); H. Wellington & R. Winter, Jr., The Unions and the Cities (1971); Hildebrand, The Public Sector, in J. Dunlop and N. Chamberlain (eds.), Frontiers of Collective Bargaining 125-154 (1967); Rehmus, Constraints on Local Governments in Public Employee Bargaining, 67 Mich.L.Rev. 919 (1969); Shaw & Clark, The Practical Differences Between Public and Private Sector Collective Bargaining, 19 U.C.L.A.L.Rev. 867 (1972); Smith, State and Local Advisory Reports on Public Employment Labor Legislation: A Comparative Analysis, 67 Mich.L.Rev. 891 (1969); Summers, Public Employee Bargaining: A Political Perspective, 83 Yale L.J. 1156 (1974); Project, Collective Bargaining and Politics in Public Employment, 19 U.C.L.A.L.Rev. 887 (1972). The general description in the text of the differences between private- and public-sector collective bargaining is drawn from these sources.

[415] Paper: “Two faces of union voice in the public sector.” By Morley Gunderson. Journal of Labor Research, Summer 2005. <link.springer.com>

Pages 404-405:

Public services, in contrast, are less subject to the pressures of globalization and trade liberalization, but they are not immune. Countries, and political jurisdictions within countries, are increasingly competing for business investment and the jobs associated with that investment,30 creating stronger incentives to provide public services and infrastructures more cost effectively. Physical capital, financial capital, and human capital are increasingly mobile and footloose—able to escape jurisdictions that have excessive taxes and costs for public services and infrastructures. They can increasingly vote with their feet, moving to jurisdictions that provide the Tiebout-type tax and public expenditure package that suits their needs and preferences, compelling governments to face a harder rather than softer budget constraint. Governments may not go out of business, but they may not survive the next election.

[416] Paper: “Binding Interest Arbitration in the Public Sector: Is It Constitutional?” William & Mary Law Review, 1977. Pages 787-821. <scholarship.law.wm.edu>

Page 790: “Overburdened taxpayers, on the other hand, also have been harmed by higher costs of living. They surrender a material portion of their paychecks to the government and expect quality public services at reasonable rates. Public officials, in an effort to appease constituents, attempt to maximize the productivity of public employees as much as possible while holding public spending to a minimum.”

[417] See the sources in the forthcoming footnotes.

[418] The following footnotes contain the primary sources of all experimental (i.e., random assignment) school choice studies known to Just Facts. The studies are arranged from newest to oldest, and the list of these studies was obtained from the Friedman Foundation for Educational Choice.† In September 2015, Just Facts wrote to the Friedman Foundation to ask how it can be sure that there are no other random-assignment school choice studies beyond these. A senior fellow replied:

Obviously no literature review can ever be totally sure that it hasn’t overlooked something. That is why we lay out in the report the procedure we use to check our knowledge. We start with what we know of, then we use the procedure (which is described in the report) to search for any studies we don’t know about. However, that having been said, the amount of empirical scientific research on school choice programs is not very great, and the world of people who publish and discuss this research professionally is small, so it is unlikely that something as important as a random-assignment study could come out and not be noticed by the entire field.‡

Just Facts’ summary of these studies (“Ten of them found statistically significant positive effects on certain groups of students, and none found statistically significant negative effects”) is corroborated by the authors of one of these studies in a 2013 paper:

How do our results square with the previous empirical literature on school voucher effects? Regarding vouchers and educational attainment, the research record is sparse, almost exclusively comprised of quasi-experimental studies, focused primarily on Catholic private schools, and universally positive. Our experimental results here provide strong confirmation of those prior findings. Regarding vouchers and achievement, our results also fit easily into the existing research literature. The nine prior experimental analyses of the achievement effects of publicly and privately funded voucher programs tended to report positive and statistically significant effects but not necessarily in every year, in every subject, and for every subgroup of participants. The positive achievement effects of vouchers from prior experimental studies have tended to be qualified in various ways and somewhat modest, like our results for the OSP [District of Columbia Opportunity Scholarship Program] reported here.§

† Webpage: “Gold Standard Studies: Evaluating School Choice Programs.” Friedman Foundation. Accessed September 2, 2015. <www.edchoice.org>

‡ Email from the Friedman Foundation to Just Facts, September 14, 2015.

§ Paper: “School Vouchers and Student Outcomes: Experimental Evidence from Washington, DC.” By Patrick J. Wolf and others. Journal of Policy Analysis and Management, Spring 2013. Pages 246-270. <onlinelibrary.wiley.com>

[419] Paper: “Experimentally Estimated Impacts of School Vouchers on College Enrollment and Degree Attainment.” By Matthew M. Chingos and Paul E. Peterson. Journal of Public Economics, February 2015. Pages 1-12. <www.sciencedirect.com>

Abstract:

We provide the first experimental estimates of the long-term impacts of a voucher to attend private school by linking data from a privately sponsored voucher initiative in New York City, which awarded the scholarships by lottery to low-income families, to administrative records on college enrollment and degree attainment. We find no significant effects on college enrollment or four-year degree attainment of the offer of a voucher. However, we find substantial, marginally significant impacts for minority students and large, significant impacts for the children of women born in the United States. Negative point estimates for the children of non-minority and foreign-born mothers are not statistically significant at conventional levels.

[420] Paper: “School Vouchers and Student Outcomes: Experimental Evidence from Washington, DC.” By Patrick J. Wolf and others. Journal of Policy Analysis and Management, Spring 2013. Pages 246-270. <onlinelibrary.wiley.com>

Abstract:

Here we examine the empirical question of whether or not a school voucher program in Washington, DC, affected achievement or the rate of high school graduation for participating students. The District of Columbia Opportunity Scholarship Program (OSP) has operated in the nation’s capital since 2004, funded by a federal government appropriation. Because the program was oversubscribed in its early years of operation, and vouchers were awarded by lottery, we were able to use the “gold standard” evaluation method of a randomized experiment to determine what impacts the OSP had on student outcomes. Our analysis revealed compelling evidence that the DC voucher program had a positive impact on high school graduation rates, suggestive evidence that the program increased reading achievement, and no evidence that it affected math achievement.

Page 258: “Results are described as statistically significant or highly statistically significant if they reach the 95 percent or 99 percent confidence level, respectively.”

Page 260:

The attainment impact analysis revealed that the offer of an OSP scholarship raised students’ probability of graduating from high school by 12 percentage points (Table 3). The graduation rate was 82 percent for the treatment group compared to 70 percent for the control group. The impact of using a scholarship was an increase of 21 percentage points in the likelihood of graduating. The positive impact of the program on this important student outcome was highly statistically significant.

Page 261:

We observed no statistically significant evidence of impacts on graduation rates at the subgroup level for students who applied to the program from non-SINI schools, with relatively lower levels of academic performance, and male students. For all subgroups, the graduation rates were higher among the treatment group compared with the control group, but the differences did not reach the level of at least marginal statistical significance for these three student subgroups. …

Our analysis indicated a marginally statistically significant positive overall impact of the program on reading achievement after at least four years. No significant impacts were observed in math. The reading test scores of the treatment group as a whole averaged 3.9 scale score points higher than the scores of students in the control group, equivalent to a gain of about 2.8 months of additional learning. The calculated impact of using a scholarship was a reading gain of 4.8 scale score points or 3.4 months of additional learning (Table 4).

Page 262:

Reading … Adjusted impact estimate [=] 4.75 … p-value of estimates [=] .06 …

The reading impacts appeared to cumulate over the first three years of the evaluation, reaching the marginal level of statistical significance after two years and the standard level after three years. By that third-year impact evaluation, only 85 of the 2,308 students in the evaluation (3.7 percent) had graded-out of the impact sample, having exceeded 12th grade. Between the third-year and final-year evaluation, an additional 211 students (12.2 percent) graded-out of the sample, reducing the final test score analytic sample to a subgroup of the original analytic sample. Due to this loss of cases for the final test score analysis, the confidence interval around the final point estimates is larger than it was after three years, and the positive impact of the program on reading achievement was only statistically significant at the marginal level.

Page 266: “Here, in the form of the DC school voucher program, Congress and the Obama administration uncovered what appears to be one of the most effective urban dropout prevention programs yet witnessed.”

Page 267: “We did find evidence to suggest that scholarship use boosted student reading scores by the equivalent of about one month of additional learning per year. Most parents, especially in the inner city, would welcome such an improvement in their child’s performance.”

[421] Paper: “A Modified General Location Model for Noncompliance With Missing Data: Revisiting the New York City School Choice Scholarship Program Using Principal Stratification.” By Hui Jin and others. Journal of Educational and Behavioral Statistics, April 2010. Pages 154-173. <jeb.sagepub.com>

Page 156:

In February 1997, the School Choice Scholarship Foundation (SCSF) launched the New York City School Choice Scholarship Program and invited applications from eligible low-income families interested in scholarships toward private school expenses; these scholarships offered up to $1,400 for the academic year 1997–1998. Eligibility requirements included that the children were attending public school in Grades K through 4 in the New York City at the time of application and that their families were poor enough to qualify for free school lunch. The SCSF received applications from over 20,000 students. In a mandatory information session before the lottery to assign the scholarships, each family provided background information, and the children in Grades 1 through 4 took the Iowa Test of Basic Skills (ITBS), the pretest in reading and math. In the final lottery held in May 1997, about 1,000 students were randomly selected to the treatment group and were awarded offers of scholarships; about another 1,000 were selected to the control group without the scholarship. Both groups were followed up and strongly encouraged to take a posttest, again the ITBS, at the end of the 1997-1998 academic year.

Pages 168-170:

[B]oth models find that for compliers [students who moved to private schools] originally from schools with low average scores, attendance in private school will unambiguously improve their overall math performance … as compared to attendance in public school. Such an improvement is especially evident for children in Grade 1…. However, results from the two models differ in some other groups…. Using our model, we find that reading score was likely improved for children from Grade 4 of low average … and children from Grade 1 of high average schools … [T]he estimates of Barnard et al. (2003) of the two groups … respectively, were much smaller.

[422] Paper: “School Choice as a Latent Variable: Estimating the ‘Complier Average Causal Effect’ of Vouchers in Charlotte.” By Joshua M. Cowen. Policy Studies Journal, May 2008. Pages 301–315. <onlinelibrary.wiley.com>

Page 307:

Incoming second- through eighth-grade students from low-income families in Charlotte were offered the opportunity to apply for a $1,700 scholarship to attend a private school for the 1999-2000 school year. Of the original applicants, 347 (30%) agreed to participate in a program evaluation the following spring. At the end of the school year, Iowa Tests of Basic Skills (ITBS) were administered to all students, while their parents completed surveys designed to obtain background information. There was no pretest. Families who had either lost the lottery or had chosen not to accept the voucher were offered $20 and a chance to win a new scholarship to attend the testing sessions.

Page 309:

I begin the analysis of a voucher impact by estimating a typical “Intention-to-Treat” (ITT) model. In this model, students are considered to receive the treatment regardless of whether they use the voucher….

The ITT results indicate a positive voucher impact of 5 points on math scores and roughly 6 points on reading scores, all else equal. …

Next, I … [estimate] the mean effect of voucher treatment using an IV analysis, where the instrument for treatment is the random voucher offer itself….The results are similar to the ITT estimates in their statistical significance: A positive voucher effect appears evident at the p ≤ 0.10 for math achievement and p ≤ 0.05 for reading. The point estimates of the voucher effect increase from 5 to nearly 7 points in math, and from 6 to 8 points in reading.
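
NOTE: The distinction the paper draws between the intention-to-treat (ITT) estimate and the instrumental-variables estimate can be sketched in a few lines. The data below are simulated placeholders, not the Charlotte data, and the IV step is shown in its simplest Wald form (the ITT effect divided by the offer's effect on voucher use) rather than the paper's full regression specification.

    # Simulated illustration of ITT vs. a simple Wald/IV estimate.
    # None of these numbers come from Cowen (2008).
    import numpy as np

    rng = np.random.default_rng(1)
    n = 1000
    offered = rng.integers(0, 2, n)            # random voucher offer (the instrument)
    used = offered * (rng.random(n) < 0.6)     # assume ~60% of offered students use the voucher
    true_effect = 7.0                          # assumed effect of actually using a voucher
    score = 100 + true_effect * used + rng.normal(0, 15, n)

    itt = score[offered == 1].mean() - score[offered == 0].mean()
    first_stage = used[offered == 1].mean() - used[offered == 0].mean()
    wald_iv = itt / first_stage                # "complier average causal effect"

    print(f"ITT estimate:       {itt:.2f}")
    print(f"IV (Wald) estimate: {wald_iv:.2f}")

Because only some offered students actually use the voucher, the IV estimate scales the ITT estimate up by one over the compliance rate, consistent with the paper's point estimates rising from about 5 to nearly 7 points in math and 6 to 8 points in reading when moving from ITT to IV.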

[423] Paper: “Another Look at the New York City School Voucher Experiment.” By Alan B. Krueger and Pei Zhu. American Behavioral Scientist, January 2004. Pages 658-698. <abs.sagepub.com>

Abstract:

This article reexamines data from the New York City school choice program, the largest and best-implemented private school scholarship experiment yet conducted. In the experiment, low-income public school students in kindergarten to Grade 4 were eligible to participate in a series of lotteries for a private school scholarship in May 1997. Data were collected from students and their parents at baseline and in the spring of each of the next 3 years.

Page 693:

Our reanalysis of the New York City school voucher experiment suggests that the positive effect of vouchers on the achievement of African American students emphasized by previous researchers is less robust than commonly acknowledged. Most important, if the cohort of students who were enrolled in kindergarten when the experiment began is included in the sample, the effect of vouchers is greatly attenuated. As the results in Table 5 indicate, treating mother’s and father’s race symmetrically further attenuates the effect of school vouchers for African American children. The evidence is stronger that the availability of private school vouchers raised achievement on math than on reading exams after 3 years, but both effects are relatively small if the sample includes students with missing baseline test scores and students who have at least one Black parent.

[424] Paper: “Principal Stratification Approach to Broken Randomized Experiments: A Case Study of School Choice Vouchers in New York City.” By John Barnard and others. Journal of the American Statistical Association, June 2003. Pages 299-323. <biosun01.biostat.jhsph.edu>

Abstract:

Although this study benefits immensely from a randomized design, it suffers from complications common to such research with human subjects: noncompliance with assigned “treatments” and missing data. Recent work has revealed threats to valid estimates of experimental effects that exist in the presence of noncompliance and missing data, even when the goal is to estimate simple intention-to-treat effects. Our goal was to create a better solution when faced with both noncompliance and missing data. This article presents a model that accommodates these complications that is based on the general framework of “principal stratification” and thus relies on more plausible assumptions than standard methodology. Our analyses revealed positive effects on math scores for children who applied to the program from certain types of schools—those with average test scores below the citywide median. Among these children, the effects are stronger for children who applied in the first grade and for African-American children.

[425] NOTE: The following source conducted three different random-assignment school choice studies:

Book: The Education Gap: Vouchers and Urban Schools (Revised Edition). By William G. Howell and Paul E. Peterson with Patrick J. Wolf and David E. Campbell. Brookings Institution Press, 2006 (first published in 2002). <www.brookings.edu>

Page 39: “We evaluated the privately funded voucher programs in New York City, Dayton, Ohio, and Washington, D.C., and the nationwide CSF program by means of randomized field trials (RFTs), a research design that is well known in the medical field. … In an RFT, subjects are randomly assigned to a treatment or control group.”

Page 45: “A total of 1,500 vouchers were offered to public school students in New York City, 811 in Washington, and 515 in Dayton.46 Because vouchers were allocated randomly, the characteristics of those offered vouchers did not differ significantly from members of the control group.”

Pages 145-147:

All impacts are calculated in terms of national percentile ranking (NPR) points, which vary between 0 and 100, with a national median of 50. … As mentioned, to produce more stable estimates, we provide estimates that combine reading and math scores. (However, impacts did not differ significantly by subject matter.) …

Table 6-1 … reveals no overall private school impact of switching to a private school on student test scores in the three cities. Nor does it reveal any private school impact on the test scores of students from other than African American backgrounds (mainly Hispanic students in New York and white students in Dayton). However, the table shows that the switch to a private school had significantly positive impacts on the test scores of African American students.

Table 6-1 shows that African Americans in all three cities gained, on average, roughly 3.9 NPR points after Year I, 6.3 points after Year II, and 6.6 points after Year III.21 Results for African American students varied by city. In Year I, the only significant gains were observed in New York City, where African Americans attending a private school scored, on average, 5.4 percentile points higher than members of the control group.22 In Year II, significant impacts on African American test scores were evident in all three cities, ranging from 4.3 percentile points in New York City, to 6.5 points in Dayton, to 9.2 points in Washington, D.C. The Year III impact of 9.2 points on African American students’ test scores in New York City is statistically significant. The -1.9 point impact in Year III in Washington, however, is not.

[426] Paper: “Private School Vouchers and Student Achievement: An Evaluation of the Milwaukee Parental Choice Program.” By Cecilia Elena Rouse. Quarterly Journal of Economics, May 1998. Pages 553-602. <faculty.smu.edu>

Page 553:

In 1990 Wisconsin began providing vouchers to a small number of low-income students to attend nonsectarian private schools. Controlling for individual fixed-effects, I compare the test scores of students selected to attend a participating private school with those of unsuccessful applicants and other students from the Milwaukee public schools. I find that students in the Milwaukee Parental Choice Program had faster math score gains than, but similar reading score gains to, the comparison groups. The results appear robust to data imputations and sample attrition, although these deficiencies of the data should be kept in mind when interpreting the results.

Page 554:

In 1990 Wisconsin became the first state in the country to implement a school choice program that provides vouchers to low-income students to attend nonsectarian private schools.2 The number of students in any year was originally limited to 1 percent of the Milwaukee public schools membership, but was expanded to 1.5 percent in 1994. Only students whose family income was at or below 1.75 times the national poverty line were eligible to apply.

Page 558:

I find that students selected for the choice program scored approximately 1.5-2.3 extra percentile points per year in math compared with unsuccessful applicants and the sample of other students in the Milwaukee public schools. The achievement gains of those actually enrolled in the choice schools were quite similar. Given a (within-sample) standard deviation of about nineteen percentile points on the math test, this suggests effect sizes on the order of 0.080-0.120 per year, or 0.320-0.480 over four years, which are quite large for education production functions. I do not estimate statistically significant differences between sectors in reading scores.
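NOTE: The effect sizes in the excerpt above can be reproduced, to rounding, by dividing the annual percentile-point gains by the reported within-sample standard deviation of about nineteen percentile points.

CALCULATIONS:

1.5 / 19 ≈ 0.08 per year; 2.3 / 19 ≈ 0.12 per year

0.08 × 4 = 0.32 over four years; 0.12 × 4 = 0.48 over four years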

Page 561:

Table II. Numbers of Applicants, Selections, And Enrollments

Year of “first” application:   1990   1991   1992   1993

Number of applicants:           583    558    558    559

Number selected:                376    452    321    395

[427] Book: Learning From School Choice. Edited by Paul E. Peterson and Bryan C. Hassel. Brookings Institution, 1998. <www.brookings.edu>

Chapter 13: “School Choice in Milwaukee: A Randomized Experiment.” By Jay P. Greene, Paul E. Peterson, and Jiangtao Du. Pages 342-363.

Page 354: “The results from the Milwaukee choice program reported here are, to the best of our knowledge, the first to estimate from a randomized experiment the comparative achievement effects of public and private schools.”

Pages 346-347:

The Milwaukee choice program, initiated in 1990, provided vouchers to a limited number of students from low-income families to be used to pay tuition at their choice of secular private schools in Milwaukee. …

The number of producers was restricted by the requirement that no more than half of a school’s enrollment could receive vouchers. …

Consumer choice was further limited by excluding the participation of religious schools (thereby precluding use of approximately 90 percent of the private school capacity within the city of Milwaukee). Co-production was also discouraged by prohibiting families from supplementing the vouchers with tuition payments of their own. (But schools did ask families to pay school fees and make voluntary contributions.) Other restrictions also limited program size. Only 1 percent of the Milwaukee public schools could participate, and students could not receive a voucher unless they had been attending public schools or were not of school age at the time of application.

These restrictions significantly limited the amount of school choice that was made available. Most choice students attended fiscally constrained institutions with limited facilities and poorly paid teachers.

Page 352:

The estimated effects of choice schools on mathematics achievement were slight for the first two years students were in the program. But after three years of enrollment students scored 5 percentile points higher than the control group; after four years they scored 10.7 points higher. These differences between the two groups three and four years after their application to choice schools are .24 and .51 standard deviation of the national distribution of math test scores, respectively. They are statistically significant at accepted confidence levels.51 Differences on the reading test were between 2 and 3 percentile points for the first three years and increased to 5.8 percentile points in the fourth. The results for the third and fourth years are statistically significant when the two are jointly estimated.52

Page 356:

The consistency of the results is noteworthy. Positive results were found for all years and for all comparisons except one. The results reported in the main analysis for both math and reading are statistically significant for students remaining in the program for three to four years when these are jointly estimated.

These results after three and four years are moderately large, ranging from .1 of a standard deviation to as much as .5 of a standard deviation. Studies of educational effects interpret effects of .1 standard deviation as slight, effects of .2 and .3 standard deviation as moderate, and effects of .5 standard deviation as large.54 Even effects of .1 standard deviation are potentially large if they accumulate over time. The average difference in test performances of whites and minorities in the United States is one standard deviation.56

[428] Transcript: “Unedited: Bill O’Reilly’s Exclusive Interview with President Obama.” Fox News, February 6, 2014. <nation.foxnews.com>

O’Reilly: The secret to getting a je—good job is education. And in these chaotic families, the children aren’t well-educated because it isn’t—it isn’t, um, encouraged at home as much as it is in other precincts. Now, school vouchers is a way to level the playing field. Why do you oppose school vouchers when it would give poor people a chance to go to better schools?

Obama: Actually—every study that’s been done on school vouchers, Bill, says that it has very limited impact if any—

O’Reilly: Try it.

Obama: On—it has been tried, it’s been tried in Milwaukee, it’s been tried right here in DC—

O’Reilly: And it worked here.

Obama: No, actually it didn’t. When you end up taking a look at it, it didn’t actually make that much of a difference. So what we have been supportive of is, uh, something called charters. Which, within the public school system gives the opportunity for creative experiments by teachers, by principals to-to start schools that have a different approach. And—

O’Reilly: [OVERLAP] You would revisit that? I-I just think—I used be, teach in a Catholic school, a-and I just know—

Obama: [OVERLAP] Bill—you know, I—I’ve taken, I’ve taken—I’ve taken a look at it. As a general proposition, vouchers has not significantly improved the performance of kids that are in these poorest communities—

O’Reilly: [OVERLAP] [INAUDIBLE]

Obama: Some charters—some charters are doing great. Some Catholic schools do a great job, but what we have to do is make sure every child—

[429] Report: “Evaluation of the DC Opportunity Scholarship Program.” By Patrick Wolf and others. U.S. Department of Education, Institute of Education Sciences, June 2010. <ies.ed.gov>

Page xvii:

Guided by language in the statute, the evaluation of the OSP [Opportunity Scholarship Program] relied on lotteries of eligible applicants—random chance—to create two statistically equivalent groups who were followed over time and whose outcomes were compared to estimate Program impacts. A total of 2,308 eligible applicants in the first two years of Program implementation were entered into scholarship lotteries (492 in year one, called “cohort 1,” and 1,816 in year two, called “cohort 2”). Across the cohorts, 1,387 students were randomly assigned to the impact sample’s treatment group (offered a scholarship), while the remaining 921 were assigned to the control group (not offered a scholarship).

Pages xix-xxi:

Student Achievement

• Overall reading and math test scores were not significantly affected by the Program, based on our main analysis approach. On average over the 40-plus months of potential participation, the treatment group scored 3.90 points higher in reading and .70 points higher in math than the control group, but these differences were not statistically significant (figure ES-2).

• No significant impacts on achievement were detected for students who applied from SINI [Schools in Need of Improvement] 2003-05 schools, the subgroup of students for whom the statute gave top priority, or for male students, or those who were lower performing academically when they applied.

• The Program may have improved the reading but not math achievement of the other three of six student subgroups. These include students who came from not SINI 2003-05 schools (by 5.80 scale score points), who were initially higher performing academically (by 5.18 points), or who were female (5.27 points). However, the impact estimates for these groups may be due to chance after applying a statistical test to adjust for multiple comparisons.

High School Graduation (Educational Attainment)

• The offer of an OSP scholarship raised students’ probability of completing high school by 12 percentage points overall. The graduation rate based on parent-provided information6 was 82 percent for the treatment group compared to 70 percent for the control group (figure ES-3). There was a 21 percent difference (impact) for using a scholarship to attend a participating private school.

• The offer of a scholarship improved the graduation prospects by 13 percentage points for the high-priority group of students from schools designated SINI in 2003-05 (79 percent for the treatment group versus 66 percent for the control group) (figure ES-3). The impact of using a scholarship on this group was 20 percentage points.

• Two other subgroups had statistically higher graduation rates as a result of the Program. Those who entered the Program with relatively higher levels of academic performance had a positive impact of 14 percentage points from the offer of a scholarship and 25 percentage points from the use of a scholarship. Female students had a positive impact of 20 percentage points from the offer of a scholarship and 28 percentage points from the use of a scholarship.

• The graduation rates of students from the other subgroups were also higher if they were offered a scholarship, but these differences were not statistically significant.

[430] Webpage: “The Executive Branch.” White House. Accessed February 1, 2013 at <www.whitehouse.gov>

Under Article II of the Constitution, the President is responsible for the execution and enforcement of the laws created by Congress. Fifteen executive departments—each led by an appointed member of the President’s Cabinet—carry out the day-to-day administration of the federal government. …

Department of Education

The mission of the Department of Education is to promote student achievement and preparation for competition in a global economy by fostering educational excellence and ensuring equal access to educational opportunity.

The Department administers federal financial aid for education, collects data on America’s schools to guide improvements in education quality, and works to complement the efforts of state and local governments, parents, and students.

The U.S. Secretary of Education oversees the Department’s 4,200 employees and $68.6 billion budget.

[431] Report: “Losing Our Future: How Minority Youth are Being Left Behind by the Graduation Rate Crisis.” By Gary Orfield and others. Civil Rights Project at Harvard University, Urban Institute, Advocates for Children of New York, and Civil Society Institute, February 25, 2004. <escholarship.org>

Page 2:

In an increasingly competitive global economy, the consequences of dropping out of high school are devastating to individuals, communities and our national economy. At an absolute minimum, adults need a high school diploma if they are to have any reasonable opportunities to earn a living wage. A community where many parents are dropouts is unlikely to have stable families or social structures. Most businesses need workers with technical skills that require at least a high school diploma. Yet, with little notice, the United States is allowing a dangerously high percentage of students to disappear from the educational pipeline before graduating from high school.

[432] Paper: “The Importance of the Ninth Grade on High School Graduation Rates and Student Success in High School.” By Kyle M. McCallumore and Ervin F. Sparapani. Education, March 2010. Pages 447-456. <connection.ebscohost.com>

Abstract: “[T]here is really not much appealing about the reality of the problems in the American education system that permeate beyond kindergarten. Graduation rates are one of the most troubling concerns.”

[433] Book: “High School Dropout, Graduation, and Completion Rates: Better Data, Better Measures, Better Decisions.” Edited by Robert M. Hauser and Judith Anderson Koenig. By the Committee for Improved Measurement of High School Dropout and Completion Rates: Expert Guidance on Next Steps for Research and Policy Workshop, Board on Testing and Assessment, Division of Behavioral and Social Sciences and Education, National Research Council, National Academy of Education. National Academies Press, 2011. <www.nap.edu>

Page 8: “High school graduation and dropout rates have long been used as indicators of educational system productivity and effectiveness and of social and economic well-being.”

[434] “2012 Democratic Party Platform.” Democratic National Committee, September 2012. <www.presidency.ucsb.edu>

Page 5:

Too many students, particularly students of color and disadvantaged students, drop out of our schools, and Democrats know we must address the dropout crisis with the urgency it deserves. The Democratic Party understands the importance of turning around struggling public schools. We will continue to strengthen all our schools and work to expand public school options for low-income youth, including magnet schools, charter schools, teacher-led schools, and career academies.

[435] Paper: “School Vouchers and Student Outcomes: Experimental Evidence from Washington, DC.” By Patrick J. Wolf and others. Journal of Policy Analysis and Management, Spring 2013. Pages 246-270. <onlinelibrary.wiley.com>

Abstract:

Here we examine the empirical question of whether or not a school voucher program in Washington, DC, affected achievement or the rate of high school graduation for participating students. The District of Columbia Opportunity Scholarship Program (OSP) has operated in the nation’s capital since 2004, funded by a federal government appropriation. Because the program was oversubscribed in its early years of operation, and vouchers were awarded by lottery, we were able to use the “gold standard” evaluation method of a randomized experiment to determine what impacts the OSP had on student outcomes. Our analysis revealed compelling evidence that the DC voucher program had a positive impact on high school graduation rates, suggestive evidence that the program increased reading achievement, and no evidence that it affected math achievement.

Page 258: “Results are described as statistically significant or highly statistically significant if they reach the 95 percent or 99 percent confidence level, respectively.”

Page 260:

The attainment impact analysis revealed that the offer of an OSP scholarship raised students’ probability of graduating from high school by 12 percentage points (Table 3). The graduation rate was 82 percent for the treatment group compared to 70 percent for the control group. The impact of using a scholarship was an increase of 21 percentage points in the likelihood of graduating. The positive impact of the program on this important student outcome was highly statistically significant.

Page 261:

We observed no statistically significant evidence of impacts on graduation rates at the subgroup level for students who applied to the program from non-SINI [Schools in Need of Improvement] schools, with relatively lower levels of academic performance, and male students. For all subgroups, the graduation rates were higher among the treatment group compared with the control group, but the differences did not reach the level of at least marginal statistical significance for these three student subgroups. …

Our analysis indicated a marginally statistically significant positive overall impact of the program on reading achievement after at least four years. No significant impacts were observed in math. The reading test scores of the treatment group as a whole averaged 3.9 scale score points higher than the scores of students in the control group, equivalent to a gain of about 2.8 months of additional learning. The calculated impact of using a scholarship was a reading gain of 4.8 scale score points or 3.4 months of additional learning (Table 4).

Page 262:

Reading … Adjusted impact estimate [=] 4.75 … p-value of estimates [=] .06 …

The reading impacts appeared to cumulate over the first three years of the evaluation, reaching the marginal level of statistical significance after two years and the standard level after three years. By that third-year impact evaluation, only 85 of the 2,308 students in the evaluation (3.7 percent) had graded-out of the impact sample, having exceeded 12th grade. Between the third-year and final-year evaluation, an additional 211 students (12.2 percent) graded-out of the sample, reducing the final test score analytic sample to a subgroup of the original analytic sample. Due to this loss of cases for the final test score analysis, the confidence interval around the final point estimates is larger than it was after three years, and the positive impact of the program on reading achievement was only statistically significant at the marginal level.

Page 266: “Here, in the form of the DC school voucher program, Congress and the Obama administration uncovered what appears to be one of the most effective urban dropout prevention programs yet witnessed.”

Page 267: “We did find evidence to suggest that scholarship use boosted student reading scores by the equivalent of about one month of additional learning per year. Most parents, especially in the inner city, would welcome such an improvement in their child’s performance.”

[436] Paper: “Better Schools, Less Crime?” By David J. Deming. Quarterly Journal of Economics, November 2011. Pages 2063-2115. <scholar.harvard.edu>

Page 2064:

In this article, I link a long and detailed panel of administrative data from Charlotte-Mecklenburg school district (CMS) to arrest and incarceration records from Mecklenburg County and the North Carolina Department of Corrections (NCDOC). In 2002, CMS implemented a district-wide open enrollment school choice plan. Slots at oversubscribed schools were allocated by random lottery. School choice in CMS was exceptionally broad-based.

Page 2065:

Across various schools and for both middle and high school students, I find consistent evidence that winning the lottery reduces adult crime.4 The effect is concentrated among African American males and youth who are at highest risk for criminal involvement. Across several different outcome measures and scalings of crime by severity, high-risk youth who win the lottery commit about 50% less crime. They are also more likely to remain enrolled and “on track” in school, and they show modest improvements on school-based behavioral outcomes such as absences and suspensions. However, there is no detectable impact on test scores for any youth in the sample.

Page 2070: “With over 150,000 students enrolled in the 2008–2009 school year, CMS is the 20th largest school district in the nation.”

Pages 2089-2090:

In Figure II, we see that winning the lottery leads to fewer felony arrests overall (p = .078), and the effect is concentrated among the highest risk youth (0.77 felony arrests for lottery losers, 0.43 for winners, p = .013). Similarly, the trimmed social cost of crime is lower overall for lottery winners (p = .040), but the effect is concentrated among the top risk quintile youth ($11,000 for losers, $6,389 for winners, p=.036). The concentration of effects in the top risk quintile is even more pronounced for the middle school sample. The social cost of arrested crimes is $12,500 for middle school lottery losers and $4,643 for winners (p = .020), and the effect for days incarcerated is similarly large and concentrated among high-risk youth (55.5 days for losers, 17.2 for winners, p = .003).

NOTE: Credit for bringing this paper to the attention of Just Facts belongs to Alex Adrianson of the Heritage Foundation. [Commentary: “School Choice a Crime Fighter.” By Alex Adrianson. InsiderOnline, March 2012. <www.insideronline.org>]

[437] “WWC Review of the Report ‘Better Schools, Less Crime?’” U.S. Department of Education, Institute of Education Sciences, What Works Clearinghouse, July 2013. <ies.ed.gov>

Page 2:

The research described in this report meets WWC [What Works Clearinghouse] evidence standards without reservations.

Strengths: The intervention and comparison groups were formed by a well-implemented random process.

Cautions: The study had high levels of attrition for one outcome, the 2004 reading score. The study author demonstrated that students in the intervention and comparison groups were equivalent at baseline on reading achievement. Therefore, the analysis for this outcome meets WWC standards with reservations.

[438] Book: The Education Gap: Vouchers and Urban Schools (Revised Edition). By William G. Howell and Paul E. Peterson with Patrick J. Wolf and David E. Campbell. Brookings Institution Press, 2006 (first published in 2002). <www.brookings.edu>

Rear cover:

William G. Howell is an associate professor in the Government Department at Harvard University and deputy director of the Program on Education Policy and Governance at Harvard. …

Paul E. Peterson is the Henry Lee Shattuck Professor of Government and director of the Program on Education Policy and Governance at Harvard, a senior fellow at the Hoover Institution, and editor-in-chief of Education Next.

Page 30:

No publicly funded voucher program offers all students within a political jurisdiction the opportunity to attend the private school of their choice. All are limited in size and scope, providing vouchers only to students who come from low-income families, who attend “failing” public schools, or who lack a public school in their community.

Page 194: “Most publicly funded voucher programs today are so small that they do little to enrich the existing educational market.”

Pages 196-197:

Most privately funded voucher programs operating today promise financial support for only three to four years.

In the short term, vouchers may yield some educational benefits to the low-income families that use them. But sweeping, systemic change will not materialize as long as small numbers of vouchers, worth small amounts of money, are offered to families for short periods of time. The claims of vouchers’ strongest advocates as well as those of the most ardent opponents, both of whom forecast all kinds of transformations, will be put to the test only if and when the politics of voucher programs stabilizes, support grows, and increasing numbers of educational entrepreneurs open new private schools.

[439] Article: “Spending in nation’s schools falls again, with wide variation across states.” By Emma Brown. Washington Post, January 27, 2016. <www.washingtonpost.com>

“Per-pupil spending is ‘the gold standard in school finance,’ said Stephen Cornman, of the National Center for Education Statistics, which produced the analysis.”

NOTE: Cornman is a statistician who specializes in school finance. [Webpage: “Stephen Cornman.” U.S. Department of Education, National Center for Education Statistics. Accessed January 29, 2016 at <nces.ed.gov>]

[440] Book: Economics of Education. Edited by Dominic J. Brewer and Patrick J. McEwan. Academic Press (an imprint of Elsevier), 2010. Chapter: “School Quality and Earnings.” By J. R. Betts (University of California, San Diego). Pages 52-59.

Page 52: “What factors contribute to high-quality schooling? A large literature studies the relation between class size, teacher qualifications, spending per pupil, and other measures of school inputs with gains in student achievement.”

Page 53: “The literature review below, unless stated otherwise, discusses the relatively large US literature. … The measure of school resources most typically used is school spending per pupil in the district attended, or in the worker’s state of birth.”

[441] Book: Handbook of Research in Education Finance and Policy (Second Edition). Edited by Helen F. Ladd and Margaret E. Goertz. Routledge, 2015. Chapter 15: “Measuring Equity and Adequacy in School Finance.” By Thomas A. Downes and Leanna Stiefel. Pages 244-259.

Page 244:

Over the past 45 years, researchers have devoted significant effort to developing ways to measure two important goals of state school finance systems: the promotion of equity and, more recently the provision of adequacy. Equity, as the term is traditionally used in the school finance literature, is a relative concept that is based on comparisons of inputs (often aggregated into a per-pupil spending measure across school districts). Thus, an equitable finance system is one that reduces to a “reasonable level” the disparity in per-pupil spending across a state’s districts.

Page 245: “While the equity concepts are defined in terms of the treatment of individuals, school finance systems are designed for districts not individuals. Thus the concepts are translated from the individual to the district level by focusing on averages across groups of individuals.”

[442] Book: The Education Gap: Vouchers and Urban Schools (Revised Edition). By William G. Howell and Paul E. Peterson with Patrick J. Wolf and David E. Campbell. Brookings Institution Press, 2006 (first published in 2002). <www.brookings.edu>

Pages 200-202:

Vouchers, by themselves, do not change the amount spent on public education, only the way it is distributed.

In fact, declining enrollments can actually help school systems that rely principally on local taxes because they secure the same amount of money to educate fewer students.

With state funding, the picture changes. For more than a century, almost all states have allocated most of their money to school districts on a per-pupil basis—apparently in the belief that the cost of schooling varies directly with the number of students being taught.

Fluctuations in student enrollments are not uncommon. Over the past few decades, changing birth rates have had marked consequences for public school enrollments across the nation. In 1971, nearly 51.3 million students were enrolled in elementary and secondary schools. A dozen years later, just under 45 million students attended public schools, a decline of more than 12 percent. …

… Although public schools lose state funding [when enrollment declines], they retain all of their local funding, and they have fewer students to teach. The net fiscal impact, therefore, need not cripple public schools. Indeed, it may actually prove salutary. Some simple math makes the point.

Assume that a district receives 45 percent of its funding from the local government, another 45 percent from the state, and 10 percent from the federal government. Next, assume that a voucher program is introduced and that 20 percent of public school students switch to private schools. The public schools automatically lose the state and federal aid that follows those students. Because the district retains all of its local funding while having fewer students to teach, however, per-pupil expenditures actually increase by roughly 11 percent.
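NOTE: The roughly 11 percent increase described in the passage above follows from its stated assumptions; a minimal worked calculation:

CALCULATIONS:

Funding retained when 20% of students leave = 45% (local) + 80% × (45% state + 10% federal) = 45% + 44% = 89% of the original budget

Students remaining = 80% of the original enrollment

Per-pupil spending = 89% / 80% ≈ 111% of the original level, an increase of roughly 11%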

[443] See the section above on the costs of public and private schools.

[444] Book: Economics of Education: Research and Studies. Edited by George Psacharopoulos. Pergamon Press, 1987. Chapter: “Cost Analysis in Education.” By M. Woodhall. Pages 393-399.

Page 394:

The additional cost attributable to one extra student is called the marginal cost, or sometimes the incremental cost. It is measured by the increase in total costs which occurs as a result of increasing enrollment by one unit.

The relationship between average and marginal costs varies between different institutions and depends on the form of the cost function, that is, the relationship between cost and size. It is obvious that total costs will increase if the number of students enrolled in a school or other institution increases, but average and marginal costs may increase, decrease, or remain constant as the number of students changes. There are three possible ways in which average and marginal costs may change as a result of an increase in enrollment. The reason why average and marginal costs vary in different circumstances is that in schools, colleges, or other institutions some costs are fixed, while others are variable with respect to size or number of students. The way in which average and marginal costs change as the total number of students increases depends on whether the majority of costs are fixed or variable and whether all resources are fully utilized or whether there is any spare capacity, which would mean that the number of students could be increased without incurring additional fixed costs.

Whether costs are fixed or variable depends, of course, on the time scale. In the short run, the costs of teachers as well as buildings may be fixed, although the number of books, stationery, and other materials is variable with the number of students. In the long run, however, the number of teachers employed may be varied. The short-run marginal costs of education are therefore likely to be lower than the long-run marginal costs. The extra costs incurred when additional students are enrolled will also depend on the magnitude of the change involved. It may be impossible to measure the extra costs of enrolling one additional student, but perfectly possible to measure the additional costs of enrolling 50 or 100 students, or alternatively the marginal savings made by enrolling 50 or 100 fewer students. However, one recent study of the marginal costs of overseas students in the United Kingdom pointed out that:

clearly there is no such thing as a marginal cost or a marginal saving of overseas students: marginal costs and marginal savings are not discrete numbers but stepwise functions; the marginal costs of adding 100 students might be zero, whereas the marginal costs of adding 200 students might be considerable; moreover, the marginal costs of adding 1,000 students is not twice the marginal costs of adding 500 students. Similar remarks apply to the case of marginal savings. (Blaug 1981 p. 55)

[445] Article: “Competition Passes the Test: Vouchers Improve Public Schools in Florida.” By Marcus A. Winters and Jay P. Greene. Education Next, Summer 2004. Pages 66-71. <educationnext.org>

Page 66:

The A+ program offers all the students in schools that chronically fail the Florida Comprehensive Assessment Test (FCAT) the opportunity to use a voucher to transfer to a private school. Schools face the threat of vouchers only if they are failing. They can remove the threat by improving their test scores. Comparing the performance of schools that were threatened with vouchers and the performance of those that faced no such threat gives a measure of how public schools respond to competition. …

Schools that receive a grade of F twice during any four-year period are deemed chronically failing. Their students then become eligible to receive vouchers, called opportunity scholarships, which they can use at another public school or at a private school. The vouchers are worth the lesser of per-pupil spending in the public schools or the cost of attending the chosen private school.

Page 67:

To analyze the program’s impact on public schools, we collected school-level test scores on the 2001-02 and 2002-03 administrations of the FCAT and the Stanford-9, a national norm-referenced test that is given to all Florida public school students around the same time as the FCAT. The results from the Stanford-9 are particularly useful for our analysis. Schools are not held accountable for their students’ performance on the Stanford-9. As a result, they have little incentive to manipulate the results by “teaching to the test” or through outright cheating. Thus, if gains are witnessed on both the FCAT and the Stanford-9, we can be reasonably confident that the gains reflect genuine improvements in student learning.

Page 68:

We compared the change in test-score performance for each of these groups relative to the rest of Florida public schools between the 2001-02 and 2002-03 administrations of the FCAT and the Stanford-9. …

Each of these results is statistically significant at a very high level, meaning that we can be highly confident that the test-score gains made by schools facing the actuality or prospect of voucher competition were larger than the gains made by other public schools.

Page 69:

Gains in test scores were 15 points higher among those schools whose students were eligible for vouchers than the gains among the rest of Florida’s public schools. Schools whose students were on the verge of becoming eligible also made greater gains. …

The same pattern—of greater gains among schools facing competition or the threat thereof—was witnessed on the national Stanford-9 exam, confirming that the gains reflect genuine improvements in learning rather than teaching to the test or cheating. The gains among schools whose students were eligible for vouchers were enough to erase almost one-fifth of the gap between their average score in the 2001-02 school year and the average score of all other Florida public schools.

[446] Article: “Measuring Competitive Effects From School Voucher Programs: A Systematic Review.” By Anna J. Egalite. Journal of School Choice: International Research and Reform, December 2, 2013. Pages 443-464. <www.tandfonline.com>

Page 447: “This article reviews the complete set of studies that measure the effects of private school competition on public school students’ test scores as a result of school voucher or tax credit scholarship programs using one of the high-quality study designs described above or a similarly rigorous alternative specification.”

Page 449: “The third and final phase of this literature search took a systematic approach to ensure no studies had been overlooked.”

Page 452:

All but one of these 21 studies found neutral/positive or positive results. The only study to find no effects across all subjects was a 2006 study by Greene and Winters of the federal voucher program in Washington, DC. Although each choice program examined in this review takes place in a unique environment, the DC voucher program was exceptional because it was restricted to a relatively small number of participants in the year this study was conducted. Furthermore, a “hold-harmless” provision ensured that public schools were insulated from the financial loss from any students that transferred into private schools with a voucher. The absence of a positive competition effect is thus unsurprising, given these design features.

Page 460:

The strongest studies were those employing a regression discontinuity approach. This sophisticated quasi-experimental, empirical method should be used wherever possible … [because it] allows us to interpret estimates from this model as causal estimates of the competitive effects of a voucher program. Results from studies using this approach unanimously find positive impacts on student academic achievement.

[447] Webpage: “Top Organization Contributors (2002-2014 election cycles).” Center for Responsive Politics. Accessed September 21, 2015 at <www.opensecrets.org>

Totals on this page reflect donations from employees of the organization, its PAC and in some cases its own treasury. These totals include all campaign contributions to federal candidates, parties, political action committees (including super PACs), federal 527 organizations, and Carey committees. The totals do not include contributions to 501(c) organizations, whose political spending has increased markedly in recent cycles. Unlike other political organizations, they are not required to disclose the corporate and individual donors that make their spending possible. Only contributions to Democrats and Republicans or liberal and conservative outside groups are included in calculating the percentages the donor has given to either party. …

Rank 4

National Education Assn

Total Contributions $92,972,656

To Dems & Liberals $88,879,720

To Repubs & Conservs $3,236,859 …

Rank 6

American Federation of Teachers

Total Contributions $69,757,113

To Dems & Liberals $68,983,796

To Repubs & Conservs $349,250 …

Based on data released by the FEC on March 09, 2015.

CALCULATIONS:

$88,879,720 / $92,972,656 = 96%

$68,983,796 / $69,757,113 = 99%

[448] Webpage: “Quick Answers to General Questions.” Federal Election Commission. Accessed September 22, 2015 at <www.fec.gov>

What is a 527 organization?

Entities organized under section 527 of the tax code are considered “political organizations,” defined generally as a party, committee or association that is organized and operated primarily for the purpose of influencing the selection, nomination or appointment of any individual to any federal, state or local public office, or office in a political organization. All political committees that register and file reports with the FEC are 527 organizations, but not all 527 organizations are required to file with the FEC. Some file reports with the Internal Revenue Service (IRS).

[449] Glossary: “FEC Terminology for Candidate Committees.” Federal Election Commission, 2013. <www.fec.gov>

Page 1:

Carey Committee (also known as a Hybrid PAC) -- A political committee that maintains one bank account for making contributions in connection with federal elections and a separate “non-contribution account” for making independent expenditures. The first account is subject to all of the limits and prohibitions of the Act, but the non-contribution account may accept unlimited contributions from individuals, corporations, labor organizations and other political committees. The committee must register with the FEC and report all receipts and disbursements for both accounts.

[450] Book: Labor Relations in the Public Sector (Fifth Edition). By Richard C. Kearney and Patrice M. Mareschal. CRC Press, 2014. Page 40:

Public education is the largest public employer by far in state and local government—more than 5 million individuals work in public education. It is also the most expensive of all state and local services, consuming some $571 billion in expenditures.

Two organizations have dominated the union movement in education: the independent National Education Association (NEA) and the AFL-CIO-affiliated American Federation of Teachers (AFT). The NEA, the largest labor union in the United States, claims a national membership of about 3 million active and retired members, but it lost more than 100,000 between 2010 and 2011, attributable to the Great Recession and its aftermath. The NEA operates with 14,000 local affiliates, with its primary strength in mid-sized cities and suburbs. Its largest state affiliate is the California Teachers Association, with a whopping 325,000 members. Eighty percent of its members are classroom teachers.

The AFT, with a membership roll of about 1.4 million, is concentrated in large cities such as New York, Boston, Chicago, Minneapolis, and Denver. The AFT has presented itself as an aggressive union seeking collective bargaining rights for teachers since its inception in 1919. In contrast, the NEA was born in 1857 as a professional organization open to both teachers and supervisory personnel. Even though many NEA locals function as unions today, and the national organization is officially labeled a “union” by the Bureau of Labor Statistics and the Internal Revenue Service, a large portion of the members are found in locals that do not engage in collective bargaining relationships.

[451] “Letter to the Democrats in the House and Senate on DC Vouchers.” By Dennis Van Roekel (President). National Education Association, March 05, 2009. <www.nea.org>

“Opposition to vouchers is a top priority for NEA. Throughout its history, NEA has strongly opposed any diversion of limited public funds to private schools. The more than 10,000 delegates who attend NEA’s national convention each year have consistently reaffirmed this position.”

[452] “2012 Democratic Party Platform.” Democratic National Committee, September 2012. <www.presidency.ucsb.edu>

Page 5: “The Democratic Party understands the importance of turning around struggling public schools. We will continue to strengthen all our schools and work to expand public school options for low-income youth, including magnet schools, charter schools, teacher-led schools, and career academies.”

[453] “2012 Republican Party Platform.” Republican National Committee, August 2012. <cdn.gop.com>

Page 36:

We support options for learning, including home schooling and local innovations like single-sex classes, full-day school hours, and year-round schools. School choice—whether through charter schools, open enrollment requests, college lab schools, virtual schools, career and technical education programs, vouchers, or tax credits—is important for all children, especially for families with children trapped in failing schools. Getting those youngsters into decent learning environments and helping them to realize their full potential is the greatest civil rights challenge of our time.

[454] Constitution of the United States. Signed September 17, 1787. Enacted June 21, 1788. <justfacts.com>

Article 2, Section 2, Clause 2: “[The President] with the Advice and Consent of the Senate, shall appoint Ambassadors, other public Ministers and Consuls, Judges of the supreme Court….”

[455] Report: “Filibusters and Cloture in the Senate.” By Richard S. Beth and Valerie Heitshusen. Congressional Research Service, December 24, 2014. <www.senate.gov>

Summary:

The filibuster is widely viewed as one of the Senate’s most characteristic procedural features. Filibustering includes any use of dilatory or obstructive tactics to block a measure by preventing it from coming to a vote. The possibility of filibusters exists because Senate rules place few limits on Senators’ rights and opportunities in the legislative process. …

Senate Rule XXII, however, known as the cloture rule, enables Senators to end a filibuster on any debatable matter the Senate is considering. Sixteen Senators initiate this process by presenting a motion to end the debate. In most circumstances, the Senate does not vote on this cloture motion until the second day of session after the motion is made. Then, it requires the votes of at least three-fifths of all Senators (normally 60 votes) to invoke cloture. (Invoking cloture on a proposal to amend the Senate’s standing rules requires the support of two-thirds of the Senators present and voting, whereas cloture on nominations other than to the U.S. Supreme Court requires a numerical majority.)

Pages 9-10:

Invoking cloture usually requires a three-fifths vote of the entire Senate—”three-fifths of the Senators duly chosen and sworn.” Thus, if there is no more than one vacancy, 60 Senators must vote to invoke cloture. In contrast, most other votes require only a simple majority (that is, 51%) of the Senators present and voting, assuming those Senators constitute a quorum. In the case of a cloture vote, the key is the number of Senators voting for cloture, not the number voting against. Failing to vote on a cloture motion has the same effect as voting against the motion: it deprives the motion of one of the 60 votes needed to agree to it.

There are two important exceptions to the three-fifths requirement to invoke cloture. First, under Rule XXII, an affirmative vote of two-thirds of the Senators present and voting is required to invoke cloture on a measure or motion to amend the Senate rules. This provision has its origin in the history of the cloture rule. Before 1975, two-thirds of the Senators present and voting (a quorum being present) was required for cloture on all matters. In early 1975, at the beginning of the 94th Congress, Senators sought to amend the rule to make it somewhat easier to invoke cloture. However, some Senators feared that if this effort succeeded, that would only make it easier to amend the rule again, making cloture still easier to invoke. As a compromise, the Senate agreed to move from two-thirds of the Senators present and voting (a maximum of 67 votes) to three-fifths of the Senators duly chosen and sworn (normally, and at a maximum, 60 votes) on all matters except future rules changes, including changes in the cloture rule itself.17 Second, pursuant to precedent established by the Senate on November 21, 2013, the Senate can invoke cloture on nominations other than those to the U.S. Supreme Court by a majority of Senators voting (a quorum being present).18
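NOTE: The vote counts cited above follow from the Senate’s full membership of 100 Senators.

CALCULATIONS:

100 × 3/5 = 60 votes

100 × 2/3 ≈ 66.7, or at most 67 votes when every Senator is present and voting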

[456] Webpage: “Rules of the Senate: Rule XXII: Precedence Of Motions.” Accessed September 23, 2015 at <www.rules.senate.gov>

2. Notwithstanding the provisions of rule II or rule IV or any other rule of the Senate, at any time a motion signed by sixteen Senators, to bring to a close the debate upon any measure, motion, other matter pending before the Senate, or the unfinished business, is presented to the Senate, the Presiding Officer, or clerk at the direction of the Presiding Officer, shall at once state the motion to the Senate, and one hour after the Senate meets on the following calendar day but one, he shall lay the motion before the Senate and direct that the clerk call the roll, and upon the ascertainment that a quorum is present, the Presiding Officer shall, without debate, submit to the Senate by a yea-and-nay vote the question:

“Is it the sense of the Senate that the debate shall be brought to a close?” And if that question shall be decided in the affirmative by three-fifths of the Senators duly chosen and sworn -- except on a measure or motion to amend the Senate rules, in which case the necessary affirmative vote shall be two-thirds of the Senators present and voting -- then said measure, motion, or other matter pending before the Senate, or the unfinished business, shall be the unfinished business to the exclusion of all other business until disposed of.

Thereafter no Senator shall be entitled to speak in all more than one hour on the measure, motion, or other matter pending before the Senate, or the unfinished business, the amendments thereto, and motions affecting the same, and it shall be the duty of the Presiding Officer to keep the time of each Senator who speaks. Except by unanimous consent, no amendment shall be proposed after the vote to bring the debate to a close, unless it had been submitted in writing to the Journal Clerk by 1 o’clock p.m. on the day following the filing of the cloture motion if an amendment in the first degree, and unless it had been so submitted at least one hour prior to the beginning of the cloture vote if an amendment in the second degree. No dilatory motion, or dilatory amendment, or amendment not germane shall be in order. Points of order, including questions of relevancy, and appeals from the decision of the Presiding Officer, shall be decided without debate.

After no more than thirty hours of consideration of the measure, motion, or other matter on which cloture has been invoked, the Senate shall proceed, without any further debate on any question, to vote on the final disposition thereof to the exclusion of all amendments not then actually pending before the Senate at that time and to the exclusion of all motions, except a motion to table, or to reconsider and one quorum call on demand to establish the presence of a