Classroom Expernomics: Volume 10 (Fall 2001)

 

Revisiting Teaching Moral Hazard: Additional Classroom Experimental Results

Noel D. Campbell and Thomas W. De Berry
North Georgia College and State University


Abstract

The authors previously presented the results of an attempt to make moral hazard more concrete to beginning economics students by inducing them to demonstrate it through their own exam behavior. Although those earlier results provided no support for a hypothesis of moral hazard, the procedure’s pedagogical usefulness was noted. This paper presents the results of the authors’ attempt to correct perceived flaws in the experimental design and to induce moral hazard with a new sample of beginning economics students. Those results are analyzed and conclusions drawn.

Introduction

This paper presents an account of our further attempts to develop a moral hazard pedagogy by experiment. Drawing on an example in Arnold’s principles text (Arnold 2000, p. 721), we seek, for pedagogical purposes, to induce moral hazard among principles of microeconomics students with respect to study effort. We do so by unexpectedly altering the grading procedures during the course to provide a guaranteed minimum grade. In essence, we unexpectedly offer students "grade insurance" at zero price.

Our initial and follow-up findings are contrary to hypothesis. We present both sets of findings, along with discussion of design changes that we believe may yield results closer to expectations, and therefore improved pedagogy, in the future.

Moral Hazard and Other Asymmetric Information Problems

Classroom discussion of moral hazard usually takes place within a larger discussion of the economic effects of asymmetric or incomplete information. Asymmetric information, along with non-internalized externalities and the existence of public or collective goods, is conceived of as a primary cause of market failure. Market failure exists when a particular good or service is not produced in the optimal quantity, which is the predicted perfectly competitive equilibrium quantity. To the extent that market failure occurs, society fails to reach Pareto optimality. Asymmetric information leads to market failure primarily by causing supply of or demand for a good, service, or resource to deviate from the (hypothetical) perfectly competitive supply or demand. Such information-driven deviations lead to over- or under-production.

Awareness of the existence of asymmetric information may lead to non-optimality in other ways as well. Parties with superior information may strategically select to participate in or abstain from a given market; this is adverse selection, as famously analyzed in George Akerlof’s "lemons model" (Akerlof 1970). Additionally, moral hazard exists when the party with superior information alters his behavior in a way that benefits himself while imposing costs on those with inferior information (Pauly 1974). The most common examples of moral hazard involve insurance (Pauly 1974). The insured has far better information regarding her own behavior than do the insurers. After she has contracted for insurance, she can use that informational superiority to alter her behavior in a way that benefits her exclusively and "socializes" the cost among those with inferior information. For example, after purchasing health coverage, the insured may begin eating a diet higher in fat and sodium; or, after purchasing collision coverage, the insured may begin to drive faster and more carelessly.

Pedagogy of Asymmetric Information

Asymmetric information and moral hazard have become standard features of a wide variety of principles texts, including Arnold (2000), Case and Fair (1996), Gregory and Ruffin (1994), Gwartney, Stroup, and Sobel (2000), Heyne (1997), McConnell and Brue (1996), and O’Sullivan and Sheffrin (2000). The inclusion of market failure as a substantive component of the body of economics principles creates interest in the pedagogy of moral hazard: how can instructors teach the concept in a meaningful way?

Concurrent with this trend are the trends toward experiential learning and toward general acceptance of direct experimentation as a method in economics. We seek to combine these trends to develop an effective experiential, experimental pedagogy for moral hazard. Both authors teach in the business administration department of a public university with a strong teaching emphasis. Our principles of microeconomics students are overwhelmingly traditional students in business administration majors: accounting, finance, management, and marketing. Disdaining purely theoretical modes of presentation, our students prefer results they can concretely demonstrate to themselves and favor "hands-on" activities over "chalk and talk." Insofar as possible, we seek to accommodate these preferences by involving the students in an experiment, built around their study behavior for quizzes, that would demonstrate the moral hazard concept.

The Experiment

Our initial experiment involved two sections of principles of microeconomics in the fall 1999 semester; our follow-up experiment involved both sections of principles of microeconomics in the fall 2000 semester. Both syllabi listed chapter or "topics" quizzes, each weighted as four percent (fall 1999) or five percent (fall 2000) of the total course grade, among other credit items. Identical quizzes were given in the two sections in close succession, giving students from the separate sections little opportunity to interact. Two of these quizzes formed the basis for our moral hazard experiment. In 1999, the first quiz covered elasticity concepts and was graded on a straight ten-point scale (e.g., a "B" awarded for eighty to eighty-nine percent correct responses). The second quiz covered asymmetric information. Days prior to administering the second quiz, both instructors announced that all students would earn a minimum passing grade (a low C), regardless of their actual percentage score on the quiz. For reasons discussed below, we altered the experiment somewhat for 2000: the first quiz again covered elasticity and was graded on the ten-point scale, while the second quiz covered the logic of consumer choice and carried the guaranteed minimum grade.

By guaranteeing a minimum grade, we created the conditions for moral hazard, in a manner analogous to offering "grade point insurance" at zero price. Following the announcement of the grading "floor," students possessed superior information regarding their study efforts. With reduced risk of lowering their grade point averages, and without risk of counter-action by the instructors, students could consume more leisure and exert less effort in studying. This behavior represented a "cost" to the instructor/insurers. We expected this to show up empirically as different mean scores on the two quizzes: by hypothesis, the raw mean score for both sections would be lower on the second quiz as students exhibited moral hazard. Though students were not informed of the experiment while it was ongoing, they were later apprised of the results as a capstone to the asymmetric information instruction, fulfilling the pedagogical purpose.
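To make the incentive concrete, the sketch below models the grade floor as zero-price "grade insurance" that truncates the downside of reduced study effort. It is not part of the original experiment; the effort levels, raw percentages, and the 70 percent floor are illustrative assumptions only.

```python
# Minimal sketch, assuming hypothetical numbers: the guaranteed minimum grade
# acts like zero-price insurance that truncates the downside of studying less.

def recorded_grade(raw_percent, floor_percent=None):
    """Return the recorded grade: the raw percentage, or the guaranteed
    minimum ("grade floor") when a floor has been announced and binds."""
    if floor_percent is None:
        return raw_percent
    return max(raw_percent, floor_percent)

# Hypothetical expected raw percentages under two levels of study effort.
expected_raw = {"high effort": 80.0, "low effort": 55.0}
FLOOR = 70.0  # assumed percentage equivalent of the announced minimum grade

for effort, raw in expected_raw.items():
    print(f"{effort}: uninsured {recorded_grade(raw):.0f}%, "
          f"with grade floor {recorded_grade(raw, FLOOR):.0f}%")

# With the floor in place, cutting effort no longer lowers the recorded grade
# below the floor, so the moral hazard hypothesis predicts less studying and a
# lower raw mean on the insured quiz.
```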

Our institution and department share common and stable demographics. In both years, the institution’s students were predominantly female, but the department’s students exhibited more gender balance. In all cases, the large majority of students were traditional, Caucasian students and Georgia natives. Our principles of microeconomics students tended to be business administration majors, mostly sophomores with some juniors.

Regarding experimental design, these sections presented the possibility of cross-sectional comparisons, in addition to or instead of time-series comparisons. We believe a strict time-series approach, which compares an instructor’s students only with themselves rather than across professors, offers more control. There is, however, an apparent trade-off of biases. Had we organized the experiment as a cross-sectional comparison, we feared creating "cross-professor bias," the difficulty that arises when students who learned material under one professor are compared with students who learned the same material under another. By instead utilizing a time-series approach that compared students only with themselves, we introduced a variety of other biases, which are discussed in the conclusion.

The Results

For 1999, the summary statistics for each quiz are presented by section in Tables 1 and 2. We tested each section to determine whether the sample variances on the two quizzes were similar (Tables 3 and 4). In both instances we were unable to reject the hypothesis of equal variances, which determined the appropriate test of means: a two-sample t-test assuming equal variances. Our t-test results comparing the sample means across the two quizzes for section A are presented in Table 5.

Similar results for section B are presented in Table 6. We found no evidence to support a hypothesis of moral hazard. Our evidence is contrary to hypothesis. In section A, we found no statistically significant differences between the first and second quiz means. In section B, we found statistically significant differences between means; however, the second quiz mean was significantly greater than the first quiz mean.

Despite corrective efforts, our 2000 results are similarly counter-hypothetical. Summary statistics and sample variance tests for 2000 are presented in Tables 7 through 10. Our t-test results for 2000 are presented in Tables 11 and 12. In neither section were the mean scores significantly different between quizzes, again not supporting a moral hazard hypothesis.
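For readers who wish to replicate the procedure, the sketch below illustrates the testing sequence reported in Tables 3 through 12: an F-test for similar variances followed by a pooled two-sample t-test. The score arrays are hypothetical placeholders, not the authors’ data, and SciPy is assumed rather than the spreadsheet software used to produce the tables.

```python
# Minimal sketch of the testing sequence described above, using SciPy.
# quiz1 and quiz2 are hypothetical raw scores for one section.
import numpy as np
from scipy import stats

quiz1 = np.array([9, 12, 7, 10, 8, 11, 6, 13, 9, 10], dtype=float)   # hypothetical
quiz2 = np.array([10, 11, 8, 12, 9, 10, 7, 13, 11, 9], dtype=float)  # hypothetical

# F-test for similar variances (cf. Tables 3-4 and 9-10): ratio of sample
# variances, larger variance in the numerator, one-tailed p-value.
var1, var2 = quiz1.var(ddof=1), quiz2.var(ddof=1)
if var1 >= var2:
    f_stat, df_num, df_den = var1 / var2, len(quiz1) - 1, len(quiz2) - 1
else:
    f_stat, df_num, df_den = var2 / var1, len(quiz2) - 1, len(quiz1) - 1
p_f = 1 - stats.f.cdf(f_stat, df_num, df_den)
print(f"F = {f_stat:.3f}, one-tailed p = {p_f:.3f}")

# If equal variances cannot be rejected, compare means with a pooled-variance
# t-test (cf. Tables 5-6 and 11-12). Moral hazard predicts a lower mean on the
# insured second quiz, so the one-tailed alternative is mean(quiz1) > mean(quiz2).
t_stat, p_two_tail = stats.ttest_ind(quiz1, quiz2, equal_var=True)
p_one_tail = p_two_tail / 2 if t_stat > 0 else 1 - p_two_tail / 2
print(f"t = {t_stat:.3f}, one-tailed p = {p_one_tail:.3f}")
```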

Table 1: Section A Quiz Summary Statistics-1999

Section A     Score Q1    Percent Q1    Score Q2    Percent Q2
Mean:         8.763158    58.42         9.00        60.00
Median:       9           60.00         9.00        60.00
Variance:     7.591038    337.3795      7.214286    320.6349
Std. Dev.:    2.755184    18.36789      2.685942    17.90628

Table 2: Section B Quiz Summary Statistics-1999

Section B     Score Q1    Percent Q1    Score Q2    Percent Q2
Mean:         9.604167    64.03         10.94       72.91
Median:       9.5         63.33         11.00       73.33
Variance:     6.925089    307.7817      6.104533    271.3126
Std. Dev.:    2.631556    17.54371      2.470735    16.47157

Table 3: Section A Test for Similar Variance-1999

Section A analysis:    
F-Test Two-Sample for Variances  
     
  Quiz 1 Quiz 2
Mean 8.763158 9
Variance 7.591038 7.214286
Observations 38 29
Df 37 28

Table 4: Section B Test for Similar Variance-1999

Section B Analysis:  
F-Test Two-Sample for Variances
     
  Quiz 1 Quiz 2
Mean 9.604166667 10.93617
Variance 6.925088652 6.104533
Observations 48 47
Df 47 46
F 1.134417462  
P(F<=f) one-tail 0.334912819  
F Critical one-tail 1.629318902  

Table 5: Section A Test for Similar Mean Quiz Scores-1999

Section A analysis:    
t-Test: Two-Sample Assuming Equal Variances
     
  Quiz 1 Quiz 2
Mean 8.76315789 9
Variance 7.59103841 7.21428571
Observations 38 29
Pooled Variance 7.42874494  
Hypothesized Mean Difference 0  
Df 65  
t Stat -0.3524152  
P(T<=t) one-tail 0.36283353  
t Critical one-tail 1.66863629  
P(T<=t) two-tail 0.72566707  
t Critical two-tail 1.99713668  

Table 6: Section B Test for Similar Mean Quiz Scores-1999

Section B analysis:    
t-Test: Two-Sample Assuming Equal Variances
     
  Quiz 1 Quiz 2
Mean 9.60416667 10.9361702
Variance 6.92508865 6.10453284
Observations 48 47
Pooled Variance 6.51922234  
Hypothesized Mean Difference 0  
Df 93  
t Stat -2.5422323  
P(T<=t) one-tail 0.00633378  
t Critical one-tail 1.66140353  
P(T<=t) two-tail 0.01266757  
t Critical two-tail 1.98579983  

Table 7: Section A Quiz Summary Statistics-2000

Section A     Score Q1    Percent Q1    Score Q2    Percent Q2
Mean:         9.642857    64.21429      10.84615    72.23076923
Median:       10          67            11          73
Variance:     5.93956     268.489       4.474359    199.525641
Std. Dev.:    2.437121    16.38563      2.115268    14.12535455

Table 8: Section B Quiz Summary Statistics-2000

Section B     Score Q1    Percent Q1    Score Q2    Percent Q2
Mean:         9.27027     62.32432      10.05405    67
Median:       10          67            10          67
Variance:     10.48048    452.0586      6.552553    292.9444
Std. Dev.:    3.237357    21.26167      2.559795    17.11562

Table 9: Section A Test for Similar Variance-2000

Section A Analysis:    
F-Test Two-Sample for Variances  
     
  Quiz 1 Quiz 2
Mean 9.642857 10.84615
Variance 5.93956 4.474359
Observations 14 13
Df 13 12
F 1.327466  
P(F<=f) one-tail 0.315217  
F Critical one-tail 2.66018  

Table 10: Section B Test for Similar Variance-2000

Section B Analysis:
F-Test Two-Sample for Variances
  Quiz 1 Quiz 2
Mean 9.27027 10.05405
Variance 10.48048 6.552553
Observations 37 37
Df 36 36
F 1.59945  
P(F<=f) one-tail 0.081821  
F Critical one-tail 1.742972  

Table 11: Section A Test for Similar Mean Quiz Scores-2000

Section A Analysis:
t-Test: Two-Sample Assuming Equal Variances
     
  Quiz 1 Quiz 2
Mean 9.642857 10.84615
Variance 5.93956 4.474359
Observations 14 13
Pooled Variance 5.236264  
Hypothesized Mean Difference 0  
df 25  
t Stat -1.365261  
P(T<=t) one-tail 0.092166  
t Critical one-tail 1.70814  
P(T<=t) two-tail 0.184331  
t Critical two-tail 2.059537  

Table 12: Section B Test for Similar Mean Quiz Scores-2000

Section B Analysis:
t-Test: Two-Sample Assuming Equal Variances  
     
  Quiz 1 Quiz 2
Mean 9.27027 10.05405
Variance 10.48048 6.552553
Observations 37 37
Pooled Variance 8.516517  
Hypothesized Mean Difference 0  
df 72  
t Stat -1.155184  
P(T<=t) one-tail 0.125917  
t Critical one-tail 1.666294  
P(T<=t) two-tail 0.251834  
t Critical two-tail 1.993462  

Conclusions

Strictly speaking, we should conclude that the evidence fails to support a moral hazard hypothesis, which would render the search for an effective moral hazard pedagogy problematic. However, we believe that continuing issues of experimental design and implementation, and peculiarities in students’ rates of time preference, rather than any inherent problem in the theory of moral hazard, likely explain our results. Therefore, though our results continue to be discouraging, we see a path toward continued, and we hope effective, refinement.

Regarding our original experimental design, the following concerns came to light. It became obvious that students perceived a disparity in the difficulty of the material covered on the respective quizzes. Mastery of the elasticity material requires a greater degree of technical or mechanical competence; this more difficult material was tested first and tended to bias the first quiz mean downward. Furthermore, despite the instructors’ efforts to conceal the experiment, many students realized what we were attempting, thereby contaminating behavior. The quizzes were designed as chapter quizzes and were administered soon after the material was presented, so a student received instruction regarding moral hazard and was promptly told that a minimum grade would be awarded on the next chapter quiz. When the negative empirical results were presented in class, students’ prior knowledge of the experiment was one of the first objections raised. Lastly, we realized we needed to script our statements to students regarding the grading and quiz content. Post-experimental discussion revealed that the two sections had formed very different ideas about how the grade floor would work in practice, as well as about the quiz content.

Accordingly, we amended our experimental procedure. Instead of testing students on asymmetric information, we substituted the "more difficult" material of consumer choice for the second quiz; we chose elasticity and consumer choice for their perceived difficulty and conceptual distinctiveness. We administered the second quiz before our initial instruction in moral hazard, and we carefully scripted our statements regarding quiz coverage and the operation of the grade floor. Despite these amendments, we generated a second set of counter-hypothetical results.

Two issues of design and implementation may be overriding the underlying moral hazard we seek to elicit. The first is simple math phobia. Though the mathematics of elasticity is uncomplicated, the concept is one of the most math-intensive taught in principles classes. To the extent that students fear and loathe mathematics, they may be less effective learners and quiz-takers. We may be able to mitigate this effect, and highlight any moral hazard, by offering our "grade insurance" on the elasticity quiz instead.

The second issue is the possibility that a form of "grade illusion" may exist, driven by students’ remarkably high rates of time preference. Rather than recognizing that five percent of his grade is unchanged regardless of when it is earned, a student perceives five percent of his course grade as less important when he has fifty percent of the grade outstanding than when he has only fifteen percent outstanding. This may motivate students to study less diligently for the first quiz, swamping moral hazard. We are not implying students are irrational. Because of students’ high rates of time preference, study effort this week seems much more costly than projected equal study effort three weeks away. Consequently, students "blow off" the early going, then "buckle down" as the term concludes. We may be able to control for this effect by introducing cross-sectional analysis; that is, by having professors teach the chapters in opposite order from each other. Thus, we would expect the section taught elasticity first to demonstrate larger variation between quiz means than the section taught elasticity second.
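A small illustration of this time-preference argument is sketched below; the effort cost and the deliberately impatient weekly discount factor are assumptions chosen for illustration, not estimates of student behavior.

```python
# Minimal sketch, with assumed numbers, of why impatient students defer effort:
# the five percent grade weight is constant, but the perceived cost of studying
# shrinks the further away the quiz is.
effort_cost = 10.0      # hypothetical disutility of studying for one quiz
weekly_discount = 0.7   # hypothetical, deliberately "impatient" discount factor

for weeks_away in range(4):
    present_cost = effort_cost * weekly_discount ** weeks_away
    print(f"effort {weeks_away} week(s) away feels like a cost of {present_cost:.1f}")

# Output: 10.0, 7.0, 4.9, 3.4 -- studying for the earliest quiz looks like the
# worst bargain, so students "blow off" early quizzes and "buckle down" later.
```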

We began with one of the rarest commodities in academia: an idea that might be interesting, useful, and fun, both for us as educators and economists and for our students. Despite the counter-hypothetical results, we still believe in the pedagogical value of the experiment, and we are convinced that our students benefited from their participation. They were excited to be involved in "economics research," and they gained useful analytical experience in discussing why the experiment "failed." Thus, even though the experimental evidence was contrary to hypothesis, we achieved our pedagogical objective: improved student mastery of the moral hazard concept.

References

Akerlof, George A. 1970. "The Market for ‘Lemons’: Quality Uncertainty and the Market Mechanism." Quarterly Journal of Economics 84: 488-500.

Arnold, Roger A. 2000. Economics, 5th ed. Cincinnati: South-Western College Publishing.

Case, Karl E. and Ray C. Fair. 1996. Principles of Economics, 4th ed. Upper Saddle River: Prentice Hall.

Gregory, Paul R. and Roy J. Ruffin. 1994. Economics. New York: Harper Collins College Publishers.

Gwartney, James D., Richard L. Stroup and Russell S. Sobel. 2000. Economics: Private and Public Choice, 9th ed. Ft. Worth: Dryden Press.

Heyne, Paul A. 1997. The Economic Way of Thinking, 8th ed. Upper Saddle River: Prentice Hall.

McConnell, Campbell R. and Stanley L. Brue. 1996. Macroeconomics, 13th ed. New York: McGraw Hill.

O’Sullivan, Arthur and Steven M. Sheffrin. 2000. Economics: Principles and Tools, 2nd ed. Upper Saddle River: Prentice Hall.

Pauly, Mark V. 1974. "Overinsurance and Public Provision of Insurance: The Roles of Moral Hazard and Adverse Selection." Quarterly Journal of Economics 88: 44-62.

