Despite a great deal of research on course evaluations, institutional policies and practices are not always well informed by that research. Faculty are often not as informed as they should be either. Anecdotal evidence, myth and folklore tend to prevail. It’s good to encourage faculty to learn more about how feedback from students can become a valuable source of instructional information, and that’s where this article can help.
Here’s a collection of resources to check out and recommend to faculty. All of them are broadly relevant, even though some are published in discipline-based journals.
Using ratings results to improve.
Faculty aren’t always clear about what they should do with rating results. Are they a mandate for change? Are they confusing and contradictory? Do they make sense? Can they be used to identify what needs to improve? Do they point toward certain kinds of change? These two articles lay out how the results should be examined and what can be concluded from them.
Boysen, G. A. (2016). Using student evaluations to improve teaching: Evidence-based recommendations. Scholarship of Teaching and Learning in Psychology, 2(4), 273–284.
—Offers a clear, succinct description of how faculty need to analyze student evaluation results if they intend to make decisions about what to change based on the feedback. The advice is helpful, and the article is well written and well documented.
Golding, C., & Adam, L. (2016). Evaluate to improve: Useful approaches to student evaluation. Assessment & Evaluation in Higher Education, 41(1), 1–14.
—The authors conducted focus groups with teachers who used student evaluations to improve and found, among other things, that these teachers viewed the data as formative and focused their improvement efforts on changes that increased student learning.
Misunderstanding ratings and their results.
Here are three studies that address two pervasive but erroneous beliefs about ratings: first, that meaningful conclusions about instructional quality can be drawn from small differences in rating results; and second, that the way to win at the ratings game is with easy courses and lots of high grades.
Boysen, G. A., Kelly, T. J., Paesly, H. N., & Casner, R. W. (2014). The (mis)interpretation of teaching evaluations by college faculty and administrators. Assessment & Evaluation in Higher Education, 39(6), 641–656.
—Reports three studies that looked at how faculty and administrators interpreted small differences between mean ratings (differences small enough to fall within the margin of error). It’s an interesting study design, and it offers compelling evidence.
Centra, J. (2003). Will teachers receive higher student evaluations by giving higher grades and less course work? Research in Higher Education, 44(5), 495–514.
—An analysis involving 50,000 individual courses did not find that higher grades or less course work were associated with higher ratings.
Marsh, H. W., & Roche, L. A. (2000). Effects of grading lenience and low workload on students’ evaluations of teaching: Popular myth, bias, validity or innocent bystander. Journal of Educational Psychology, 92(1), 202–228.
—A study with a very large cohort showing that easy graders and easy courses don’t result in high course evaluations.
Dealing with the negative.
Whether the issue is negative student comments or an overreaction to ratings that are, or are perceived to be, low, these two articles offer helpful, constructive perspectives.
Hodges, L. C., & Stanton, K. (2007). Translating comments on student evaluations into the language of learning. Innovative Higher Education, 31, 279–286.
—Shows how student complaints about quantitative courses, writing-intensive courses, and student-active formats can offer important insights into how students understand learning. Explores options for responding to the complaints.
Gallagher, T. J. (2000). Embracing student evaluations of teaching: A case study. Teaching Sociology, 28, 140–146.
—Recounts how a new teacher responded to a case of not-very-good student ratings.
Feedback beyond end-of-course assessments.
Summative, end-of-course ratings should not be the only source of feedback on teaching. Faculty should be encouraged to regularly collect formative feedback, the kind of diagnostic, descriptive detail that focuses on instructional policies, practices and behaviors. They should also be encouraged to involve students in their attempts to make the course a positive and productive learning experience. The first two articles below describe and assess the kinds of feedback needed to understand the impact of instruction on learning; the other two describe innovative feedback mechanisms.
Brickman, P., Gormally, C., & Martella, A. M. (2016). Making the grade: Using instructional feedback and evaluation to inspire evidence-based teaching. Cell Biology Education, 15(1), 1–14.
—Forty-one percent of 343 biology faculty reported that they were not satisfied with current end-of-course evaluation feedback; another 46 percent said they were only satisfied “in some ways.” The “findings reveal a large, unmet desire for greater guidance and assessment data to inform pedagogical decision making.”
Gormally, C., Evans, M., & Brickman, P. (2014). Feedback about teaching in higher ed: Neglected opportunities to promote change. Cell Biology Education, 13(2), 187–199.
—Summarizes a set of best practices for providing instructional feedback; a very practical and helpful analysis.
Hoon, A., Oliver, E., Szpakowska, K., & Newton, P. (2015). Use of the Stop, Start, Continue method is associated with the production of constructive qualitative feedback by students in higher education. Assessment & Evaluation in Higher Education, 40(5), 755–767.
—Students list instructional policies, practices, or behaviors they’d like the instructor to stop, start, or continue. Using this feedback mechanism improved the quality of student feedback.
Veeck, A., O’Reilly, K., MacMillan, A., & Yu, H. (2016). The use of collaborative midterm student evaluations to provide actionable results. Journal of Marketing Education, 38(3), 157–169.
—Working in teams, students comment on the course using an online collaborative document. Students took the process seriously and provided better feedback, which faculty felt more motivated to act on.
Summaries of student ratings research.
Fortunately, the voluminous research on ratings has been organized, integrated, and written about accessibly. These two books from the 1990s are classics, and research done since their publication is not at odds with the findings they report or the recommendations they make. Most faculty aren’t going to have the time or the inclination to read a book on instructional evaluation, but there are articles that offer succinct summaries. The one from College Teaching is a personal favorite.
Braskamp, L., & Ory, J. (1994). Assessing faculty work: Enhancing individual and institutional performance. San Francisco, CA: Jossey-Bass.
—Both authors did research on student evaluations; Braskamp was also a dean at the University of Illinois at Chicago. The book is well organized and readable.
Centra, J. (1993). Reflective faculty evaluation: Enhancing teaching and determining faculty effectiveness. San Francisco, CA: Jossey-Bass.
—Written by one of the premier student ratings researchers. An excellent summary with implications fully explored.
Hobson, S. M., & Talbot, D. M. (2001). Understanding student evaluations: What all faculty should know. College Teaching, 49(1), 26–31.
—If a book is too much, here’s a five-page, well-organized, clearly written summary of the research on ratings. It offers recommendations to help individual faculty deal with rating results.
This article first appeared in Academic Leader on March 1, 2019 © Magna Publications. All rights reserved.