University of North Carolina at Chapel Hill
Reading student evaluations of teaching can be one of the most challenging tasks a faculty member faces. Anonymous students have the opportunity to tell you anything they wish, whether it pertains directly to the course or not. Opening the file or envelope and reading those comments is distressing to most, so if you feel queasy, uncertain, and a bit apprehensive about what you will find, you are not alone.
There is a great deal of published literature debating whether students are in a position to rate or judge the effectiveness of a faculty member. To me, although students may not be able to comment effectively on every aspect of how well I teach, they most certainly can comment on the extent to which I facilitate their learning. Regardless of where you fall in the debate over whether such evaluations are appropriate evidence for personnel decisions (Gannon, 2018), student ratings and comments offer ideas on how to make changes in our courses. In my mind, the essential consideration is thinking carefully about which areas student feedback can usefully shape and which areas students likely cannot comment on reliably.
Quantitative, Qualitative, and Variable Responses
Don’t let inconsistency of responses dissuade you from recognizing the value of student feedback. You will rarely get the opportunity to teach just a few students; as the number of students in a course increases, so will the variability of responses. We should not consider inconsistent responses to be indicative of an invalid measure. If 20 people go to a movie and 2 say they hated the film, 16 say it was pretty good, and 2 say it was the best movie ever, would we all throw up our hands and say the survey question was flawed and moviegoers have no ability to rate their enjoyment of the film? The same is true for teaching evaluations. In a class of 20 or more students, I anticipate responses will be all over the place.
Instead of giving disproportionate weight to the outliers, look for common responses from many students as a guide to course redesign.
For the closed-ended items, it is relatively easy to look at means or response counts to see which dimensions draw more favorable or less favorable responses. A low mean on a question about the course stimulating student interest in a given content area may suggest diversifying teaching strategies across class sessions (e.g., Harrington & Zakrajsek, 2017; Major, Harris, & Zakrajsek, 2016; Rice, 2017). It is helpful to carefully consider department means and the means of other faculty members teaching the same course, if those numbers are available. It is also beneficial to compare averages for the same class you have taught across several years, if that situation applies to you. Mostly, we all give attention to the descriptive statistics and then turn, with anguish, to the real opportunities to adjust the course: the written comments.
Identify Themes for Each Item
For years, I have used the same technique to learn from the written comments. I don’t read each student’s comments across all prompts. Instead, I read all student comments for one item at a time. That is, if the first item is “What do you like best about this course?” then read responses from every student to that item before reading the next item. One consideration before you analyze comments: any responses that are mean or hateful, with no behavioral suggestions appropriate for the course, should be set aside into a category of “unkind and not helpful.” Likewise, glowing comments, which do make us feel good, go into a similar category of “nice, but not helpful.”
To score responses as you read, write out the categories of responses (either on your computer or on index cards). The strategy is to read the first response and write on your list the major theme from that response. If there are two themes, write out the second theme as well. For example, let’s say the first response to “What do you like best about this course?” is, “I like the practice quizzes and that Dr. Z is always available in his office.” On your record sheet write “1 – practice quizzes; 2 – office availability.” Now on to the second evaluation. When a new theme occurs, include it as item number 3. If a theme is repeated, put a check mark next to the previously identified theme.
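If you prefer a few lines of code to index cards, the bookkeeping is simply a running tally of themes. Here is a minimal sketch in Python; the theme labels and responses are hypothetical, drawn from the example above.

```python
from collections import Counter

# Hypothetical themes pulled from responses to one open-ended item
# ("What do you like best about this course?"). Each response contributes
# one or two themes, just as it would on the index cards.
themes_per_response = [
    ["practice quizzes", "office availability"],  # the first response in the example above
    ["practice quizzes"],
    ["group work"],
    ["practice quizzes", "group work"],
]

# Tally how many responses mention each theme (the "check marks").
tally = Counter(theme for response in themes_per_response for theme in response)

# List themes from most to least mentioned.
for theme, count in tally.most_common():
    print(f"{theme}: {count}")
```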
When you reach the end of the responses for the first open-ended item, move to the next item on the form. Again, this is a very fast read. Do not spend time thinking about the meaning of anything written or how you might change things. Some individuals like to take a break between items; others complete the sorting of themes for all students across all responses in one sitting. As noted previously in this blog, reading student comments is stressful for many faculty, although most faculty do turn to evaluations for teaching improvement in a variety of areas (Yao, Weissinger, & Grady, 2003). If you have a trusted colleague, one option is to have your colleague do this quick-sort method for you, and perhaps you could do the same for your colleague. That way, instead of reading all comments, you are each reading only the helpful comments. For some individuals, reading “you are a terrible teacher and should not be allowed to be in front of other humans” is something that can be very damaging to a career. In such cases, do not pass those comments along to your colleague. Stick to the comments that adhere to the purpose of the form: to improve learning experiences for as many students as possible.
Reflect on the Data Before Making 2 or 3 Changes
To make use of the information, analyze the themes you gathered for each item, beginning with the first open-ended item.
Ignoring the two “not helpful” piles, both the kind and the unkind, look for the category or theme with the most responses.
Write out a few notes regarding what you will continue, adapt, or stop based on the comments. Do this only for the top two or three categories. I have told faculty for years that if a category is mentioned by fewer than ten percent of the students completing the form, that category can safely be ignored. If you have 30 students in a class and one student writes, “the book in this class is confusing…find another book,” then I suggest NOT changing the book. On the other hand, I do suggest taking every “ism” comment seriously, regardless of how few responses mention it. That is particularly true if you have students from underrepresented groups. Feedback of this type requires attention.
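To make the ten-percent rule of thumb concrete, here is a small sketch that continues the hypothetical tally above, assuming a class of 30 students completed the form.

```python
from collections import Counter

# Hypothetical counts carried over from the quick sort of one item.
tally = Counter({
    "practice quizzes": 14,
    "office availability": 6,
    "confusing textbook": 1,   # a single mention in a class of 30
})

class_size = 30                 # assumed number of students who completed the form
cutoff = 0.10 * class_size      # the ten-percent rule of thumb

# Keep only themes mentioned by at least ten percent of respondents.
# Exception (not modeled here): any comment touching on bias or an "ism"
# deserves attention regardless of how few students mention it.
actionable = {theme: count for theme, count in tally.items() if count >= cutoff}
print(actionable)  # {'practice quizzes': 14, 'office availability': 6}
```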
Once you have gone through each open-ended item and noted two or three potential changes for each, you have more than enough to consider for changes to the course. My suggestion is to identify, based on the themes and what you deem most important, just two or three things overall to add, adapt, or stop the next time you teach the course. In that next course, take the time to point out the changes you have made as a result of student comments from the previous semester. Doing so will increase the response rate from students in the current semester, as they see that their feedback is valued.
My final suggestion is to look again to see if you have any comments that make you smile when you read them. Type these out and post them somewhere in your office where you can look at them periodically. It might well be a biased sample of comments, as they were hand-selected, but there are days when a biased sample is exactly what you need.
Discussion questions:
1. How do you feel when you begin to read the results from your student course evaluations? Why do you think you feel the way that you do?
2. In what ways are the closed-ended questions helpful to you? In what ways are the open-ended questions helpful?
3. What can you do to get more honest and helpful responses from your students? This is not about getting more positive responses, but rather about getting responses that would best inform your teaching.
Suggested Readings:
Gannon, K. (2018, May 6). In defense (sort of) of student evaluations of teaching. The Chronicle of Higher Education. Retrieved June 25, 2019, from https://www.chronicle.com/article/In-Defense-Sort-of-of/243325
Harrington, C., & Zakrajsek, T. (2017). Dynamic lecturing: Research-based strategies to enhance lecture effectiveness. Sterling, VA: Stylus Publishing.
Major, C. H., Harris, M., & Zakrajsek, T. (2016). Teaching for learning: 101 intentionally designed educational activities to put students on the path to success. New York: Routledge.
Schwartz, H. (2018). Connected teaching: Relationship, power, and mattering in higher education. Sterling, VA: Stylus Publishing.
Rice, G. T. (2017). Hitting pause: 65 lecture breaks to refresh and reinforce learning. Sterling, VA: Stylus Publishing.
Yao, Y., Weissinger, E., & Grady, G. (2003). Faculty use of student evaluation feedback. Practical Assessment, Research & Evaluation, 8(21), 1-9. Available online: http://PAREonline.net/getvn.asp?v=8&n=21
I decided to make a more personal connection with my students this year and met with each one of them individually to learn a little more about them and to help them solidify their thesis statements in my writing class. My evaluations this term were incredibly positive, clearly reflecting the degree to which students felt I was interested in them personally. In fact, many wanted another meeting! This was an aha moment. I will continue this practice for as long as it is feasible!