
End-of-Term Teaching Survey Results

Pre-course survey results are available after at least one response is received, giving faculty the opportunity to use early feedback to spark discussion and encourage more students to participate. Read more about using results to inform teaching.

Midterm course survey results are available the day after surveys close. We have resources for interpreting and using early student feedback.

  • Results from pre- and midterm surveys are available to the faculty member only and may be shared or used at their discretion.
  • Faculty may use our Reflection on Student Feedback Worksheet to help reflect on and respond to student feedback collected throughout the term. The worksheet can be found on the Teaching Survey Dashboard and contains links to helpful resources, guided reflection prompts, and space to record summaries of student feedback, actions taken, and outcomes.

End-of-Term Teaching Survey Results

End-of-term teaching survey reports (including numeric results, comments, and responses to questions added by the instructor) are available beginning the day after the term’s grades due date, once final grades are verified as posted (allow 2-3 days after grades are posted for reports to become available). Cross-listed and low-enrollment class reports are available one week after the grades due date. Supervisory reports are available the week after the grades due date.

Teaching survey reports can be found on your teaching survey dashboard. Scroll down to the Reports section of the page, where the most current reports appear. To access prior reports (beginning fall 2016), choose the “Archived” tab on the far-right side of the page. For reports prior to fall 2016, contact us to request past results reports.

Understanding and interpreting your teaching survey report:

Quick Guide to Interpreting your OMET Teaching Survey Report for Instructors and Administrators.

Surveys for all classes are automatically activated regardless of enrollment. Beginning fall 2016, each school decides whether to administer surveys for low-enrollment classes for purposes of including the data in aggregate reporting. There is a minimum threshold set for instructor-level reporting.

OMET does not release quantitative results (Likert scale questions) to individual instructors for classes with fewer than five students enrolled, for two reasons. The first is to protect student anonymity. The second is statistical: small sample sizes can distort analysis. The mean (or average), for example, can be greatly influenced by outliers. In a class of four students, if three gave the instructor a rating of 5 on an item and one gave a rating of 1, that single rating of 1 would pull the class mean for the item from 5.0 down to 4.0. By contrast, in a class of 30 students where 29 gave a rating of 5 and one gave a rating of 1, the mean would be 4.87.
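To make the arithmetic concrete, here is a minimal sketch in Python (standard library only) that reproduces the two means from the hypothetical classes above:

```python
from statistics import mean

# Class of 4: three ratings of 5 and one outlier rating of 1
small_class = [5, 5, 5, 1]
# Class of 30: twenty-nine ratings of 5 and the same outlier rating of 1
large_class = [5] * 29 + [1]

print(round(mean(small_class), 2))  # 4.0  -- one outlier drops the mean a full point
print(round(mean(large_class), 2))  # 4.87 -- the same outlier barely moves the mean
```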

Report Release

  • Standard Reports (including numeric results, comments, and responses to questions added by the instructor) are released after the term is over and final grades are posted.
  • Low enrollment classes:
    • Individual (non-cross-listed) classes – no report issued for classes with fewer than five students enrolled
    • Cross-listed classes:
      • No report issued for any of the sections with fewer than five students enrolled
      • A combined report will be automatically issued for cross-listed classes if the total enrollment across all sections is five or more and the class is not taught by multiple instructors. This report is released on the Friday one week after grades are due.
    • Comment and QP Reports: These reports are available upon request for surveys of classes with fewer than five students enrolled. They are not published reports and will not appear on the Teaching Survey Dashboard.
    • Supervisor Reports: Released the Friday after grades are due.
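The release rules above can be condensed into a short decision function. This is a minimal illustrative sketch, not an OMET tool; the function name, parameters, and return strings are assumptions made for readability:

```python
def report_status(section_enrollment: int,
                  cross_listed: bool = False,
                  combined_enrollment: int = 0,
                  multiple_instructors: bool = False) -> str:
    """Illustrative summary of the report-release rules listed above."""
    if section_enrollment >= 5:
        return "standard report issued"
    if not cross_listed:
        return "no report (fewer than five students enrolled)"
    # Small cross-listed sections get no individual report, but a combined
    # report covers all sections once total enrollment reaches five.
    if combined_enrollment >= 5:
        if multiple_instructors:
            return "combined report available by special request"
        return "combined report issued automatically"
    return "no report (combined enrollment under five)"

print(report_status(3))  # no report (fewer than five students enrolled)
print(report_status(3, cross_listed=True, combined_enrollment=8))  # combined report issued automatically
```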

Results to Likert Scaled Items – A Likert scale is an ordinal five- (or seven-) point rating scale used to measure opinions or attitudes.

Results Statistics:

  • Response Count – number of students who responded to each question
  • Response Ratio – percentage of students who responded to each question
  • Mean (Average) – value obtained by dividing the sum of all responses by the number of respondents
  • Median – numerical middle point of the scores
  • Mode – the most frequently occurring response
  • Standard Deviation (SD) – measure of the amount of variation in the set of responses. A low standard deviation indicates that the values tend to be close to the mean, or average. A high standard deviation indicates that values are spread out over a wider range. Whether a standard deviation is low or high is subjective; a general guideline for a 5-point scale is that a standard deviation of 1 or more can be considered high, while a standard deviation of less than 1 can be considered low.
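As a minimal sketch, all of these statistics can be reproduced with Python’s standard statistics module; the ratings and enrollment below are made-up illustrative values, not real survey data:

```python
from statistics import mean, median, mode, stdev

# Hypothetical responses to one 5-point Likert item
ratings = [5, 4, 4, 5, 3, 4, 4, 2, 5, 4]
enrolled = 12  # assumed enrollment, used for the response ratio

print("Response count:", len(ratings))                      # 10
print("Response ratio:", f"{len(ratings) / enrolled:.0%}")  # 83%
print("Mean:", round(mean(ratings), 2))                     # 4.0
print("Median:", median(ratings))                           # 4.0
print("Mode:", mode(ratings))                               # 4
print("SD:", round(stdev(ratings), 2))                      # 0.94 -- under 1, "low" by the guideline above
```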

When using numerical values assigned to Likert categories such as Strongly disagree (1), Disagree (2), Neutral (3), Agree (4), and Strongly agree (5), be aware that the numbers convey “greater than” or “less than” relationships, but the differences between values are not necessarily constant. The difference in value between Strongly agree and Agree, or between Agree and Neutral, for example, is not clear, nor is there a shared understanding of these values among raters. For this reason, it’s best to take other measures, such as the mode, median, and frequencies, into consideration when interpreting the data.
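As a small illustration of why frequencies matter, here is a sketch (with made-up response sets) in which two very different classes produce identical means and medians:

```python
from collections import Counter
from statistics import mean, median

# Two hypothetical response patterns with the same mean (3.0)
consensus = [3, 3, 3, 3, 3, 3]  # everyone chose Neutral
polarized = [1, 1, 1, 5, 5, 5]  # half Strongly disagree, half Strongly agree

for name, responses in [("consensus", consensus), ("polarized", polarized)]:
    print(name, "mean:", mean(responses),
          "median:", median(responses),
          "frequencies:", dict(Counter(responses)))
```

Only the frequency counts reveal that the second class is split into two opposing camps.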

Response Rates

Response rates are calculated as the total number of responses divided by the number of enrolled students. Response rates for online administration are generally lower than for paper administration. However, research suggests that lower response rates for online versus paper administration do not adversely affect mean scores, and students may be likely to provide lengthier comments (Heath, Lawyer, & Rasmussen, 2007). There are restrictions on report release for low-enrollment classes (see “Low enrollment classes” above). In most cases there is no minimum number of responses required to release reports.

Summary of Results Sample

[Images: sample Summary of Results report]

Detailed Results Sample

[Image: sample Detailed Results report]

Student comments can provide insights into what worked and what didn’t work well in the class. The challenge in deciphering qualitative data is that the comments are presented randomly, with no order or structure (Lewis, 2002). They often appear unconnected and may not line up with the numerical data included in the report. Here are some ways to make sense of the data and extract meaningful information:

  • Classify comments – use a matrix and assign each comment to a category. Classify comments into strengths and challenges, or use the five components of effective teaching model:
Knowledge of Material – relates to scholarship, with an emphasis on breadth, analytic ability, and conceptual understanding.
Organization/Clarity – relates to presenting the subject in a clear and organized manner.
Instructor-Group Interaction – relates to rapport with the class as a whole, sensitivity to class response, and skill at securing active class participation.
Instructor-Individual Student Interaction – relates to mutual respect and rapport between the instructor and the individual student.
Dynamism/Enthusiasm – relates to the instructor’s enthusiasm in teaching the material.

Adapted from Hildebrand, Wilson, & Dienst (1971, p. 18)

  • Or use our coding template to organize student comments into “keep,” “stop,” and “suggestions” categories; a rough keyword-based sketch of this sorting appears after this list.
  • Look for patterns – once you’ve classified the comments, examine whether patterns exist. Have all or most of the comments fallen into one or two categories?
  • Students may have a hard time verbalizing what they find difficult and provide feedback that’s vague or confusing. See our strategies for decoding and responding to common student feedback support article.
  • Don’t place emphasis on the outliers – Unfortunately, students can sometimes be harsh critics. Reading negative or cruel comments is difficult, but don’t dwell on one or two comments that are disrespectful or hurtful.
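As a minimal illustrative sketch of that “keep,” “stop,” and “suggestions” sorting, the Python snippet below does a keyword-based first pass. The keyword lists are hypothetical assumptions, not part of the OMET coding template, and anything the script cannot match should go to manual review:

```python
# Keyword lists are illustrative assumptions, not part of any OMET template.
CATEGORIES = {
    "keep": ("keep", "loved", "helpful", "worked well"),
    "stop": ("stop", "confusing", "too fast", "unclear"),
    "suggestions": ("suggest", "could", "should", "consider"),
}

def code_comment(comment: str) -> str:
    """Assign a comment to the first category whose keywords it mentions."""
    lowered = comment.lower()
    for category, keywords in CATEGORIES.items():
        if any(keyword in lowered for keyword in keywords):
            return category
    return "unclassified"  # route these to manual review

comments = [
    "Keep the weekly problem sessions, they were really helpful.",
    "The pacing was too fast in the second half.",
    "You could post the slides before lecture.",
]
for comment in comments:
    print(code_comment(comment), "->", comment)
```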

In the future, talk to students about giving meaningful feedback and give them multiple chances to practice. We have several resources that can help guide the discussion and options for gathering student feedback throughout the term.

  • Meet with a Teaching Consultant who can help you interpret your results and develop a course of action if necessary. Email teaching@pitt.edu to set up a consultation.
  • Do a midterm course survey. OMET now offers a midterm course survey option. For more information, visit our Midterm Course Survey pages.
  • Talk to students about the survey process and providing effective feedback.
  • Request a special report to track changes. A Trend Analysis Report will show results over time. Historical analysis can go back to fall 2016.

Standard individual class survey reports and combined reports of cross-listed classes with one instructor are automatically created. Other reports are available by special request. Special reports are created and released after all instructor and supervisor reports are released (usually two to three weeks after the term ends).

Special Report Options:

  • Trend Analysis – this report shows results over time.
    [Image: sample Trend Analysis report]
  • Combined instructor reports – this report shows the total results of all of the classes taught by one instructor in a given term.
  • Combined Report for Cross-listed classes with multiple instructors – This report contains the total results of all sections of a cross-listed class for one instructor. (A combined report for cross-listed classes with one instructor is automatically released one week after the grades due date. If there are multiple instructors, a Special Report Request must be made.)

Contact us to request any of the reports described above or to inquire about other reporting options.

* The University of Pittsburgh at Bradford restricts the release of reports for surveys with fewer than five responses, regardless of the number of enrolled students.

References & Resources

Burton, W. B., Civitano, A., & Steiner-Grossman, P. (2012). Online versus paper evaluations: Differences in both quantitative and qualitative data. Journal of Computing in Higher Education, 24(1), 58–69. https://doi.org/10.1007/s12528-012-9053-3 (NOTE: To access this content, you must be logged in or log into the University Library System.)

Heath, N., Lawyer, S., & Rasmussen, E. (2007). Web-Based Versus Paper-and-Pencil Course Evaluations. Teaching of Psychology, 34, 259–261. https://doi.org/10.1080/00986280701700433 (NOTE: To access this content, you must be logged in or log into the University Library System.)

Hildebrand, M., Wilson, R. C., & Dienst, E. R. (1971). Evaluating University Teaching. (ERIC No. ED 057 748)

How to Read a Student Evaluation – Do Your Job Better – The Chronicle of Higher Education. (n.d.). Retrieved Jan. 29, 2020, from https://www.chronicle.com/article/How-to-Read-a-Student/129553

Interpretation of Course Evaluation Results | Mercury – McGill University. (n.d.). Retrieved Jan. 29, 2020, from https://www.mcgill.ca/mercury/instructors/interpretation#Recommendations%20for%20Interpreting%20Written%20Comments

Measuring Up: How to Manage Those Dreaded Course Evaluations – The Chronicle of Higher Education. (n.d.). Retrieved June 9, 2023, from https://www.chronicle.com/article/measuring-up-how-to-manage-those-dreaded-course-evaluations

Nadler, J. T., Weston, R., & Voyles, E. C. (2015). Stuck in the middle: The use and interpretation of mid-points in items on questionnaires. Journal of General Psychology, 142(2), 71–89. https://doi.org/10.1080/00221309.2014.994590 (NOTE: To access this full article, you must be logged in or log into the University Library System.)

Spooren, P., Brockx, B., & Mortelmans, D. (2013). On the Validity of Student Evaluation of Teaching: The State of the Art. Review of Educational Research, 83(4), 598–642. https://doi.org/10.3102/0034654313496870

Stark, P., & Freishtat, R. (2014). An Evaluation of Course Evaluations. ScienceOpen Research, (September), 1–26. https://doi.org/10.14293/S2199-1006.1.SOR-EDU.AOFRQA.v1

Student Evaluations of Teaching | Center for Teaching | Vanderbilt University. (n.d.). Retrieved February 10, 2020, from https://cft.vanderbilt.edu/guides-sub-pages/student-evaluations/

Sullivan, G. M., & Artino, A. R. (2013). Analyzing and Interpreting Data from Likert-Type Scales. Journal of Graduate Medical Education, 5(4), 541–542. https://doi.org/10.4300/JGME-5-4-18 (NOTE: To access this content, you must be logged in or log into the University Library System.)

The University of Texas at Austin, Faculty Innovation Center. (2016). Quick Sort Approach – Open-ended Comments. Retrieved Dec. 9, 2019, from https://facultyinnovate.utexas.edu/sites/default/files/quick_sort_approach_2016.pdf

Uttl, B., White, C. A., & Gonzalez, D. W. (2017). Meta-analysis of faculty’s teaching effectiveness: Student evaluation of teaching ratings and student learning are not related. Studies in Educational Evaluation. https://doi.org/10.1016/j.stueduc.2016.08.007

Worthington, A. (2002). The Impact of Student Perceptions and Characteristics on Teaching Evaluations: A Case Study in Finance Education. Assessment and Evaluation in Higher Education, 27. https://doi.org/10.1080/02602930120105054 (NOTE: To access this content, you must be logged in or log into the University Library System.)
