
End-of-Term Teaching Survey Results

For more information on accessing and interpreting your midterm course surveys, visit the Understanding Your Results page in that section of our site.

This section provides a general overview of report terms and content, and guidelines for interpreting your end-of-term survey results.

For survey reports of classes taught in the following Schools/Campuses:

  • Dietrich School of Arts and Sciences
  • College of General Studies
  • College of Business Administration
  • School of Education Online Classes
  • School of Dental Medicine
  • Graduate School of Public and International Affairs
  • Katz Graduate School of Business
  • (All other schools, see below)

Surveys for all classes are automatically activated regardless of enrollment. Beginning in fall 2016, each school decides whether to administer surveys for low-enrollment classes so that the data can be included in aggregate reporting. A minimum threshold is set for instructor-level reporting.

OMET does not release quantitative results (Likert-scale questions) to individual instructors for classes with fewer than five students enrolled, for two reasons. The first is to protect student anonymity. The second is statistical: small sample sizes can distort analysis. The mean (average), for example, can be greatly influenced by outliers. In a class of four students, if three gave the instructor a rating of 5 on an item and one gave a rating of 1, the single rating of 1 would lower the class mean for that item from 5.0 to 4.0. By contrast, in a class of 30 students where 29 gave a rating of 5 and one gave a rating of 1, the mean would be 4.87.
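To see the arithmetic at a glance, here is a minimal Python sketch; the ratings are the hypothetical ones from the example above, not actual survey data:

    from statistics import mean

    # Hypothetical ratings showing how a single outlier moves the mean
    # in a small class versus a large one.
    small_class = [5, 5, 5, 1]    # 4 students, one rating of 1
    large_class = [5] * 29 + [1]  # 30 students, one rating of 1

    print(round(mean(small_class), 2))  # 4.0  (pulled down a full point from 5.0)
    print(round(mean(large_class), 2))  # 4.87 (barely moved)

The same outlier costs a full point in the class of four but only about a tenth of a point in the class of 30.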

Report Release

  • Standard Reports (including numeric results, comments, and responses to questions added by the instructor): released after the term is over and final grades are posted.
  • Low enrollment classes:
    • Individual (non-cross-listed) classes – no report issued for classes with fewer than five students enrolled
    • Cross-listed classes:
      • No individual report is issued for sections with fewer than five students enrolled
      • A combined report is automatically issued for a cross-listed class if total enrollment across all sections is five or more and the class is not taught by multiple instructors. This report is released on the Friday one week after grades are due.
  • Comment and QP Reports: available upon request for surveys of classes with fewer than five students enrolled. These are not published reports and will not appear on the Teaching Survey Dashboard.
  • Supervisor Reports: released on the Friday after grades are due.

Results for Likert-Scaled Items – A Likert scale is a five- or seven-point rating scale used to measure opinions or attitudes.

Results Statistics:

  • Response Count – number of students who responded to each question
  • Response Ratio – percentage of students who responded to each question
  • Mean (Average) – value obtained by dividing the sum of all responses by the number of respondents
  • Median – the middle value when responses are ordered from lowest to highest
  • Mode – the most common response
  • Standard Deviation (SD) – measure of the amount of variation in the set of responses. A low standard deviation indicates that values tend to be close to the mean (average); a high standard deviation indicates that values are spread over a wider range. Whether a standard deviation counts as low or high is a judgment call. A general guideline for a 5-point scale is that a standard deviation of 1 or more can be considered high, while a standard deviation of less than 1 can be considered low.

When using numerical values assigned to Likert categories such as Strongly disagree (1), Disagree (2), Neutral (3), Agree (4), and Strongly agree (5), be aware that the numbers convey “greater than” or “less than” relationships, but the differences between values are not necessarily constant. The distance between Strongly agree and Agree, for example, is not clearly the same as the distance between Agree and Neutral, nor do raters share a common understanding of these values. For this reason, it’s best to take other measures, such as the mode, the median, and frequencies, into consideration when interpreting the data.
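To make these measures concrete, here is a minimal Python sketch using only the standard library; the responses and the enrollment figure are hypothetical, invented for illustration:

    from collections import Counter
    from statistics import mean, median, mode, stdev

    # Hypothetical responses on a 5-point Likert scale
    # (1 = Strongly disagree ... 5 = Strongly agree)
    responses = [5, 4, 4, 5, 3, 4, 5, 2, 4, 4]
    enrolled = 12  # hypothetical enrollment, used for the response ratio

    print("Response count:", len(responses))                      # 10
    print("Response ratio:", f"{len(responses) / enrolled:.0%}")  # 83%
    print("Mean:", round(mean(responses), 2))                     # 4.0
    print("Median:", median(responses))                           # 4.0
    print("Mode:", mode(responses))                               # 4
    print("SD:", round(stdev(responses), 2))                      # 0.94 — "low" by the guideline above
    print("Frequencies:", dict(Counter(responses)))               # {5: 3, 4: 5, 3: 1, 2: 1}

Reporting the frequencies alongside the mean, as recommended above, makes it easy to see whether a middling average reflects consensus or a split class.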

Response Rates

Response rates are calculated as the total number of responses divided by the number of enrolled students; for example, 18 responses from a class of 30 enrolled students is a 60% response rate. Response rates for online administration are generally lower than for paper administration. However, research suggests that lower response rates for online versus paper administration do not adversely affect mean scores, and students may be likely to provide lengthier comments (Heath, Lawyer, et al., 2007). There are restrictions on report release for low-enrollment classes (see “Low enrollment classes” above). In most cases there is no minimum number of responses required to release reports.

Student comments can provide insights into what worked and what didn’t work well in the class. The challenge in deciphering qualitative data is that they are presented randomly, with no order or structure (Lewis, 2002). Comments often appear unconnected and frequently do not line up with the numerical data included in the report. Here are some ways to make sense of the data and extract meaningful information:

  • Classify comments – use a matrix and assign each comment to a category. Classify comments into strengths and challenges, or use the five components of the effective teaching model (a small classification sketch in Python follows this list):
    • Knowledge of Material – relates to scholarship, with an emphasis on breadth, analytic ability, and conceptual understanding.
    • Organization/Clarity – relates to presenting the subject in a clear and organized manner.
    • Instructor-Group Interaction – relates to rapport with the class as a whole, sensitivity to class response, and skill at securing active class participation.
    • Instructor-Individual Student Interaction – relates to mutual respect and rapport between the instructor and the individual student.
    • Dynamism/Enthusiasm – relates to the instructor’s enthusiasm in teaching the material.

Adapted from Hildebrand, Wilson, & Dienst (1971, p. 18)

  • Look for patterns – once you’ve classified the comments, examine whether patterns exist. Have all or most of the comments fallen into one or two categories?
  • Don’t place emphasis on the outliers – unfortunately, students can sometimes be harsh critics. Reading negative or cruel comments is difficult, but don’t dwell on one or two comments that are disrespectful or hurtful.
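To make the classify-and-tally workflow concrete, here is a minimal Python sketch; the comments, category assignments, and strength/challenge labels are invented for illustration, and the tagging itself remains a human judgment call:

    from collections import Counter

    # Each comment is hand-tagged with one of the five components above
    # and marked as a strength or a challenge.
    classified = [
        ("Lectures were clearly structured",      "Organization/Clarity",         "strength"),
        ("Slides were hard to follow",            "Organization/Clarity",         "challenge"),
        ("Clearly knows the material inside out", "Knowledge of Material",        "strength"),
        ("Encouraged discussion every week",      "Instructor-Group Interaction", "strength"),
    ]

    # Tally comments per (category, kind) to surface patterns once
    # everything has been classified.
    tally = Counter((category, kind) for _, category, kind in classified)
    for (category, kind), count in tally.most_common():
        print(f"{category} ({kind}): {count}")

Once comments are categorized, even a simple tally like this makes the “look for patterns” step mechanical.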

In the future, talk to students about giving meaningful feedback and give them multiple chances to practice. We have several resources that can help guide the discussion and options for gathering student feedback throughout the term.

  • Meet with a Teaching Consultant who can help you interpret your results and develop a course of action if necessary. Email teaching@pitt.edu to set up a consultation.
  • Do a midterm course survey. OMET now offers a midterm course survey option. For more information, visit our Midterm Course Survey pages.
  • Talk to students about the survey process and providing effective feedback.
  • Request a special report to track changes. A Trend Analysis Report will show results over time. Historical analysis can go back to fall 2016.

Standard individual class survey reports and combined reports of cross-listed classes with one instructor are automatically created. Other reports are available by special request. Special reports are created and released after all instructor and supervisor reports are released (usually two to three weeks after the term ends).

Special Report Options:

  • Trend Analysis – this report shows results over time.
    [Sample Trend Analysis report image]
  • Combined instructor reports – this report shows the total results of all of the classes taught by one instructor in a given term.
  • Combined Report for Cross-listed Classes with Multiple Instructors – this report contains the total results of all sections of a cross-listed class for one instructor. (A combined report for a cross-listed class with a single instructor is released automatically one week after the grades-due date; if there are multiple instructors, a Special Report Request must be made.)

Complete a Special Report Request Form to order one of the reports described above or to inquire about other reporting options.

For survey reports of classes taught in the following Schools/Campuses:

  • School of Education face-to-face classes
  • Swanson School of Engineering
  • University Honors College
  • School of Law
  • School of Medicine
  • School of Nursing
  • School of Pharmacy
  • Graduate School of Public Health
  • School of Computing and Information
  • School of Health and Rehabilitation Sciences
  • University Center for Social and Urban Research
  • University of Pittsburgh at Bradford*
  • University of Pittsburgh at Greensburg
  • University of Pittsburgh at Johnstown
  • University of Pittsburgh at Titusville

Survey activation, low-enrollment policies, report release schedules, special report options, and the guidance above on interpreting Likert-scale results, response rates, and student comments apply to these schools as well.

Summary of Results Sample

[Sample Summary of Results report images]

Detailed Results Sample

[Sample Detailed Results report image]


* The University of Pittsburgh at Bradford restricts the release of reports for surveys with fewer than five responses, regardless of the number of enrolled students.

References & Resources

Burton, W. B., Civitano, A., & Steiner-Grossman, P. (2012). Online versus paper evaluations: Differences in both quantitative and qualitative data. Journal of Computing in Higher Education, 24(1), 58–69. https://doi.org/10.1007/s12528-012-9053-3 (Note: To access this content, you must log in to the University Library System.)

Heath, N., Lawyer, S., & Rasmussen, E. (2007). Web-based versus paper-and-pencil course evaluations. Teaching of Psychology, 34, 259–261. https://doi.org/10.1080/00986280701700433 (Note: To access this content, you must log in to the University Library System.)

Hildebrand, M., Wilson, R. C., & Dienst, E. R. (1971). Evaluating University Teaching. Berkeley: Center for Research and Development in Higher Education, University of California. (ERIC Document No. ED 057 748)

How to Read a Student Evaluation – Do Your Job Better – The Chronicle of Higher Education. (n.d.). Retrieved January 29, 2020, from https://www.chronicle.com/article/How-to-Read-a-Student/129553

Interpretation of Course Evaluation Results | Mercury – McGill University. (n.d.). Retrieved January 29, 2020, from https://www.mcgill.ca/mercury/instructors/interpretation#Recommendations%20for%20Interpreting%20Written%20Comments

Making Sense of Course Evaluation Results: A Quick Guide for Instructors. (n.d.). Retrieved January 29, 2020, from https://atl.wsu.edu/documents/2015/02/making-sense-of-course-evaluations-or-midterm-feedback-guidelines-for-instructors.pdf/

Nadler, J. T., Weston, R., & Voyles, E. C. (2015). Stuck in the middle: The use and interpretation of mid-points in items on questionnaires. Journal of General Psychology, 142(2), 71–89. https://doi.org/10.1080/00221309.2014.994590 (Note: To access this content, you must log in to the University Library System.)

Student Evaluations of Teaching | Center for Teaching | Vanderbilt University. (n.d.). Retrieved February 10, 2020, from https://cft.vanderbilt.edu/guides-sub-pages/student-evaluations/

Sullivan, G. M., & Artino, A. R. (2013). Analyzing and interpreting data from Likert-type scales. Journal of Graduate Medical Education, 5(4), 541–542. https://doi.org/10.4300/JGME-5-4-18 (Note: To access this content, you must log in to the University Library System.)

University of Texas at Austin, Faculty Innovation Center. (2016). Quick Sort Approach – Open-ended Comments. Retrieved December 9, 2019, from https://facultyinnovate.utexas.edu/sites/default/files/quick_sort_approach_2016.pdf

Worthington, A. (2002). The impact of student perceptions and characteristics on teaching evaluations: A case study in finance education. Assessment and Evaluation in Higher Education, 27. https://doi.org/10.1080/02602930120105054 (Note: To access this content, you must log in to the University Library System.)

