
Peer Review

Below, review the Assessment of Teaching Initiative’s Guide on Conducting Faculty Peer Review.

Peer review may consist of any combination of the following assessments performed by a faculty member’s colleague(s):

  • Review of curriculum
  • Review of teaching materials (e.g. syllabi, lesson plans, assignments, course shells)
  • Review of student artifacts (e.g. examples of student work)
  • Review of a teaching portfolio
  • Classroom observations

Research has shown that peer review has several benefits:

  • It provides another perspective on teaching effectiveness beyond student opinion of teaching surveys. Unlike students, peer reviewers are often experts in the faculty member’s discipline and/or pedagogy (Berk et al., 2004).
  • It helps faculty better recognize their teaching strengths and weaknesses and determine how to improve (Al Qahtani et al., 2011).
  • Faculty report that participating in peer review is useful and helps them improve their teaching (Bell & Mladenovic, 2008; DiVall et al., 2012) and that the benefits outweigh the effort of participating (DiVall et al., 2012).
  • Peer observations benefit the observer as well as the faculty member being observed (Bell & Mladenovic, 2008; Hendry & Oliver, 2012; Swinglehurst et al., 2008).
  • Observers have reported that observations help them learn about new teaching strategies and increase their confidence in their ability to use those strategies in their own classes (Hendry & Oliver, 2012).

Peer review may be performed formatively to provide a faculty member with feedback to help them improve, or summatively, as part of a formal evaluation like annual, promotion, or tenure review.

The purpose, process, tools, and expectations for peer reviews should always be communicated clearly to faculty early enough to allow them to prepare.

Whose teaching will be reviewed? When?

Academic units should establish review cycles that balance regular feedback for faculty with feasibility and sustainability. Review cycles may vary based on faculty rank, appointment, experience teaching the course, or experience teaching at Pitt. For example, academic units may decide to conduct formative peer reviews during the first semester for new faculty or faculty teaching new courses in order to provide feedback they can use for improvement, but conduct summative peer reviews less frequently for experienced faculty teaching established courses.

Units may also opt to design peer review cycles to coincide or integrate with existing schedules for curricular review or assessment of student learning. Some review processes, such as syllabus review, can generate data for multiple types of reviews.

Units implementing new peer review processes should plan to pilot them, collect feedback from reviewers and reviewees, and revise as needed.

Who will conduct the review? What will be reviewed?

The purpose of a peer review should help units determine who conducts it. Units should recognize that summative peer reviews are higher stakes and may be more anxiety-provoking for faculty. Faculty may also be more concerned about how bias affects summative peer review (Berk et al., 2004). Academic units may address these concerns by:

  • Developing peer review tools that focus more on providing qualitative feedback than ratings
  • Training reviewers to recognize potential bias and apply review tools as equitably and consistently as possible
  • Assigning reviewers from a different department within the same school
  • Using review teams rather than a single reviewer
  • Inviting the faculty member who was reviewed to respond to feedback as part of the review process

Regardless of whether the review is formative or summative, all reviewers should receive training on how to conduct reviews, use review tools, and provide their colleagues with meaningful, constructive feedback. In addition to increasing reviewer competence, training can also increase faculty trust in reviewers (Kohut et al., 2007).

Units should also determine whether peer review consists of an assessment of teaching materials, student artifacts, and/or a classroom observation. Will reviews of teaching materials and observations be part of the same process or different processes? For example, units may specify that a curriculum committee is responsible for review of syllabi, a promotion and tenure committee will conduct evaluations of teaching portfolios, and peer observations should be performed by a faculty mentor or colleague. Conversely, review of teaching materials and observations can be combined into a single peer review process completed by the same reviewer or team of reviewers.

What is the process for peer review of teaching?

Faculty should be engaged to help create and revise specific peer review processes, policies, and tools to ensure that they are feasible, relevant, and equitable (Bingham & Ottewill, 2001).

Most peer reviews broadly consist of these steps:

Preliminary activities: Prior to the review, reviewers should talk to the faculty member being reviewed to learn more about their teaching and learning goals and to provide some context for the review. If performing an observation, a reviewer may ask about:

  • the course, course description, and learning objectives
  • how the course is delivered (e.g. lecture, recitation, lab or clinical, face-to-face, hybrid, online, flipped)
  • the number of students
  • what students have been doing in the course prior to the observation being performed
  • the lesson plan or agenda for the session to be observed
  • challenges that the reviewee may have experienced teaching the class up to that point or areas the reviewee would like the reviewer to focus on during the observation

These conversations may take place synchronously or asynchronously, face-to-face or remotely. Academic units should determine what type of information reviewers will need beforehand to conduct a successful review and provide them with questions to ask or types of information to seek out.

The peer review: The reviewer(s) apply a peer review tool to teaching materials and/or a classroom observation. The review may consist of more than one step, based on the review process. Depending on the tool being used, there may be a specific protocol for reviewers to follow. Reviewers should take notes that are detailed enough to allow them to cite specific examples to support their feedback. Academic units should create resources and training to help faculty learn how to complete reviews.

Follow-up activities: Reviewer(s) should conduct a debrief or follow-up to discuss and share the results of their review with the faculty member who was reviewed and potentially, others in the department. Depending on the type of peer review, this might consist of a short, informal conversation or could involve a formal reporting process. It may involve the reviewee self-assessing or reflecting on their teaching and/or documenting the steps they will take to improve teaching based on feedback. This meeting should take place as soon after the peer review as possible. If a peer observation was performed, the follow-up meeting should occur within the same week.

Although there is always some degree of subjectivity involved in conducting peer review, using tools that identify specific, observable criteria helps reviewers apply tools in a consistent manner. To ensure validity, peer review tools must align with the academic unit’s definition of teaching effectiveness. Units should engage faculty in the process of selecting, constructing, and revising tools.

Lists of question prompts, checklists, and rubrics can be used to guide peer review. Academic units may want to adopt or adapt existing tools or create their own. Tools may vary depending on the purpose of the assessment and the course or rank/appointment of the faculty member being assessed. For example, an academic unit may decide to develop a different observation tool for didactic versus clinical courses.

Sample Peer Review Tools (validated tools and examples from other institutions)

Sample Tools for Self and Peer Assessment of Online Course Design and Teaching

Resources for Inclusive Peer Review

New research and resources on conducting peer review of inclusive teaching are constantly emerging.

For peer review to be effective, faculty need to receive meaningful, constructive feedback from a reviewer. The method and manner of delivery affect how feedback is received. There are several steps that reviewers can take (adapted from Newman et al., 2012) to offer effective feedback:

  • Approach giving feedback as a collaborative problem-solving experience that will allow both you and the person you reviewed to learn more about teaching. Avoid positioning yourself as an expert giving advice to a novice unless that is explicitly the purpose and process for the peer review (e.g. a mentor observing a mentee).
  • Ask the reviewee to share their thoughts about the experience. What do they identify as strengths and areas of improvement? Which activities went well? Which would they do differently next time? This prompts reflection and allows you to build your feedback on their self-reflection. It also gives the reviewee the chance to correct anything that you misperceived during your review.
  • Start with positive feedback. Receiving feedback can be an anxiety-producing process for the recipient. Beginning by discussing strengths can lower anxiety and increase the likelihood that the reviewee will be engaged and receptive to additional feedback.
  • When delivering constructive feedback, avoid judgment. Offer examples of observed behaviors. Be specific and improvement-focused. Vague comments often do not give the reviewee enough information to make improvements. For example, if you say, “Students gradually became less engaged,” that might be true, but it does not give the reviewee sufficient information to make changes. If you say, “I noticed that students started to become less engaged after you had been lecturing for about 20 minutes. They were more engaged during the shorter lectures and class discussions. It might be helpful to break up longer lectures with some short active learning strategies,” you are telling the reviewee when and (likely) why students became disengaged and what they could do differently to prevent that from happening in the future.
  • It can also be helpful to deliver constructive feedback as questions to help the faculty member who was reviewed reflect on what they did and consider how they might make changes. For example, instead of, “The tone of this syllabus is harsh and would be off-putting to students” you could say, “When reviewing your syllabus, I noticed that the language used did not align with what you told me about your teaching style. Can you tell me more about why you chose that language and what you are trying to communicate?”
  • Focus suggestions for improvement on things the reviewee can change. For example, advising a faculty member who teaches a large lecture course to move away from multiple-choice exams as their primary mode of assessment may not be helpful, because that change might not be feasible given the class size.

Depending on the purpose of the peer review, it may also be appropriate to limit constructive feedback to a few of the most impactful changes that the reviewee could make rather than pointing out every perceived area for improvement.

You might conduct a peer review and determine that the reviewee would benefit from sustained resources and support. In these cases, you can refer them to faculty development resources in your academic unit or to the Center for Teaching and Learning (teaching@pitt.edu).

How and with whom results are communicated may depend on the purpose of the peer review and how the data they generate are used. Academic units may decide that formative peer reviews should remain confidential, with results only being shared with the reviewee. Summative review results may be shared with academic unit leaders, promotion and tenure review committees, or other leaders in the department so that they can be used for formal evaluations or to inform program- or unit-level decision-making.

Academic units will also need to determine what type of artifact is ultimately shared. Will the reviewer generate a narrative statement, a completed checklist or rubric, lists of strengths and areas for improvement, or a report? Will the faculty member compose a response? Will aggregate data be shared with unit leaders? Units may adopt some combination of these approaches. Focusing on how the faculty member who was reviewed will use results may reduce faculty anxiety about peer review and encourage iterative improvement of teaching. For example, a unit may decide that detailed feedback should remain confidential between the reviewer and reviewee, but that the reviewee should submit a statement documenting how they used feedback to improve their teaching as part of a teaching portfolio.

References, Resources, and Readings

Al Qahtani, S., Kattan, T., Al Harbi, K., & Seefeldt, M. (2011). Some thoughts on educational peer evaluation. South-East Asian Journal of Medical Education, 5(1), 47–49.

Bell, A., & Mladenovic, R. (2008). The benefits of peer observation of teaching for tutor development. Higher Education, 55(6). doi:10.1007/s10734-007-9093-1

Berk, R. A., Naumann, P. L., & Appling, S. A. (2004). Beyond student ratings: Peer observation of classroom and clinical teaching. International Journal of Nursing Education Scholarship, 1(1). doi:10.2202/1548-923x.1024

Bingham, R., & Ottewill, R. (2001). Whatever happened to peer review? Revitalising the contribution of tutors to course evaluation. Quality Assurance in Education, 9, 32–39. doi:10.1108/09684880110381319

DiVall, M., Barr, J., Gonyeau, M., Matthews, S. J., Van Amburgh, J., Qualters, D., & Trujillo, J. (2012). Follow-up assessment of a faculty peer observation and evaluation program. American Journal of Pharmaceutical Education, 76(4). doi:10.5688/ajpe76461

Gosling, D. (2002). Models of peer observation of teaching. LTSN Generic Centre, Learning and Teaching Support Network.

Hendry, G. D., & Oliver, G. R. (2012). Seeing is believing: The benefits of peer observation. Journal of University Teaching & Learning Practice, 9(1).

Kohut, G. F., Burnap, C., & Yon, M. G. (2007). Peer observation of teaching: Perceptions of the observer and the observed. College Teaching, 55, 19–25. doi:10.3200/CTCH.55.1.19-25

Kuo, F., Crabtree, J. L., & Scott, P. J. (2016). Peer observation and evaluation tool (POET): A formative peer review supporting scholarly teaching. The Open Journal of Occupational Therapy, 4(3). doi:10.15453/2168-6408.1273

Lund, T. J., Pilarz, M., Velasco, J. B., Chakraverty, D., Rosploch, K., Undersander, M., & Stains, M. (2015). The best of both worlds: Building on the COPUS and RTOP observation protocols to easily and reliably measure various levels of reformed instructional practice. CBE-Life Sciences Education, 14(2). doi:10.1187/cbe.14-10-0168

Marchant, G. J. (1989). StRoBe: A classroom-on-task measure.

Newman, L. R., Roberts, D. H., & Schwartzstein, R. M. (2012). Peer observation of teaching handbook. Shapiro Institute for Education and Research at Harvard Medical School and Beth Israel Deaconess Medical Center.

O’Leary, M. (2020). Classroom observation: A guide to the effective observation of teaching and learning. Taylor and Francis.

Pembridge, J. J., & Rohrbacher, C. M. (2020). Faculty peer review of teaching for the 21st century. In S. M. Linder, C. M. Lee, & S. K. Stefl (Eds.), Handbook of STEM faculty development (pp. 207–220). Information Age Publishing.

Smith, M. K., Jones, F. H. M., Gilbert, S. L., & Wieman, C. E. (2013). The classroom observation protocol for undergraduate STEM (COPUS): A new instrument to characterize university STEM classroom practices. CBE-Life Sciences Education, 12(4), 618–627. doi:10.1187/cbe.13-08-0154

Swinglehurst, D., Russell, J., & Greenhalgh, T. (2008). Peer observation of teaching in the online environment: An action research approach. Journal of Computer Assisted Learning, 24(5), 383–393. doi:10.1111/j.1365-2729.2007.00274.x

Van Note Chism, N. (1999). Peer review of teaching: A sourcebook.
