
Encouraging Academic Integrity

Statement about AI detectors

A wide array of AI-detection tools exists, and such tools are becoming ubiquitous. However, their accuracy varies considerably. The creators of many of these tools claim extremely high accuracy rates in spotting AI-generated content; claims of 98-99% or higher are not unusual. When evaluated by third parties, however, most of these tools show a high rate of false positives: even detectors with a strong record of spotting AI-generated content often flag human-written text as AI-generated. AI detectors can also produce inequitable results; for example, they are more likely to produce false positives for non-native English speakers. False positives carry the risk of losing students' trust, confidence, and motivation, as well as bad publicity and potential legal sanctions.

Many of the software companies involved refuse to offer a specific false positive rate for their products.

On April 4, 2023, Turnitin released an AI detector, available to Pitt faculty through its existing suite of tools. Although Turnitin claimed its tool was more accurate than other AI detectors, some faculty voiced concerns about its accuracy and usefulness. Based on our own testing at the Teaching Center and discussions with other institutions, we decided not to endorse or support the use of this tool.

In June 2023, Turnitin acknowledged that its AI detection tool had a higher false positive rate than the company originally asserted. However, it has not disclosed a revised false positive rate, and it has continued to integrate AI detection into additional tools such as iThenticate, a research writing similarity checker. Turnitin has since updated its application so that the Teaching Center can disable this specific feature within the suite of Turnitin tools.

Based on our professional judgment, the Teaching Center has concluded that current AI detection software is not yet reliable enough to be deployed without a substantial risk of false positives and the serious consequences such accusations carry for both students and faculty. Use of the detection tool at this time is simply not supported by the data and does not represent a teaching practice that we can endorse. The Teaching Center has therefore disabled the AI-detection tool in Turnitin.

Currently, the Teaching Center does not endorse or support the use of any AI-detection tools. We will continue to advise Pitt’s faculty about the value of AI tools as they continue to evolve, and we will work closely with all instructors on appropriate use of generative AI tools in the classroom and the appropriate best practices for managing potential abuses of this technology. To request a consultation to discuss how to talk to students about the ethical use of AI tools or design assessments to mitigate academic integrity issues, email the Teaching Center at teaching@pitt.edu.

Equitable and Inclusive Teaching Considerations and Strategies

Many inclusive teaching strategies also encourage academic integrity:

  • Clearly communicate whether and how generative AI tools can be used in syllabi. For examples, see Syllabus Language.
  • Talk to students about why academic integrity matters and the ethical and practical implications of academic integrity violations. Emphasize your trust in your students and your belief that they can successfully complete coursework themselves. Invite students to ask questions and attend office hours if they are confused or feel unable to successfully complete their work.
  • Decrease the motivation to commit academic integrity violations by building students’ intrinsic motivation to engage in coursework fully. Some strategies for building intrinsic motivation include emphasizing the relevance of learning tasks, creating authentic assessments, and giving students choices about how to express their learning (e.g. allowing students to select a topic or determine what type of learning artifact to create) (Lang, 2013).
  • Reduce students’ assessment anxiety, which can contribute to the likelihood of academic integrity violations, by incorporating low-stakes assessments and scaffolded assignments that allow students to receive periodic feedback and improve their work over time.
  • Develop assignments that cannot successfully be completed using AI tools. This might involve having students complete part or all of the assignment during class or designing assignments that include tasks that are outside of ChatGPT’s current capabilities. Examples might include assignments that require students to draw from recent events.

What should you do if you suspect that a student has committed an academic integrity violation using AI?

It can be challenging to definitively prove that a student used AI to complete an assignment. When a student's assignment cites hallucinated sources, the use of fake sources in and of itself can constitute an academic integrity violation. It is important to note, however, that AI detectors are not reliable enough to serve as proof. If you suspect AI misuse, follow the standard University of Pittsburgh Academic Integrity Guidelines.

Last updated on: Feb. 7, 2024