Text generated from AI, machine learning, or similar algorithmic tools cannot be
used in papers published in Science journals, nor can the accompanying figures,
images, or graphics be the products of such tools, without explicit permission
from the editors. In addition, an AI program cannot be an author of a Science
journal paper. A violation of this policy constitutes scientific misconduct.
Another approach is Nature Journals' large language model guidelines:
Large Language Models (LLMs), such as ChatGPT, do not currently satisfy our
authorship criteria. Notably, an attribution of authorship carries with it
accountability for the work, which cannot be effectively applied to LLMs. Use of
an LLM should be properly documented in the Methods section (and if a Methods
section is not available, in a suitable alternative part) of the manuscript.
In class, you can discourage student use of generative AI by encouraging students to rely on
other tools and techniques instead. In fact, many of the teaching and assessment techniques
recommended by the CET – ask more nuanced questions, have students complete assignments
and assessments during class, require students to submit drafts, augment written papers with
other activities that demonstrate students’ content knowledge – work equally well whether you
want to embrace and enhance or discourage and detect student use of AI generators.
However, we do not consider requiring handwritten assignments to be an effective technique for
discouraging or detecting AI use. Students who have academic accommodations may need to use assistive
technology in your class. Prohibiting student use of technology or requiring that all students
handwrite their work may create a situation that singles out students with accommodations if
they can use technology while others cannot.
As for detecting whether students' typed academic work contains AI-generated text, the best way is to
grade that work honestly and look for errors. Some, if not most, contemporary AI-generated text
is specifically designed to appear plausible and persuasive but is not necessarily accurate.
OpenAI cautions that "ChatGPT sometimes writes plausible-sounding but incorrect or
nonsensical answers." Because of this, current-generation AI text generators are prone
to making easily discernible, fundamental mistakes that a subject matter expert or even an
'experienced novice' would never make (see "CNET's Article-Writing AI Is Already Publishing
Very Dumb Errors", "Why Meta's latest large language model survived only three days online",
"ChatGPT Needs Some Help With Math Assignments," and "Alphabet shares dive after Google
AI chatbot Bard flubs answer in ad" for some recent examples). Even if students rewrite
AI-generated text to avoid detection, its structural and factual errors should remain. But
those detectable weaknesses may not last forever, especially with future generations of AI
text generators.
Another option is to use a 'similarity detector' (often mistakenly called a 'plagiarism detector')
like Turnitin. Turnitin, which is available in every USC Blackboard course, scans submitted text
and highlights any phrases or paragraphs that are identical or closely similar to other sources known to
Turnitin. Since some AI text generators have been known to copy from other sources without