Committee on Information Services
USC Academic Senate-Provost Joint Committee
Patrick Crispen & Clifford Neuman, Co-Chairs
Last updated: 13 February 2023
Instructor Guidelines for Student Use
of Generative Artificial Intelligence for
Academic Work
The University of Southern California's Academic Senate Committee on Information Services
recommends that all USC schools, academic departments, and instructors adopt the following
guidelines regarding student use of generative artificial intelligence for academic work.
Instructors should encourage USC students to explore generative artificial
intelligence (AI), using these new tools to create, analyze, and evaluate new
concepts and ideas that inspire them to generate their own academic work. In
advance of this exploration, instructors should help students recognize that some
contemporary AI-generated content may be specifically designed to appear
plausible and persuasive but is sometimes factually inaccurate.
Many of the issues that have surfaced with the introduction of ChatGPT (questions about academic integrity, authorship and citations, student engagement, misinformation, and disinformation) are issues higher education and society have encountered in the past as part of the ongoing need for digital literacy. We suggest that generative AI is simply the newest addition to USC's digital literacy tools.
Ultimately, this committee leaves it to instructors to set their own course policies regarding student use of generative AI. Whatever an individual instructor decides should be clearly communicated to students in course materials.
However, the committee recommends that instructors remind students that the
acquisition of academic work in whole or in part from any source (from textbooks
and journal articles to web resources to generative AI) and the subsequent
presentation of those materials as the student's own work (whether that material
is paraphrased or copied in verbatim or near-verbatim form) constitutes an
academic integrity violation unless otherwise allowed by the instructor.
Individual assignments and exams may have additional, specific requirements related to original work, which should be clearly defined by the instructor.
Because generative AI is a constantly evolving space, the committee encourages
USC’s instructors to begin to learn more about generative AI so they can better
adjust their pedagogy and evolve as educators.
The Current State of Student Use of Generative AI for Academic Work (Mid-February 2023)
Limitations of Guidelines
Our focus in this document is narrow: to highlight the steps that USC’s instructors need to take
today to identify and to state their expectations with respect to student use of generative AI for
academic work. In the weeks and months to come, we expect that USC’s academic community
will be asked to participate in broader conversations about generative AI and its place in
academe and society.
What is ChatGPT and what can it do?
It is hard to overstate how much attention ChatGPT, a free online artificial intelligence (AI) chatbot that generates text in response to prompts, has received since its launch on November 30, 2022.
One reason it has become so popular so quickly, surpassing
100 million monthly active users to become the fastest-
growing consumer internet application in history, is the ease
and speed with which it can generate text on demand. Ask
ChatGPT to "write a five-paragraph essay on the impact
John Stuart Mill's recantation of the wages fund doctrine had
on classical economics" or "explain what it means when
people say, 'knowledge is knowing that Frankenstein is not
the monster but wisdom is knowing that Frankenstein is the
monster'" or "write a sonnet about Clay Helton" and it
creates plausible, human-like text responses in a matter of
seconds.
Scholars, researchers, and educators have demonstrated that,
given the right prompts, ChatGPT can
• Pass the final exam of an MBA-level Operations Management class at Wharton
• (Barely) pass questions on a law school exam at the University of Minnesota
• Pass all three exams that comprise the United States Medical Licensing Examination
• Pass Google's coding exams and interviews at the level of an entry-level software engineer with no prior industry experience.
Are students already using ChatGPT for academic work?
Yes. Students are starting to use ChatGPT for their academic work. In a late-2022 nationwide survey of 1,000 students enrolled in US colleges and universities, nearly one-third said they had already used ChatGPT to complete a written college assignment, and nearly two-thirds of that group said they had used it for 50% or more of their assignments. We expect both percentages to be even higher today.
Are there other generative AI text tools like ChatGPT?
Yes. ChatGPT is just the first in a wave of generative AI tools that will soon become
ubiquitous. Generative AI is algorithms and tools
that can be used to create new content, including audio, code, images, text,
simulations, and videos. Recent new breakthroughs in the field have the potential
to drastically change the way we approach content creation. (McKinsey &
Company)
Most of these new tools rely on ‘large language models,’ AI systems that use advanced statistical
techniques to analyze and understand natural language data, such as text or speech, and generate
human-like responses.
Microsoft, already a major investor in OpenAI (makers of ChatGPT), launched a new AI-
powered Bing search engine and Edge browser in early February 2023 and intends to incorporate
AI-content generation into Microsoft Office programs like Outlook, Word, and PowerPoint.
Later in 2023, Microsoft will release technology that will allow companies, schools, and
governments to create their own custom ChatGPT-powered AI text generators. Google is
expected to launch its own ChatGPT competitor named Bard in early 2023. In the summer of
2022, Meta (formerly Facebook) released its Open Pretrained Transformer (OPT) large language
model to developers and researchers. And that is just the beginning (see
https://www.futuretools.io/ for a list of hundreds of new AI tools that can be used to generate text,
images, audio, and more).
Considering this, we suggest that rather than focusing solely on ChatGPT, instructors
should instead focus on what role they would like all generative AI tools to play in their
classes and in their students' work going forward.
How might instructors approach student use of generative AI?
We suggest that there are two ways instructors can approach student use of generative AI:
1. Embrace and Enhance
2. Discourage and Detect
Embrace and Enhance
The good news is that many of the proven teaching and assessment techniques that worked in a
pre-generative AI world still work in a world where any student with a cell phone and a
generative AI account can create walls of academic(-sounding) text.
USC's Center for Excellence in Teaching (CET) recently published a guide titled "Using AI text, image, and music-generating tools in your courses" that includes helpful ideas for incorporating AI generators and AI-generated content into your course, such as having students evaluate and critique AI-generated content (interrogating it for biases, for example) and write rebuttals. Like our recommendation that students should use generative AI to create, analyze, and
evaluate new concepts and ideas that inspire them to generate their own academic work, CET
recommends having your students use AI generators to brainstorm ideas, formulate and iterate
question prompts, and refine responses, adding that you should
Frame using AI tools as something to build upon. Remind students of the best way
to use these tools in their discipline, such as for idea generation, essentializing,
brainstorming, or gathering information about the typical understanding of a topic.
All uses of AI tools should be supplemented with appropriate evidentiary support
and reflection.
Some students may lack the foundational knowledge to understand why or how AI-generated content can be inaccurate. This presents an excellent opportunity for you to demonstrate generative AI's strengths and weaknesses to your students.
CET also offers several suggestions on designing assignments and assessments in the age of AI
generators, including
• Asking more nuanced questions, beyond simple definitions and common comparisons, related to the course text, articles, media, or activities that may be unknown to or beyond the capabilities of current AI generators,
• Having students complete assignments and assessments during class time,
• Requiring students to submit drafts of their papers or projects before they submit their finished work, and
• Augmenting written papers with additional oral presentations, concept maps, group work, or case studies so that students can further demonstrate their understanding of the course objectives.
If you are going to allow your students to use generative AI as a source in their academic work,
you may want to consider requiring your students to clearly disclose the role generative AI
played in formulating their work. OpenAI also offers recommended language that you may want to
adapt and adopt:
The author generated this text in part with GPT-3, OpenAI’s large-scale language-
generation model. Upon generating draft language, the author reviewed, edited,
and revised the language to their own liking and takes ultimate responsibility for
the content of this publication.
Discourage and Detect
Some educators have asked how they can block student use of ChatGPT (see New York City
Department of Education and others). We suggest that doing so would be akin to standing on the
shore hoping to block a rising tide. Generative AI is here and is not going away. That said, if you
wish to discourage student use of generative AI, let your students know this expectation
both in your syllabus and in class. The guidelines listed at the beginning of this document
should serve as a good starting point.
You may also consider adapting and adopting something like Science Journals' artificial
intelligence (AI) policy:
Text generated from AI, machine learning, or similar algorithmic tools cannot be
used in papers published in Science journals, nor can the accompanying figures,
images, or graphics be the products of such tools, without explicit permission
from the editors. In addition, an AI program cannot be an author of a Science
journal paper. A violation of this policy constitutes scientific misconduct.
Another approach is Nature Journals' large language model guidelines:
Large Language Models (LLMs), such as ChatGPT, do not currently satisfy our
authorship criteria. Notably an attribution of authorship carries with it
accountability for the work, which cannot be effectively applied to LLMs. Use of
an LLM should be properly documented in the Methods section (and if a Methods
section is not available, in a suitable alternative part) of the manuscript.
In class, you can discourage student use of generative AI by encouraging students to use other tools and techniques instead. In fact, many of the teaching and assessment techniques recommended by the CET (asking more nuanced questions, having students complete assignments and assessments during class, requiring students to submit drafts, and augmenting written papers with other activities that demonstrate students' content knowledge) work equally well whether you want to embrace and enhance or to discourage and detect student use of AI generators.
However, we do not consider requiring handwritten assignments to be an effective technique to
discourage or detect. Students who have academic accommodations may need to use assistive
technology in your class. Prohibiting student use of technology or requiring that all students
handwrite their work may create a situation that singles out students with accommodations if
they can use technology while others cannot.
As for detecting if students' typed academic work contains AI-generated text, the best way is to
honestly grade that work and look for errors. Some, if not most, contemporary AI-generated text
is specifically designed to appear plausible and persuasive but is not necessarily accurate.
OpenAI cautions that "ChatGPT sometimes writes plausible-sounding but incorrect or
nonsensical answers." Because of this, current generation AI text generators have the propensity
to make easily discernible, fundamental mistakes that a subject matter expert or even an
'experienced novice' would never make (see "CNET's Article-Writing AI Is Already Publishing
Very Dumb Errors", "Why Meta's latest large language model survived only three days online",
"ChatGPT Needs Some Help With Math Assignments," and "Alphabet shares dive after Google
AI chatbot Bard flubs answer in ad" for some recent examples). Even if students rewrite AI-
generated text to avoid detection, the structural and factual errors in current-generation AI-
generated text should remain. But those detectable weaknesses may not last forever, especially
with future generations of AI text generators.
Another option is to use a 'similarity detector' (often mistakenly called a 'plagiarism detector') like Turnitin. Turnitin, which is available in every USC Blackboard course, scans submitted text and highlights any phrases or paragraphs that are identical or closely similar to other sources known to Turnitin. Since some AI text generators have been known to copy from other sources without attribution (see "CNET's AI Journalist Appears to Have Committed Extensive Plagiarism"), tools like Turnitin may detect AI-generated text. Or not. Turnitin has announced that it will roll
out additional, built-in AI writing and ChatGPT detection features soon.
OpenAI, makers of ChatGPT, recently released their own AI Text Classifier tool at
https://platform.openai.com/ai-text-classifier that "predicts how likely it is that a piece of text
was generated by AI from a variety of sources." The tool currently "requires a minimum of 1,000
characters, which is approximately 150 - 250 words." Princeton student Edward Tian also
developed a popular tool named GPTZeroX at https://gptzero.me/ that "highlights portions of
text that are most likely to be AI generated" and allows you to batch upload files "in PDF, Word,
and .txt format."
One word of caution: Students may be able to circumvent these tools. As Melissa Heikkilä noted
in an article in the MIT Technology Review in December, "[b]ecause large language models
work by predicting the next word in a sentence, they are more likely to use common words like
'the,' 'it,' or 'is' instead of wonky, rare words." With this in mind, Michael Webb at the National Centre for AI in Tertiary Education found a simple technique to fool many ChatGPT detectors,
including GPTZeroX:
1. Have the AI text generator create some text.
2. Then ask the AI generator to "Use the word 'the' less."
We tried this technique and were able to 'fool' both OpenAI’s AI Text Classifier and Tian's
GPTZeroX into claiming that text copied straight from ChatGPT was most likely written by a
human and not AI.
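For instructors curious how trivially that rewording step can be automated, the sketch below scripts it in Python. This is an illustrative sketch only, not the committee's exact test: it assumes the openai Python package (version 1.x), an API key in the OPENAI_API_KEY environment variable, and an arbitrary ChatGPT-class model name; the essay prompt is likewise made up.

    # Illustrative sketch: scripting the two-step "use the word 'the' less" rewording.
    # Assumes the openai package (v1.x) is installed and OPENAI_API_KEY is set.
    from openai import OpenAI

    client = OpenAI()
    MODEL = "gpt-3.5-turbo"  # assumed model name; any ChatGPT-class model would do

    # Step 1: have the AI text generator create some text.
    messages = [{"role": "user", "content":
                 "Write a five-paragraph essay on the wages fund doctrine."}]
    draft = client.chat.completions.create(
        model=MODEL, messages=messages).choices[0].message.content

    # Step 2: ask the generator to revise its own output using the word 'the' less.
    messages += [{"role": "assistant", "content": draft},
                 {"role": "user", "content": "Use the word 'the' less."}]
    revised = client.chat.completions.create(
        model=MODEL, messages=messages).choices[0].message.content

    # Compare how a detector (OpenAI's classifier, GPTZeroX, etc.) scores each version.
    print(draft)
    print(revised)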
Conclusion
The age of AI-generated content is upon us. In a little over two months, OpenAI’s ChatGPT has
created a 100 million user market for AI-generated text content that did not exist at scale last fall.
And ChatGPT is just the first in a wave of generative AI tools that will soon become ubiquitous.
Because of that, rather than focusing solely on ChatGPT, we strongly recommend that instructors
instead focus on what role they would like all generative AI tools to play in their classes and in
their students' work going forward, either by embracing and enhancing or by discouraging and detecting students' use of this technology.
Additional Resources
USC Center for Excellence in Teaching
AI Generators in the News
EDUCAUSE QuickPoll Results
MIT Technology Review Artificial Intelligence
The Register Artificial Intelligence
National Centre for AI in Tertiary Education
Santa Fe Community College Library's Repository of Information about the Impact of
ChatGPT on/in Higher Education