How I’m responding to generative A.I. (for now)
In May 2023, when ChatGPT was beginning to make headlines in the world of education, I gave a talk at my university’s biannual Academic Resources Conference about how I was responding to generative A.I. in my courses—specifically, an online asynchronous literature course, where there was no controlled environment for exams or other assessments. Since then, I’ve received a surprising number of emails from faculty and instructional designers, both inside and outside my institution, who found my presentation helpful. Because we are all still grappling with these technologies, I thought I’d share my spring 2023 experiments with redesigning assessments in a more public setting, for those who might find them useful. I’ll also share a little bit about my plans for this year, including how I plan to incorporate generative A.I. in my classroom while also promoting critical thinking and authentic writing.
A few caveats:
First, I am not an expert in generative A.I. I’ve done my best to keep up with changes and advancements in Large Language Models (LLMs), but this is not my field of expertise. There are many scholars who are better informed and have thought more rigorously about these issues than I have, and I hope to continue learning from them as these technologies evolve.
Second, some of the examples I will share may be dated. I began experimenting with generative A.I. in February 2023, before the release of GPT-4, Bard, and other tools that are now widely available. Just because something worked for my class in spring 2023 doesn’t mean it will work now.
Revising Online Assessments in Spring 2023
In spring 2023, I taught a general education course on Jane Austen and popular culture. The course focused on two novels by Jane Austen (Pride and Prejudice and Emma) and various adaptations of these novels, including several film adaptations, the web series The Lizzie Bennet Diaries, and Ibi Zoboi’s young-adult novel Pride. The course was online and asynchronous, which means that there were no set meeting times. Students completed work online, at their own pace, though they needed to meet weekly deadlines.
When the course began in January 2023, academics were beginning to raise concerns over ChatGPT 3 and academic integrity. Like many instructors, I was concerned that students would use these tools to cheat—to write essays and complete exams for them, much as one might pay someone to write an essay or take a test on their behalf. My class had four major assessments: a scene analysis essay in which students analyzed a scene from a film or web series episode; a midterm exam; a creative project in which students developed their own adaptation of an Austen novel; and a final exam.
I wanted to evaluate how ChatGPT responded to these
assessments, so I ran each of them through ChatGPT 3. For this blog post, I’ll focus
on the first two assessments: the scene analysis essay and the midterm exam.
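(A brief aside: I did this testing by pasting each prompt into the ChatGPT web interface by hand. For readers who would rather script this kind of check, here is a rough sketch of how one might batch-run prompts through OpenAI’s Python client. The model name, file names, and prompt texts below are placeholders, not my actual assignments.)

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Placeholder prompts; substitute your actual assignment instructions.
assessments = {
    "scene_analysis_essay": "Write a short essay analyzing a scene from ...",
    "midterm_prompt": "In Pride and Prejudice, Austen presents her readers ...",
}

for name, prompt in assessments.items():
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # stand-in; use whatever model you want to test
        messages=[{"role": "user", "content": prompt}],
    )
    # Save each response so it can be evaluated against the learning goals.
    with open(f"{name}_response.txt", "w", encoding="utf-8") as f:
        f.write(response.choices[0].message.content)
```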
Assessment 1: Scene Analysis Essay
As I explained above, my first assessment for the course was a short essay analyzing a scene from a film or web series episode. As a teacher, I had three learning goals for this assignment. First, I wanted students to develop and support an interpretive claim (i.e., a thesis). Second, I wanted them to engage in textual analysis (what those of us in literary studies call close reading). Third, I wanted students to engage in the writing process—to brainstorm ideas, create an initial draft, and spend time revising and editing it. In sum, I was looking for a polished essay that developed an interpretive claim about the scene, supported with well-chosen quotations and details from the film or web series.
When I ran my instructions through ChatGPT 3, I got a polished and reasonably well-organized essay that presented observations about the scene. However, the essay failed to present an interpretive claim or to support its observations with specific quotations or details from the film or web series. I concluded that it wasn’t necessary to re-design this assignment, since ChatGPT wasn’t replicating the kind of analysis I was looking for.
Assessment 2: Midterm Exam
The second assessment I evaluated was the midterm exam. When I had taught the class before, I gave students five prompts, each about a different novel, film, or web series; students chose two of these prompts and wrote essays responding to them. Since this was an online asynchronous class, the prompts were available for three days, and students were encouraged to spend 2-3 hours writing their essays, though they could of course spend longer. Here are two of the original prompts:
- In Pride and Prejudice, Austen presents her readers with a range of marriages, some happier than others. Based on these relationships, what does Austen seem to believe is essential for a happy marriage?
- When The Lizzie Bennet Diaries was released in 2012, some critics saw the series as a feminist revision of Pride and Prejudice that centers on women’s relationships with each other instead of their romantic relationships with men. Do you agree with this assessment of the series? Why or why not?
My learning goals for the midterm exam were different from my learning goals for the scene analysis essay. Because I only expected students to spend a couple of hours on these essays and didn’t expect students to re-watch the films or web series, I was not looking for close textual analysis. Instead, I wanted students to demonstrate that they had read and viewed the assigned texts. I also wanted students to show they could explain how a theme is developed in the assigned texts. In short, I was looking for somewhat polished essays that explained how a theme is developed in the primary text, supported by general examples from the text.
When I ran my prompts through ChatGPT 3, I got polished
essays that explained how a theme is developed in the primary text, supported by
general examples from the text. It was clear I needed to re-think the
assessment, as ChatGPT 3 could replicate the kind of analysis that I was
looking for in my students’ essays.
After some experimentation, I determined that ChatGPT 3 could generate essays about a single text, but it couldn’t generate essays that compare texts. When I asked it to compare two texts, it produced a list of general observations that would have needed further development and support to meet my learning goals for the essay.
Given this discovery, I redesigned my exam to be based on an
open-ended prompt that asked students to compare the two texts. Here is the
revised prompt:
Select a major or minor character from Pride and Prejudice and compare how he/she is developed in at least two of the assigned texts for this semester (Austen’s Pride and Prejudice, Zoboi’s Pride, the films Pride and Prejudice (2005) and Bride and Prejudice, and the web series The Lizzie Bennet Diaries).
Write an essay responding to the questions below:
- How does each text initially present the character?
- How, if at all, does the character change over the course of each story?
- What underlying themes does the character’s evolution (or lack of evolution) contribute to? How so? (Remember that some adaptations may have themes that aren’t present in the original novel.)
Remember to support your points with well-chosen quotations, examples, and/or details from the novels, films, or web series. Keep in mind that you can discuss elements like performance, camera shots and angles, editing, set design, costuming, and music as well as direct quotations.
I also added the following language about generative A.I. to the exam:
I expect you to write your essay
without assistance from generative AI like ChatGPT. I may run your essay
through a website that detects generative AI. If your essay scores high on the
detector, I may ask you to re-take the exam or schedule a supplementary oral
exam.
I had a couple of goals for this language. Given the
questionable ethics of A.I. checkers, I wanted to notify students that I might
run their writing through an A.I. checker. Also, since A.I. checkers are not
reliable, I wanted to provide a tentative course of action for assignments I
suspected of being written by A.I. In other words, I didn’t want to rely on
A.I. checkers.
I was pleasantly surprised by the results of my experiment. I didn’t end up using my A.I. policy, and the essays I received were more creative and engaged than the essays I’d received in earlier classes. By crafting an essay prompt that ChatGPT 3 couldn’t respond to, I’d also inadvertently crafted an essay prompt that encouraged my students to think more deeply and creatively.
Thinking about Fall 2023
As I plan for the coming year, I’m not just thinking about how I can prevent cheating in my classes; I’m also thinking about how I can encourage students to think critically about generative A.I. and help students use A.I. in ethical ways.
Here are a few examples broken down by class.
First-Year Writing
As the Assistant Director of the Writing Program at my
institution, I assist with curriculum development and graduate teaching assistant
(GTA) training. Here’s what I’ve done so far:
- Created Generative A.I. Guidelines for major writing assignments. These guidelines explain acceptable uses of generative A.I. (brainstorming topics, organizing ideas, copy-editing writing, etc.) and unacceptable uses (generating significant portions of an assignment’s content) for each assignment.
- Provided a resource for disclosing the use of A.I. in writing. As part of teaching ethical uses of A.I., our library has created a resource to help students disclose when and how they are using A.I. in their writing.
I’m also actively encouraging our GTAs and instructors to
talk with their students about generative A.I., including its strengths and
weaknesses as a tool to help with writing.
This fall, I’ll be heading up a team of instructors to
re-think our first-year writing curriculum. Here are some tentative ideas:
- Leaning into multimodality. Generative A.I. programs are better at reading and discussing linguistic modes (writing, transcripts of videos, etc.) than visual, aural, and gestural modes. By encouraging students to analyze and compose in multiple modes, we can reduce their reliance on generative A.I.
- Giving students time to write. Students are more likely to behave unethically when they are pressed for time. By scaffolding major writing assignments and giving students time to write in class, we can help students stay on track with their writing without relying on generative A.I.
- Encouraging metacognition. As part of emphasizing the writing process, I’m hoping to create more opportunities for students to reflect on their writing and their writing processes, including how they are or are not using generative A.I.
- Encouraging students to critique generative A.I. We’re hoping to design activities that make students aware of the limitations of generative A.I., such as its tendency to hallucinate sources.
- Teaching students to use generative A.I. in ethical ways. We’re also hoping to design activities that encourage students to collaborate with generative A.I. in ethical ways, such as using it to brainstorm ideas, identify counter-arguments, and organize ideas.
Literature (Online)
This fall, I’m teaching an online upper-division literature
course on fiction. Here are some changes I’m making for the course.
- Replacing some discussion boards with annotation activities. After experimenting with ChatGPT 3.5, I’ve discovered that it can perform close readings of literature. As a result, I’m replacing some of my discussion boards with annotation activities using Perusall, a free online program. Students will work in groups to annotate assigned stories and then complete a short reflection on what they took away from the activities.
- Requiring students to submit annotations of specific passages with literary analysis papers. Building on the annotation activities, I’m also asking students to submit annotations of specific passages with their literary analysis papers, and I’m including the annotations in their grade for the papers.
- Having students annotate scholarly articles. I’m also using Perusall to have students annotate scholarly articles about the assigned works before writing their research papers.
Advanced Composition (In-Person)
I’m also teaching an in-person advanced composition course that’s aimed at educators and future educators. The first unit of the course is organized around a literacy narrative in which students reflect on their histories as writers. I’ve redesigned this unit to include discussions about A.I. and the writing process, including the following activities:
- Analyzing A.I.-generated writing. As part of the first unit, I’m planning to ask the class to compare A.I.-generated literacy narratives to classic literacy narratives, such as Amy Tan’s “Mother Tongue.”
- Incorporating generative A.I. into the writing process. As part of the first unit, I’m also hoping to discuss ChatGPT 3.5’s advice for writing literacy narratives and encourage students to experiment with generative A.I. while writing their own.
- Reflecting on using generative A.I. At the end of the first unit, students also compose a writing process reflection. This semester, I’m planning to ask them to reflect on how they used generative A.I.
Final Thoughts
As I mentioned in my initial caveats, I am by no means an
expert on generative A.I. or teaching with generative A.I. This blog post
merely provides some examples of how one teacher is responding to this emerging technology.
Carrie Dickison is an Associate Teaching Professor and the Assistant Director of the Writing Program at Wichita State University.