How is AI Changing How We Write and Create?

We spoke with three English professors to find out

Three people talk while using laptop computers.

Since late last year, artificial intelligence platforms like ChatGPT have become a growing topic of conversation on college campuses, with students using the technology for everything from class assignments to essays.

These text-generating programs are trained on massive collections of text, which they draw on to produce human-like responses to prompts or questions from users. The rapid introduction of this technology and its relatively unknown potential have spawned both awe and apprehension.

So what are the possible positive and negative effects of these tools? How might they change how we think about writing, creativity, authenticity and teaching? What is the path forward?

To better understand these questions, we spoke with three scholars from our Department of English. Our faculty offer insights on the technology’s societal and ethical ramifications, how to deal with issues like plagiarism, and the importance of increasing AI literacy, among other topics.

What are the cognitive consequences of machine-generated writing?  

Chris Anson
Professor of English

Written communication serves myriad purposes and genres across thousands of different contexts. Much of it is created to handle routine tasks, such as describing a home for a real estate listing, sending an apology letter to a customer or providing accurate directions to a location.

In such cases, using AI-based natural language processing (NLP) tools to fulfill the task does little to challenge the cognitive processes of the human writer. The reason is that the writer is ordinarily not significantly changed by the writing task, especially when it uses boilerplate-like language — language that is often repeated with a few unique details inserted. Instead, these tools improve efficiency and free up time for the person to do higher-level, more cognitively sophisticated kinds of writing.

A user looks on as ChatGPT generates an answer to a prompt.

However, when people compose unique texts that require complex reasoning — the framing and support of arguments, and choices of structure, language and style — their composing process alternates between mental formulation and textual output. Writers test and evaluate their visual representation of thought on a page or screen as it emerges, discovering new ideas and subsequently revising the text. Writing can change the writer, opening up new perspectives and beliefs or revealing what there still is to learn.

Especially in educational settings, the reciprocity of writing and thinking is essential for intellectual development and higher-order reasoning. Asking an AI-based system to write an essay on a topic that the (human) writer has not yet explored significantly subverts the thinking and learning process. 

It is possible for a writer to auto-generate a text, evaluate whether it reflects the writer’s thoughts or intentions, and then revise the text as needed. But this process doesn’t work for many genres, such as explanations of scientific processes or historical accounts, because the writer is relying on the machine to provide information they don’t yet know.

Representing new knowledge in writing solidifies the knowledge through its (re)articulation, leading to stronger learning. In this sense, when used to replace human composing entirely, AI-based NLP systems threaten the integrity of our educational system and the future intellectual acumen of our students.

Because NLP systems increasingly will become part of our daily lives, educators need to find principled ways to integrate them into instruction. For example, when I asked ChatGPT to explain the cognitive consequences of machine-generated writing, it gave me an additional idea that I had not considered, though only after I had written my own statement. The software’s response? “Machine-generated writing may lead to a homogenization of writing style…”

As ChatGPT itself recommends, “it is important for people to remain aware of these potential effects and to use machine-generated writing in conjunction with, rather than as a replacement for, human-generated writing.” 

What are some ethical considerations of machine-generated content?

Huiling Ding
Professor of English

AI-generated content, be it text or artwork, introduces many ethical challenges related to authorship, copyright, creativity, plagiarism and labor practices. For instance, text-to-image AI generators like Midjourney and DALL-E 2 train their algorithms on images in the public domain and on images gathered from Google search, Pinterest, and other image-sharing and art-shopping platforms.

By supporting text-prompt-driven image creation, these AI generators then produce artwork that can imitate individual artists’ styles. In doing so, they compete with, if not displace, artists who have spent decades honing their craft.

AI-assisted writing faces similar challenges in terms of transparency, explainability, plagiarism and authorship attribution. Using online texts as training data, AI writers such as GPT-3 can generate original summaries and syntheses based on existing content. 

This image was generated through the DALL-E platform using the prompt: “A 3D rendering of a robot shaking hands with a student sitting on a stack of books.”

These AI tools disrupt traditional writing classes by speeding up and automating online research and the summary and synthesis of reference materials. Students can easily submit AI-generated content as their own written work without being caught by plagiarism-detection tools such as Turnitin.

In other words, natural language generation tools such as GPT-3 transform how we detect and define plagiarism. That, in turn, calls for new research and adaptation from writing instructors and scholars. 

Outside the classroom, professional writers and businesses use AI content generators to create preliminary ideas, generate quick summaries of online publications, write stories and engage with customers in chatbot conversations.

While famous artists such as Greg Rutkowski may feel that AI art generators infringe on their rights, other artists are using AI-generated art for inspiration. In the content generation marketplace, these AI tools can compete with writers and artists, or they can serve as human-augmenting tools that help writers and artists produce content more creatively, efficiently and collaboratively.

How do advancements in AI technologies affect how we teach writing?

Paul Fyfe
Associate Professor of English

At the start of this year, ChatGPT made many people nervous about the potential impact of AI on student writing. Even though generative AI is likely to affect many professions and domains, student writing has become a lightning rod for concerns about cheating, as if students could press a button and produce essays or completed homework. However, the commentary hasn’t necessarily reflected students’ experiences; the reality is more complicated.

For the past few semesters, I’ve given students assignments to “cheat” on their final papers with text-generating software. In doing so, the majority of students learn (often to their surprise) as much about the limits of these technologies as about their seemingly revolutionary potential. Some come away quite critical of AI, believing more firmly in their own voices. Others become curious about how to adapt these tools for different goals, or what professional or educational domains they could impact. Few, however, believe they can or should push a button to write an essay; none appreciate the assumption that they will cheat.

Grappling with the complexities of “cheating” also moves students beyond a focus on specific tools, which are changing stunningly fast, and toward a more generalized AI literacy. Frameworks for AI literacy are still being developed; mechanisms for teaching it are needed just as urgently.

Beyond ChatGPT, faculty and administrators must reckon with where AI literacy fits into their curricula, at levels from K-12 through higher ed. Experimenting with AI in the classroom can help faculty members learn alongside students what kinds of assignments and learning opportunities these tools might open, what critical perspectives should support them, and what guardrails we still need.  

Yet not every class can or should focus on these technologies, even if they’re likely to be affected by them. While it’s difficult to coordinate an institutional response to fast-moving technologies, generative AI seems significant enough to warrant such collective action. As AI becomes more common, so might an engaged, interdisciplinary AI literacy become a common aspect of students’ educations.